Experimental model updating using frequency response functions
Hong, Yu; Liu, Xi; Dong, Xinjun; Wang, Yang; Pu, Qianhui
2016-04-01
In order to obtain a finite element (FE) model that more accurately describes structural behavior, experimental data measured from the actual structure can be used to update the FE model. This process is known as FE model updating. In this paper, a frequency response function (FRF)-based model updating approach is presented. The approach attempts to minimize the difference between analytical and experimental FRFs, where the experimental FRFs are calculated from simultaneously measured dynamic excitation and the corresponding structural responses. In this study, the FRF-based model updating method is validated through laboratory experiments on a four-story shear-frame structure. To obtain the experimental FRFs, shake table tests and impact hammer tests are performed. The FRF-based model updating method is shown to successfully update the stiffness, mass and damping parameters of the four-story structure, so that the analytical and experimental FRFs match each other well.
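A minimal sketch of the idea this abstract describes — adjusting a model parameter until the analytical FRF matches the measured one — on a single-degree-of-freedom system. All values and the grid-search strategy are illustrative assumptions, not the paper's four-story implementation:

```python
import math

def frf_magnitude(k, m, c, omega):
    """|H(w)| of a 1-DOF system: H = 1 / (k - m*w^2 + i*c*w)."""
    re = k - m * omega ** 2
    im = c * omega
    return 1.0 / math.hypot(re, im)

def update_stiffness(m, c, omegas, measured, k_grid):
    """Pick the stiffness minimising the summed squared FRF error."""
    def err(k):
        return sum((frf_magnitude(k, m, c, w) - h) ** 2
                   for w, h in zip(omegas, measured))
    return min(k_grid, key=err)

# Synthetic "experiment": true k = 1200 N/m, initial model guesses 1000 N/m.
m, c, k_true = 1.0, 2.0, 1200.0
omegas = [5.0 * i for i in range(1, 15)]
measured = [frf_magnitude(k_true, m, c, w) for w in omegas]
k_updated = update_stiffness(m, c, omegas, measured,
                             [1000.0 + 10.0 * j for j in range(51)])
print(k_updated)  # recovers 1200.0
```

In practice the residual is formed over many measured FRF points and minimised with a gradient-based or evolutionary optimiser rather than a grid, but the objective has the same shape.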
Adjustment or updating of models
Indian Academy of Sciences (India)
D J Ewins
2000-06-01
In this paper, first a review of the terminology used in model adjustment or updating is presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and of the current state of the art in this important area of optimum design technology.
Stochastic model updating using distance discrimination analysis
Institute of Scientific and Technical Information of China (English)
Deng Zhongmin; Bi Sifeng; Sez Atamturktur
2014-01-01
This manuscript presents a stochastic model updating method that takes both uncertainties in models and variability in testing into account. The updated finite element (FE) models obtained through the proposed technique can aid in the analysis and design of structural systems. The authors developed a stochastic model updating method integrating distance discrimination analysis (DDA) and an advanced Monte Carlo (MC) technique to (1) enable more efficient MC sampling by using a response surface model, (2) calibrate parameters with an iterative test-analysis correlation based upon DDA, and (3) utilize and compare different distance functions as correlation metrics. Using DDA, the influence of the distance function on model updating results is analyzed. The proposed stochastic method makes it possible to obtain a precise model updating outcome at acceptable computational cost. The method is demonstrated on a helicopter case study updated using both Euclidean and Mahalanobis distance metrics. It is observed that the selected distance function influences the iterative calibration process and thus the calibration outcome, indicating that an integration of different metrics might yield improved results.
Calculation and Updating of Reliability Parameters in Probabilistic Safety Assessment
Zubair, Muhammad; Zhang, Zhijian; Khan, Salah Ud Din
2011-02-01
The internal events of a nuclear power plant are complex and include equipment maintenance, equipment damage, etc. These events affect the probability of the current risk level of the system as well as the reliability parameter values of the equipment, so such events serve as an important basis for systematic analysis and calculation. This paper presents a method for calculating reliability parameters and updating them. The method is based on the binomial likelihood function and its conjugate beta distribution; Bayes' theorem is used to update the parameters. To implement the proposed method, a computer-based program was designed that helps estimate reliability parameters.
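The beta-binomial conjugate update the abstract describes has a closed form: a Beta(a, b) prior combined with f failures in n demands gives a Beta(a + f, b + n − f) posterior. A small sketch; the prior and the failure counts below are illustrative assumptions, not figures from the paper:

```python
def beta_update(alpha, beta, failures, demands):
    """Conjugate update: Beta(a, b) prior with a Binomial(failures | demands, p) likelihood."""
    return alpha + failures, beta + demands - failures

def beta_mean(alpha, beta):
    """Posterior mean of the failure probability."""
    return alpha / (alpha + beta)

# Hypothetical prior and evidence: Jeffreys prior, then 2 failures in 100 demands.
a0, b0 = 0.5, 0.5
a1, b1 = beta_update(a0, b0, failures=2, demands=100)
print(round(beta_mean(a1, b1), 4))  # → 0.0248
```

The same two lines can be applied repeatedly as new operating experience arrives, which is what makes the conjugate pair attractive for living PSA updates.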
Model validation: Correlation for updating
Indian Academy of Sciences (India)
D J Ewins
2000-06-01
In this paper, a review is presented of the various methods which are available for the purpose of performing a systematic comparison and correlation between two sets of vibration data. In the present case, the application of interest is in conducting this correlation process as a prelude to model correlation or updating activity.
CTL Model Update for System Modifications
Ding, Yulin; Zhang, Yan; DOI: 10.1613/jair.2420
2011-01-01
Model checking is a promising technology, which has been applied for verification of many hardware and software systems. In this paper, we introduce the concept of model update towards the development of an automatic system modification tool that extends model checking functions. We define primitive update operations on the models of Computation Tree Logic (CTL) and formalize the principle of minimal change for CTL model update. These primitive update operations, together with the underlying minimal change principle, serve as the foundation for CTL model update. Essential semantic and computational characterizations are provided for our CTL model update approach. We then describe a formal algorithm that implements this approach. We also illustrate two case studies of CTL model updates for the well-known microwave oven example and the Andrew File System 1, from which we further propose a method to optimize the update results in complex system modifications.
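A toy illustration of the kind of primitive updates on a Kripke-style model the abstract mentions, with a symmetric-difference count as a stand-in minimal-change measure. The operations and the distance here are simplifications for illustration, not the paper's exact formalization:

```python
# Kripke model: states S, transition relation R, and a labelling function L.
def make_model():
    return {
        "S": {"s0", "s1"},
        "R": {("s0", "s1"), ("s1", "s1")},
        "L": {"s0": {"start"}, "s1": {"heat"}},
    }

def add_transition(M, s, t):
    """One primitive update: extend the transition relation."""
    M["R"] = M["R"] | {(s, t)}
    return M

def change_label(M, s, props):
    """Another primitive update: relabel a single state."""
    M["L"] = {**M["L"], s: set(props)}
    return M

def edit_distance(M, M2):
    """Symmetric-difference size, used here as a crude minimal-change measure."""
    d = len(M["R"] ^ M2["R"])
    d += sum(len(M["L"].get(s, set()) ^ M2["L"].get(s, set()))
             for s in M["S"] | M2["S"])
    return d

M = make_model()
M2 = change_label(add_transition(make_model(), "s1", "s0"), "s1", {"heat", "close"})
print(edit_distance(M, M2))  # 2: one new transition + one added proposition
```

A model updater would search over sequences of such primitive operations for a result that satisfies the desired CTL formula while minimising this kind of distance.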
Model Updating Nonlinear System Identification Toolbox Project
National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...
Update of the SPS Impedance Model
Salvant, B; Zannini, C; Arduini, G; Berrig, O; Caspers, F; Grudiev, A; Métral, E; Rumolo, G; Shaposhnikova, E; Zotter, B; Migliorati, M; Spataro, B
2010-01-01
The beam coupling impedance of the CERN SPS is expected to be one of the limitations to an intensity upgrade of the LHC complex. In order to be able to reduce the SPS impedance, its main contributors need to be identified. An impedance model for the SPS has been gathered from theoretical calculations, electromagnetic simulations and bench measurements of single SPS elements. The current model accounts for the longitudinal and transverse impedance of the kickers, the horizontal and vertical electrostatic beam position monitors, the RF cavities and the 6.7 km beam pipe. In order to assess the validity of this model, macroparticle simulations of a bunch interacting with this updated SPS impedance model are compared to measurements performed with the SPS beam.
Model updating of nonlinear structures from measured FRFs
Canbaloğlu, Güvenç; Özgüven, H. Nevzat
2016-12-01
There are always certain discrepancies between the modal and response data of a structure obtained from its mathematical model and those measured experimentally. Therefore it is general practice to update the theoretical model by using experimental measurements in order to have a more accurate model. Most of the model updating methods used in structural dynamics are for linear systems. However, in real-life applications most structures have nonlinearities, which prevent us from applying model updating techniques developed for linear structures, unless the structures operate in the linear range. Well-established frequency response function (FRF) based model updating methods could easily be extended to a nonlinear system if the FRFs of the underlying linear system (linear FRFs) could be experimentally measured. When a frictional type of nonlinearity coexists with other types of nonlinearities, it is not possible to obtain linear FRFs experimentally by using low-level forcing. In this study a method, named the Pseudo Receptance Difference (PRD) method, is presented to obtain the linear FRFs of a nonlinear structure having multiple nonlinearities, including friction. The PRD method calculates the linear FRFs of a nonlinear structure by using FRFs measured at various forcing levels, and simultaneously identifies all nonlinearities in the system. Then, any model updating method can be used to update the linear part of the mathematical model. In the present work, the PRD method is used to predict the linear FRFs from measured nonlinear FRFs, and the inverse eigensensitivity method is employed to update the linear finite element (FE) model of the nonlinear structure. The proposed method is validated with different case studies using a nonlinear lumped single-degree-of-freedom system, as well as a continuous system. Finally, a real nonlinear T-beam test structure is used to show the application and the accuracy of the proposed method.
Perna, L.; Pezzopane, M.; Pietrella, M.; Zolesi, B.; Cander, L. R.
2017-09-01
The SIRM model proposed by Zolesi et al. (1993, 1996) is an ionospheric regional model for predicting the vertical-sounding characteristics that has been frequently used in developing ionospheric web prediction services (Zolesi and Cander, 2014). Recently the model and its outputs were implemented in the framework of two European projects: DIAS (DIgital upper Atmosphere Server; http://www.iono.noa.gr/DIAS/) (Belehaki et al., 2005, 2015) and ESPAS (Near-Earth Space Data Infrastructure for e-Science; http://www.espas-fp7.eu/) (Belehaki et al., 2016). In this paper an updated version of the SIRM model, called SIRMPol, is described, and its outputs in terms of the F2-layer critical frequency (foF2) are compared with values recorded at the mid-latitude station of Rome (41.8°N, 12.5°E) for extremely high (year 1958) and low (years 2008 and 2009) solar activity. The main novelties introduced in the SIRMPol model are: (1) an extension of the Rome ionosonde input dataset, which, besides data from 1957 to 1987, also includes data from 1988 to 2007; (2) the use of second-order polynomial regressions, instead of linear ones, to fit the relation of foF2 vs. the solar activity index R12; (3) the use of polynomial relations, instead of linear ones, to fit the relations A0 vs. R12, An vs. R12 and Yn vs. R12, where A0, An and Yn are the coefficients of the Fourier analysis performed by the SIRM model to reproduce the values calculated using the relations in (2). The obtained results show that the SIRMPol outputs are better than those of the SIRM model. As the SIRMPol model represents only a partial update of the SIRM model, based on inputs from only Rome ionosonde data, it can be considered a particular case of a single-station model. Nevertheless, the development of the SIRMPol model yielded some useful guidelines for a future complete and more accurate update of the SIRM model, from which both DIAS and ESPAS could benefit.
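The second-order polynomial regression of foF2 against R12 mentioned in novelty (2) amounts to a least-squares quadratic fit, which can be sketched with the normal equations. The sample values below are hypothetical, not ionosonde data:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a0 + a1*x + a2*x^2 via the normal equations."""
    S = lambda p: sum(x ** p for x in xs)
    T = lambda p: sum(y * x ** p for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    return solve3(A, [T(0), T(1), T(2)])

# Hypothetical foF2-vs-R12 samples lying on an exact quadratic.
r12 = [10, 40, 70, 100, 130, 160]
fof2 = [4.0 + 0.05 * r - 0.0001 * r * r for r in r12]
a0, a1, a2 = fit_quadratic(r12, fof2)
print(round(a0, 3), round(a1, 4), round(a2, 6))  # ≈ 4.0 0.05 -0.0001
```

The same fit applied per Fourier coefficient (A0, An, Yn vs. R12) gives the polynomial relations described in novelty (3).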
Dynamic Model Updating Using Virtual Antiresonances
Directory of Open Access Journals (Sweden)
Walter D’Ambrogio
2004-01-01
This paper considers an extension of the model updating method that minimizes the antiresonance error, besides the natural frequency error. By defining virtual antiresonances, this extension allows the use of previously identified modal data. Virtual antiresonances can be evaluated from a truncated modal expansion, and do not correspond to any physical system. The method is applied to the finite element model updating of the GARTEUR benchmark, used within a European project on updating. Results are compared with those previously obtained by estimating actual antiresonances after computing low- and high-frequency residuals, and with results obtained by using the correlation (MAC) between identified and analytical mode shapes.
Model Updating Nonlinear System Identification Toolbox Project
National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...
MARMOT update for oxide fuel modeling
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Chao [Idaho National Lab. (INL), Idaho Falls, ID (United States); Aagesen, Larry [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ahmed, Karim [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Tonks, Michael [Pennsylvania State Univ., University Park, PA (United States); Millett, Paul [Univ. of Arkansas, Fayetteville, AR (United States)
2016-09-01
This report summarizes the lower-length-scale research and development progress in FY16 at Idaho National Laboratory in developing mechanistic materials models for oxide fuels, in parallel to the development of the MARMOT code, which will be summarized in a separate report. This effort is a critical component of the microstructure-based fuel performance modeling approach supported by the Fuels Product Line in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. The progress can be classified into three categories: 1) development of materials models to be used in engineering-scale fuel performance modeling regarding the effect of lattice defects on thermal conductivity, 2) development of modeling capabilities for mesoscale fuel behaviors including stage-3 gas release, grain growth, high burn-up structure, fracture and creep, and 3) improved understanding of the materials science by calculating the anisotropic grain boundary energies in UO$_2$ and obtaining thermodynamic data for solid fission products. Many of these topics are still under active development; they are updated in the report in appropriate detail. For some topics, separate reports are generated in parallel, as stated in the text. These accomplishments have led to a better understanding of fuel behaviors and an enhanced capability of the MOOSE-BISON-MARMOT toolkit.
Neuman systems model in Holland: an update.
Merks, André; Verberk, Frans; de Kuiper, Marlou; Lowry, Lois W
2012-10-01
The authors of this column, leading members of the International Neuman Systems Model Association, provide an update on the use of the Neuman systems model in Holland and document the various changes in The Netherlands that have influenced the use of the model in that country. The model's links to systems theory and stress theory are discussed, as well as a shift toward greater emphasis on patient self-management. The model is also linked to healthcare quality improvement and interprofessional collaboration in Holland.
High-speed AMB machining spindle model updating and model validation
Wroblewski, Adam C.; Sawicki, Jerzy T.; Pesch, Alexander H.
2011-04-01
High-Speed Machining (HSM) spindles equipped with Active Magnetic Bearings (AMBs) have been envisioned to be capable of automated self-identification and self-optimization in efforts to accurately calculate parameters for stable high-speed machining operation. With this in mind, this work presents rotor model development accompanied by an automated model-updating methodology, followed by updated model validation. The model updating methodology is developed to address the dynamic inaccuracies of the nominal open-loop plant model when compared with experimental open-loop transfer function data obtained by the built-in AMB sensors. The nominal open-loop model is altered by utilizing an unconstrained optimization algorithm to adjust only parameters that are a result of engineering assumptions and simplifications, in this case the Young's modulus of selected finite elements. Minimizing the error of both resonance and antiresonance frequencies simultaneously (between model and experimental data) takes into account rotor natural frequencies and mode shape information. To verify the predictive ability of the updated rotor model, its performance is assessed at the tool location, which is independent of the experimental transfer function data used in the model updating procedures. Verification of the updated model is carried out with complementary temporal and spatial response comparisons, substantiating that the updating methodology is effective for the derivation of open-loop models for predictive use.
A Provenance Tracking Model for Data Updates
Directory of Open Access Journals (Sweden)
Gabriel Ciobanu
2012-08-01
For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.
Finite Element Model Updating Using Response Surface Method
Marwala, Tshilidzi
2007-01-01
This paper proposes the response surface method for finite element model updating. The response surface method is implemented by approximating the finite element model's response surface equation with a multi-layer perceptron. The updated parameters of the finite element model were calculated using a genetic algorithm to optimize the response surface equation. The proposed method was compared to existing methods that use simulated annealing or a genetic algorithm together with a full finite element model for finite element model updating. The proposed method was tested on an unsymmetrical H-shaped structure. It was observed that the proposed method gave updated natural frequencies and mode shapes that were of the same order of accuracy as those given by simulated annealing and the genetic algorithm. Furthermore, it was observed that the response surface method achieved these results at a computational speed more than 2.5 times as fast as the genetic algorithm with a full finite element model and 24 ti...
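The surrogate-based updating pattern in this abstract can be sketched compactly. For brevity this sketch swaps the paper's multi-layer perceptron and genetic algorithm for a quadratic interpolant and a grid search, but it keeps the structure: fit a cheap response surface to a few expensive FE runs, then optimise the surrogate instead of the FE model. All values are hypothetical:

```python
import math

def fe_frequency(k, m=2.0):
    """Stand-in for an expensive FE run: 1-DOF natural frequency in Hz."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def quad_surrogate(samples):
    """Quadratic response surface through three (k, f) samples (Lagrange form)."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

f_target = 3.5  # "measured" frequency to match
samples = [(k, fe_frequency(k)) for k in (500.0, 1000.0, 1500.0)]  # 3 FE runs only
surrogate = quad_surrogate(samples)

# Cheap search on the surrogate instead of on the FE model itself.
k_grid = [500.0 + i for i in range(1001)]
k_best = min(k_grid, key=lambda k: abs(surrogate(k) - f_target))
print(round(fe_frequency(k_best), 2))  # close to the 3.5 Hz target
```

The saving comes from the sample budget: three expensive evaluations build the surrogate, after which the optimiser can query it thousands of times for free.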
Chuang, Yao-Yuan
2007-08-01
Variational transition state theory with multidimensional tunneling (VTST/MT) has been used for calculating reaction rate constants. Updated Hessians have been used to reduce the computational cost of both geometry optimization and trajectory-following procedures. In this paper, updated Hessians are used to reduce the computational cost of calculating rate constants with VTST/MT. Although we found that directly applying the updated Hessians does not generate good vibrational frequencies along the minimum energy path (MEP), we can either re-compute the full Hessian matrices at fixed intervals or calculate Block Hessians, constructed by numerical one-sided differences for the Hessian elements in the "critical" region and the Bofill updating scheme for the rest of the Hessian elements. Due to the numerical instability of the Bofill update method near the saddle point region, we suggest a simple strategy: follow the MEP with full Hessians computed until a certain percentage of the classical barrier height below the barrier top, and then perform the rate constant calculation on the extended MEP using Block Hessians. This strategy results in a mean unsigned percentage deviation (MUPD) of around 10% when full Hessians are computed down to the point at 80% of the classical barrier height for the four studied reactions. The proposed strategy is attractive not only because it can be implemented as an automatic procedure, but also because it speeds up the VTST/MT calculation through embarrassingly parallel execution on a personal computer cluster.
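The Bofill update named in the abstract mixes the SR1 and PSB formulas with a factor φ = ((y − Hs)ᵀs)² / (‖y − Hs‖²‖s‖²). Since both components satisfy the secant condition H⁺s = y, so does their mixture, which gives a simple correctness check. A small pure-Python sketch with toy 2×2 values (not from the paper); the degenerate cases rs = 0 or r = 0 are not guarded here:

```python
def bofill_update(H, s, y):
    """Bofill Hessian update: phi*SR1 + (1-phi)*PSB, both secant-satisfying.

    H: current n x n Hessian (list of lists); s: step; y: gradient difference.
    """
    n = len(s)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    Hs = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Hs[i] for i in range(n)]           # residual y - H s
    rs, ss, rr = dot(r, s), dot(s, s), dot(r, r)
    phi = rs * rs / (rr * ss)                      # Bofill mixing factor in [0, 1]
    Hn = [row[:] for row in H]
    for i in range(n):
        for j in range(n):
            sr1 = r[i] * r[j] / rs
            psb = (r[i] * s[j] + s[i] * r[j]) / ss - rs * s[i] * s[j] / (ss * ss)
            Hn[i][j] += phi * sr1 + (1.0 - phi) * psb
    return Hn

# Toy 2x2 example with a hypothetical step and gradient change.
H = [[2.0, 0.0], [0.0, 1.0]]
s = [0.1, -0.2]
y = [0.25, -0.15]
Hn = bofill_update(H, s, y)
# Secant condition: the updated Hessian maps the step onto the gradient change.
check = [sum(Hn[i][j] * s[j] for j in range(2)) for i in range(2)]
print(check)  # ≈ [0.25, -0.15]
```

The instability the abstract mentions arises when rs ≈ 0 (the SR1 denominator), which is exactly why the authors fall back to full or Block Hessians near the saddle point.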
An Updated AP2 Beamline TURTLE Model
Energy Technology Data Exchange (ETDEWEB)
Gormley, M.; O'Day, S.
1991-08-23
This note describes a TURTLE model of the AP2 beamline. The model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements reflecting recent element and magnet-setting changes. The magnet characteristic measurements and survey data compiled to update the model are presented. A printout of the actual TURTLE deck may be found in Appendix A.
A Successive Selection Method for finite element model updating
Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo
2016-03-01
A Finite Element (FE) model can be updated effectively and efficiently by using the Response Surface Method (RSM). However, this often involves performance trade-offs, such as high computational cost for better accuracy or loss of efficiency when many design parameters are updated. This paper proposes a Successive Selection Method (SSM), which is based on the linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function into a number of linear equations to adjust the Design of Experiment (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the DOE points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for the DOE in detail, and analyzes SSM's high efficiency and accuracy in FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than the traditional RSM.
OSPREY Model Development Status Update
Energy Technology Data Exchange (ETDEWEB)
Veronica J Rutledge
2014-04-01
During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between the Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant team including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. Georgia Institute of Technology is developing a fundamental-level model to describe the equilibrium and kinetics of the adsorption process, which is to be integrated with OSPREY. This report discusses the progress made on expanding OSPREY to multiple components and on the integration of macroscale and microscale models. Also included in this report is a brief OSPREY user guide.
An updated pH calculation tool for new challenges
Energy Technology Data Exchange (ETDEWEB)
Crolet, J.L. [Consultant, 36 Chemin Mirassou, 64140 Lons (France)
2004-07-01
The time evolution of the in-situ pH concept is summarised, as well as the past and present challenges of pH calculations. Since the beginning of such calculations on spreadsheets, the tremendous progress in computer technology has progressively removed all the past limitations. On the other hand, the development of artificial acetate buffering in standardized and non-standardized corrosion testing has raised quite a few new questions. In particular, a straightforward precautionary principle now requires limiting artificial conditions to situations where they are really necessary and, consequently, seriously considering periodic pH readjustment as an alternative to useless or excessive artificial buffering, including in the case of over-acidification at ambient pressure through HCl addition only (e.g. SSC testing of martensitic stainless steels). These new challenges require a genuine 'pH engineering' for the design of corrosion testing protocols under CO2 and H2S partial pressures, at ambient pressure or in an autoclave. To this end, not only must a great many detailed pH data be automatically delivered to unskilled users, but this must be done in an experimental context which is most often new and much more complicated than before: e.g. pH adjustment of artificial buffers before saturation in the test gas and further pH evolution under acid gas pressure (pH shift before the test begins), and anticipation of the pH readjustment frequency from just a volume/surface ratio and an expected corrosion rate (pH drift during the test). Furthermore, in order to be really useful and reliable, such numerous pH data also have to be well understood. Therefore, their origin, significance and parametric sensitivity are backed up and explained through three self-explanatory graphical illustrations: 1. an 'anion - pH' nomogram shows the pH dependence of all the variable ions, H+, HCO3-, HS-, Ac- (and
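An in-situ pH calculation of the kind this abstract describes reduces to a charge-balance root solve. A minimal sketch for water under CO2 pressure only; the equilibrium constants are common 25 °C textbook values, and the acetate and H2S terms discussed in the paper are omitted for brevity:

```python
# 25 C equilibrium constants (textbook values; an illustrative assumption).
KW, KA1, KA2, KH = 1.0e-14, 4.45e-7, 4.69e-11, 0.034  # KH in mol/(L*bar)

def charge_balance(h, p_co2):
    """[H+] - [OH-] - [HCO3-] - 2[CO3--] for water under CO2 pressure p_co2."""
    co2 = KH * p_co2                 # dissolved CO2 via Henry's law
    hco3 = KA1 * co2 / h
    co3 = KA2 * hco3 / h
    return h - KW / h - hco3 - 2.0 * co3

def ph_under_co2(p_co2, lo=0.0, hi=14.0):
    """Bisection on pH until the charge balance closes."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if charge_balance(10.0 ** -mid, p_co2) > 0.0:
            lo = mid                  # net positive charge -> solution pH is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(ph_under_co2(1.0), 2))   # ≈ 3.91 for 1 bar CO2
```

The 'pH engineering' the paper argues for adds further terms to the same balance (HS-, Ac-, dissolved salts), but the solution structure is unchanged.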
Bond-updating mechanism in cluster Monte Carlo calculations
Heringa, J. R.; Blöte, H. W. J.
1994-03-01
We study a cluster Monte Carlo method with an adjustable parameter: the number of energy levels of a demon mediating the exchange of bond energy with the heat bath. The efficiency of the algorithm in the case of the three-dimensional Ising model is studied as a function of the number of such levels. The optimum is found in the limit of an infinite number of levels, where the method reproduces the Wolff or the Swendsen-Wang algorithm. In this limit the size distribution of flipped clusters approximates a power law more closely than that for a finite number of energy levels.
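In the infinite-demon-level limit the abstract identifies, the method reproduces the Wolff algorithm. A minimal single-cluster Wolff sketch, shown here for the 2D rather than the 3D Ising model, with illustrative parameters:

```python
import math
import random

def neighbors(i, j, L):
    """Four nearest neighbours on an L x L periodic lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def wolff_step(spins, L, beta, rng):
    """Grow and flip one Wolff cluster; returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    cluster, stack, s0 = {seed}, [seed], spins[seed]
    while stack:
        i, j = stack.pop()
        for nb in neighbors(i, j, L):
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:                     # flip the whole cluster at once
        spins[site] = -spins[site]
    return len(cluster)

rng = random.Random(1)
L = 16
spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
sizes = [wolff_step(spins, L, beta=0.44, rng=rng) for _ in range(200)]
print(min(sizes) >= 1, set(spins.values()) <= {-1, 1})  # True True
```

A finite-level demon would throttle p_add through an explicit bond-energy exchange; the paper's observation is that efficiency peaks when that throttling disappears.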
Energy Technology Data Exchange (ETDEWEB)
None, None
2011-06-30
The Miami Science Museum energy model has been used during DD to test the building's potential for energy savings as measured by ASHRAE 90.1-2007 Appendix G. This standard compares the designed building's yearly energy cost with that of a code-compliant building. The building is currently on track to show a 20% or better improvement over the ASHRAE 90.1-2007 Appendix G baseline; this performance would ensure minimum compliance with both LEED 2.2 and the current Florida Energy Code, which both reference a less strict version of ASHRAE 90.1. In addition to being an exercise in energy code compliance, the energy model has been used as a design tool to show the relative performance benefit of individual energy conservation measures (ECMs). These ECMs are areas where the design team has improved upon code-minimum design paths to improve the energy performance of the building. By adding ECMs one at a time to a code-compliant baseline building, the current analysis identifies which ECMs are most effective in helping the building meet its energy performance goals.
Finite element model updating of existing steel bridge based on structural health monitoring
Institute of Scientific and Technical Information of China (English)
HE Xu-hui; YU Zhi-wu; CHEN Zheng-qing
2008-01-01
Based on the physical meaning of sensitivity, a new finite element (FE) model updating method is proposed. In this method, a three-dimensional FE model of the Nanjing Yangtze River Bridge (NYRB) was established with the ANSYS program and updated by modifying some design parameters. To further validate the updated FE model, the analytical stress time-history responses of main members induced by a moving train were compared with the measured ones. The results show that the relative error of the maximum stress is 2.49% and the minimum relative coefficient of the analytical stress time-history responses is 0.793. The updated model shows good agreement between the calculated and tested data, and provides a current baseline FE model for long-term health monitoring and condition assessment of the NYRB. At the same time, the stress time-history responses validate the model as feasible and practical for railway steel bridge model updating.
Finite element modelling and updating of a lively footbridge: The complete process
Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul
2007-03-01
The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The updating process identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined, and brief theory is presented as appropriate. The procedure is then implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all the parameter changes is discussed.
Assessment of stochastically updated finite element models using reliability indicator
Hua, X. G.; Wen, Q.; Ni, Y. Q.; Chen, Z. Q.
2017-01-01
Finite element (FE) model updating techniques have been a viable approach to correcting an initial mathematical model based on test data. Validation of the updated FE models is usually conducted by comparing model predictions with independent test data that have not been used for model updating. This approach to model validation cannot be readily applied in the case of a stochastically updated FE model. Recognizing that structural reliability is a major decision factor throughout the lifecycle of a structure, this study investigates the use of structural reliability as a measure for assessing the quality of stochastically updated FE models. A recently developed perturbation method for stochastic FE model updating is first applied to attain the stochastically updated models by using the measured modal parameters with uncertainty. The reliability index and failure probability for predefined limit states are computed for the initial and the stochastically updated models, respectively, and are compared with those obtained from the 'true' model to assess the quality of the two models. Numerical simulation of a truss bridge is provided as an example. The simulated modal parameters involving different uncertainty magnitudes are used to update an initial model of the bridge. It is shown that the reliability index obtained from the updated model is much closer to the true reliability index than that obtained from the initial model in the case of small uncertainty magnitude; in the case of large uncertainty magnitude, the reliability index computed from the initial model, rather than from the updated model, is closer to the true value. The present study confirms the usefulness of measurement-calibrated FE models and at the same time highlights the importance of uncertainty reduction in test data for reliable model updating and reliability evaluation.
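For the simplest limit state g = R − S with independent normal resistance R and load S, the reliability index and failure probability the abstract compares are closed-form. The statistics below are hypothetical stand-ins for an initial and an updated model, not the truss-bridge values:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """beta for limit state g = R - S with independent normal R, S."""
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    """P(g < 0) = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical resistance/load statistics for an initial vs an updated model.
beta_initial = reliability_index(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=8.0)
beta_updated = reliability_index(mu_r=92.0, sigma_r=7.0, mu_s=60.0, sigma_s=8.0)
print(round(beta_initial, 2), round(beta_updated, 2))  # 3.12 3.01
```

Comparing such indices against the 'true' model's index is the assessment step the study proposes; for general limit states the index comes from FORM or simulation rather than this closed form.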
Can model updating tell the truth?
Energy Technology Data Exchange (ETDEWEB)
Hemez, F.M.
1998-02-01
This paper discusses the extent to which updating methods may be able to correct finite element models so that the test structure is better simulated. After unifying some of the most popular modal residues used as the basis for optimization algorithms, the relationship between modal residues and model correlation is investigated. This theoretical approach leads to an error estimator that may be implemented to provide an a priori upper bound on a model's predictive quality relative to test data. These estimates, however, assume that a full measurement set is available. Finally, an application example illustrates the effectiveness of the proposed estimator when fewer measurement points than degrees of freedom are available.
Frequency response function-based model updating using Kriging model
Wang, J. T.; Wang, C. J.; Zhao, J. P.
2017-03-01
An acceleration frequency response function (FRF) based model updating method is presented in this paper, which introduces a Kriging model as a metamodel in the optimization process instead of iterating the finite element analysis directly. The Kriging model serves as a fast-running model that reduces solution time and facilitates the application of intelligent algorithms in model updating. The training samples for the Kriging model are generated by design of experiment (DOE), whose responses correspond to the differences between the experimental acceleration FRFs and their counterparts from the finite element model (FEM) at selected frequency points. The boundary condition is taken into account, and a two-step DOE method is proposed to reduce the number of training samples. The first step selects the design variables from the boundary condition, and the selected variables are passed to the second step for generating the training samples. The optimized values of the design variables are used to calibrate the FEM, after which the analytical FRFs tend to coincide with the experimental FRFs. The proposed method is applied successfully to a composite honeycomb sandwich beam; after model updating, the analytical acceleration FRFs match the experimental data significantly better, especially when the damping ratios are adjusted.
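As a sketch of the metamodel idea, the snippet below fits a simple Gaussian-kernel interpolator (a bare-bones stand-in for an ordinary Kriging model) to DOE samples of a toy one-variable discrepancy function, then searches the cheap surrogate instead of re-running a finite element solve; all names and numbers are illustrative.

```python
import numpy as np

def kriging_fit(X, y, theta=50.0):
    """Fit a Gaussian-kernel interpolator (simplified Kriging-style metamodel)."""
    d = X[:, None, :] - X[None, :, :]
    R = np.exp(-theta * np.sum(d**2, axis=-1))        # correlation matrix
    w = np.linalg.solve(R + 1e-10 * np.eye(len(X)), y)
    return lambda x: np.exp(-theta * np.sum((X - x)**2, axis=-1)) @ w

# DOE samples of one design variable vs. a toy FRF-discrepancy objective
X = np.linspace(0.0, 1.0, 11)[:, None]
y = (X[:, 0] - 0.3)**2                                # minimum at 0.3 by construction
surrogate = kriging_fit(X, y)

# Search the cheap surrogate on a fine grid instead of the expensive FE model
grid = np.linspace(0.0, 1.0, 201)
best = grid[np.argmin([surrogate(np.array([g])) for g in grid])]
```

A real Kriging model would also estimate the correlation length and provide a prediction variance; the point here is only that the surrogate, once trained, is cheap enough for intelligent search algorithms.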
Resource Tracking Model Updates and Trade Studies
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.
Model updating of rotor systems by using Nonlinear least square optimization
Jha, A. K.; Dewangan, P.; Sarangi, M.
2016-07-01
Mathematical models of structures or machinery always differ from the existing physical system, because numerical predictions of a physical system's behavior are limited by the assumptions used in developing the mathematical model. Model updating is therefore necessary so that the updated model replicates the physical system. This work focuses on model updating of rotor systems at various speeds as well as at different modes of vibration. Support bearing characteristics severely influence the dynamics of rotor systems such as turbines, compressors, pumps, electrical machines, and machine tool spindles. Therefore, the bearing parameters (stiffness and damping) are taken as the updating parameters. A finite element model of the rotor system is developed using Timoshenko beam elements. The unbalance response in the time domain and the frequency response function are calculated numerically and compared with experimental data to update the FE model of the rotor system. An algorithm based on the time-domain unbalance response is proposed for updating the rotor system at different running speeds. An Unbalance Response Assurance Criterion (URAC) is defined to check the degree of correlation between the updated FE model and the physical model.
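The abstract names an Unbalance Response Assurance Criterion (URAC) but does not give its formula; a plausible MAC-style form, a normalized inner product of measured and predicted unbalance time histories, is sketched below under that assumption.

```python
import numpy as np

def urac(x_exp, x_fem):
    """MAC-style correlation of two time histories (assumed form of a URAC)."""
    return (x_exp @ x_fem)**2 / ((x_exp @ x_exp) * (x_fem @ x_fem))

# Toy unbalance responses: same 5 Hz content, slightly different amplitude/phase
t = np.linspace(0.0, 1.0, 1000)
x_meas = np.sin(2*np.pi*5*t)
x_model = 0.9 * np.sin(2*np.pi*5*t + 0.05)
score = urac(x_meas, x_model)        # 1.0 would mean perfect correlation
```

By the Cauchy-Schwarz inequality the score lies in [0, 1]; amplitude scaling cancels, so the small phase error dominates the (small) deviation from 1.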
Updating river basin models with radar altimetry
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.
response of a catchment to meteorological forcing. While river discharge cannot be directly measured from space, radar altimetry (RA) can measure water level variations in rivers at the locations where the satellite ground track and river network intersect called virtual stations or VS. In this PhD study...... been between 10 and 35 days for altimetry missions until now. The location of the VS is also not necessarily the point at which measurements are needed. On the other hand, one of the main strengths of the dataset is its availability in near-real time. These characteristics make radar altimetry ideally...... suited for use in data assimilation frameworks which combine the information content from models and current observations to produce improved forecasts and reduce prediction uncertainty. The focus of the second and third papers of this thesis was therefore the use of radar altimetry as update data...
Operational modal analysis by updating autoregressive model
Vu, V. H.; Thomas, M.; Lakis, A. A.; Marcouiller, L.
2011-04-01
This paper presents improvements of a multivariable autoregressive (AR) model for applications in operational modal analysis, considering simultaneously the temporal response data of multi-channel measurements. The parameters are estimated by the least squares method via QR factorization. A new noise-rate-based factor, the Noise rate Order Factor (NOF), is introduced for effective selection of the model order and estimation of the noise rate. For the selection of structural modes, an orderwise criterion called the Order Modal Assurance Criterion (OMAC) is used, based on the correlation of mode shapes computed from two successive orders. Specifically, the algorithm is updated with respect to model order starting from a small value, yielding a cost-effective computation. Furthermore, the confidence intervals of each natural frequency, damping ratio and mode shape are computed and evaluated with respect to model order and noise rate. The method is thus very effective for identifying modal parameters from ambient vibration data in modern output-only modal analysis. Simulations and discussions on a steel plate structure are presented, and the experimental results show good agreement with the finite element analysis.
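A minimal single-channel version of the least-squares AR fit via QR factorization, with a natural frequency recovered from the AR poles, can be sketched as follows; the multivariable machinery, the NOF, and the OMAC of the paper are omitted, and the 3 Hz test signal is illustrative.

```python
import numpy as np

def fit_ar(y, p, dt):
    """Least-squares AR(p) fit via QR; natural frequencies from the AR poles."""
    A = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    Q, R = np.linalg.qr(A)
    a = np.linalg.solve(R, Q.T @ y[p:])               # AR coefficients a1..ap
    poles = np.roots(np.concatenate(([1.0], -a)))     # z^p - a1 z^(p-1) - ... = 0
    freqs = np.abs(np.angle(poles)) / (2*np.pi*dt)    # pole angle -> frequency (Hz)
    return np.unique(np.round(freqs, 2))

dt = 0.01
t = np.arange(0.0, 10.0, dt)
y = np.sin(2*np.pi*3.0*t)            # noise-free single 3 Hz "mode"
freqs = fit_ar(y, 2, dt)
```

For the noise-free sinusoid the AR(2) regression is exact, so the conjugate pole pair sits on the unit circle at the angle corresponding to 3 Hz.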
Declarative XML Update Language Based on a Higher Data Model
Institute of Scientific and Technical Information of China (English)
Guo-Ren Wang; Xiao-Lin Zhang
2005-01-01
With the extensive use of XML in applications over the Web, how to update XML data is becoming an important issue, because the role of XML has expanded beyond traditional applications in which XML is used for information exchange and data representation over the Web. So far, several languages have been proposed for updating XML data, but they are all based on lower, so-called graph-based or tree-based data models. Update requests are thus expressed in a nonintuitive and unnatural way, and update statements are too complicated to comprehend. This paper presents a novel declarative XML update language which is an extension of the XML-RL query language. Compared with other existing XML update languages, it has the following features. First, it is the only XML data manipulation language based on a higher data model. Second, the language can express complex update requests at multiple levels in a hierarchy in a simple and flat way. Third, the language directly supports updating complex objects, while the other update languages do not support such operations. Lastly, most existing languages use rename to modify attribute and element names, which differs from the way values are updated. The proposed language modifies tag names, values, and objects in a unified way through the introduction of three kinds of logical binding variables: object variables, value variables, and name variables.
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
Energy Technology Data Exchange (ETDEWEB)
Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
Dzifcakova, Elena
2013-01-01
New data for the calculation of ionization and recombination rates have been published in the past few years, most of which are included in the CHIANTI database. We used these data to calculate collisional ionization and recombination rates for the non-Maxwellian kappa-distributions, with an enhanced number of particles in the high-energy tail, which have been detected in the solar transition region and the solar wind. Ionization equilibria for the elements H to Zn are derived. The kappa-distributions significantly influence both the ionization and recombination rates and widen the ion abundance peaks. In comparison with the Maxwellian distribution, the ion abundance peaks can also be shifted to lower or higher temperatures. The updated ionization equilibrium calculations result in large changes for several ions, notably Fe VIII--XIV. The results are supplied in electronic form compatible with the CHIANTI database.
Finite element model updating using bayesian framework and modal properties
CSIR Research Space (South Africa)
Marwala, T
2005-01-01
Full Text Available Finite element (FE) models are widely used to predict the dynamic characteristics of aerospace structures. These models often give results that differ from measured results and therefore need to be updated to match measured results. Some...
Reservoir management under geological uncertainty using fast model update
Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.
2015-01-01
Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU del
Update Legal Documents Using Hierarchical Ranking Models and Word Clustering
Pham, Minh Quang Nhat; Nguyen, Minh Le; Shimazu, Akira
2010-01-01
Our research addresses the task of updating legal documents when new information emerges. In this paper, we employ a hierarchical ranking model for the task of updating legal documents. Word clustering features are incorporated into the ranking models to exploit semantic relations between words. Experimental results on legal data built from the United States Code show that the hierarchical ranking model with word clustering outperforms baseline methods using the Vector Space Model, and word cluster-based ...
Methods for the Update and Verification of Forest Surface Model
Rybansky, M.; Brenova, M.; Zerzan, P.; Simon, J.; Mikita, T.
2016-06-01
The digital terrain model (DTM) represents the bare-ground earth's surface without any objects such as vegetation and buildings. In contrast, the digital surface model (DSM) represents the earth's surface including all objects on it. The DTM mostly does not change as frequently as the DSM. The most important changes of the DSM occur in forest areas due to vegetation growth. Using LIDAR technology, the canopy height model (CHM) is obtained by subtracting the DTM from the corresponding DSM. The DSM is calculated from the first pulse echo and the DTM from the last pulse echo data. The main problem with using DSM and CHM data is how current the airborne laser scanning is. This paper describes a method of calculating changes in the CHM and DSM data using the relations between canopy height and tree age. To obtain an up-to-date reference model of the canopy height, photogrammetric and trigonometric measurements of single trees were used. By comparing the heights of corresponding trees on aerial photographs of various ages, statistical sets of tree growth rates were obtained. These statistical data and the LIDAR data were compared with the growth curve of a spruce forest corresponding to a similar natural environment (soil quality, climate characteristics, geographic location, etc.) to derive the updating characteristics.
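The CHM arithmetic described above, plus a growth-based aging of the canopy heights, can be sketched in a few lines; the elevations, the 2 m canopy threshold, and the spruce growth increment are hypothetical numbers.

```python
import numpy as np

dtm = np.array([[200.0, 201.0],      # bare-earth elevations (m), last-echo LIDAR
                [202.0, 203.0]])
dsm = np.array([[215.0, 221.0],      # surface elevations (m), first-echo LIDAR
                [202.0, 228.0]])
chm = dsm - dtm                      # canopy height model: CHM = DSM - DTM

# Age the CHM by an assumed spruce growth increment where canopy is present
growth_rate = 0.35                   # m per year (illustrative)
years_since_scan = 6
updated_chm = np.where(chm > 2.0, chm + growth_rate * years_since_scan, chm)
```

Cells without canopy (CHM below the threshold) are left unchanged, which mirrors the idea of updating only the vegetated part of the surface model between scanning campaigns.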
Structural Dynamics Model Updating with Positive Definiteness and No Spillover
Directory of Open Access Journals (Sweden)
Yongxin Yuan
2014-01-01
Full Text Available Model updating is a common method to improve the correlation between structural dynamics models and measured data. In conducting the updating, it is desirable to match only the measured spectral data without tampering with the other unmeasured and unknown eigeninformation in the original model (if so, the model is said to be updated with no spillover) and to maintain the positive definiteness of the coefficient matrices. In this paper, an efficient numerical method for updating the mass and stiffness matrices simultaneously is presented. The method first updates the modal frequencies. Then, a transformation matrix is constructed and used to correct the analytical eigenvectors so that the updated model is compatible with the measured eigenvectors. The method preserves both no spillover and the symmetric positive definiteness of the mass and stiffness matrices. It is computationally efficient, as neither iteration nor numerical optimization is required. A numerical example shows that the presented method is quite accurate and efficient.
Nonlinear structural finite element model updating and uncertainty quantification
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.
2015-04-01
This paper presents a framework for nonlinear finite element (FE) model updating, in which state-of-the-art nonlinear structural FE modeling and analysis techniques are combined with the maximum likelihood estimation method (MLE) to estimate time-invariant parameters governing the nonlinear hysteretic material constitutive models used in the FE model of the structure. The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem. A proof-of-concept example, consisting of a cantilever steel column representing a bridge pier, is provided to verify the proposed nonlinear FE model updating framework.
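The CRLB idea, that the inverse Fisher information bounds the variance of any unbiased estimator, can be illustrated for a hypothetical single-DOF case where a stiffness k is estimated from N noisy natural-frequency measurements; the model f = sqrt(k/m)/(2*pi) + e and all numbers below are illustrative, not the paper's column example.

```python
import numpy as np

# Hypothetical 1-DOF model: measured f = sqrt(k/m)/(2*pi) + e, e ~ N(0, sigma^2)
m, k, sigma, N = 2.0, 8.0e4, 0.05, 50

dfdk = 1.0 / (4.0*np.pi*np.sqrt(k*m))   # sensitivity df/dk of frequency to stiffness
fisher = N * dfdk**2 / sigma**2         # Fisher information about k in N samples
crlb = 1.0 / fisher                     # lower bound on Var(k_hat), unbiased k_hat
```

The bound shrinks as 1/N and as the squared sensitivity grows, which is why well-excited, sensitive parameters are estimated with far less uncertainty than poorly observable ones.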
Coupled vibro-acoustic model updating using frequency response functions
Nehete, D. V.; Modak, S. V.; Gupta, K.
2016-03-01
Interior noise in cavities of motorized vehicles is of increasing significance due to the lightweight design of these structures. Accurate coupled vibro-acoustic FE models of such cavities are required to allow reliable design and analysis. In practice, however, the vibro-acoustic predictions of these models often do not correlate acceptably well with experimental measurements and hence the models require updating. Both the structural and the acoustic parameters addressing the stiffness as well as the damping modeling inaccuracies need to be considered simultaneously in the model updating framework in order to obtain an accurate estimate of these parameters. It is also noted that acoustic absorption properties are generally frequency dependent, which makes the use of modal-data-based methods for updating vibro-acoustic FE models difficult. In view of this, the present paper proposes a method based on vibro-acoustic frequency response functions that allows updating of a coupled FE model by simultaneously considering the parameters associated with both the structural and the acoustic model of the cavity. The effectiveness of the proposed method is demonstrated through numerical studies on a 3D rectangular box cavity with a flexible plate. Updating parameters related to the material properties, the stiffness of the joints between the plate and the rectangular cavity, and the properties of the absorbing surfaces of the acoustic cavity are considered. The robustness of the method in the presence of noise is also studied.
Updated parameters for HL-LHC aperture calculations for proton beams
Bruce, Roderik; De Maria, Riccardo; Giovannozzi, Massimo; Redaelli, Stefano; Tomas Garcia, Rogelio; Velotti, Francesco Maria; Wenninger, Jorg
2017-01-01
During the accelerator design, it is important to have a reliable way of estimating the available aperture, in order to ensure a sufficient beam clearing and avoid potentially limiting beam losses on the aperture. Such calculations can be performed using MAD-X, taking as input the beam parameters and the tolerances of, e.g., orbit and optics. Realistic tolerances and an accurate criterion for qualifying the aperture are crucial during the design stage, in order to ensure that all machine apertures are sufficiently protected from harmful beam losses. Previous studies have provided such parameter sets for aperture calculations for both LHC and HL-LHC. This report provides an update to the previous HL-LHC parameters at top energy. First we study the protected aperture that can be tolerated in arbitrary locations without local protection. Furthermore, we investigate also how the protected aperture can be improved using a specially matched phase advance between dump kickers and the tertiary collimators, as was don...
Update to Core reporting practices in structural equation modeling.
Schreiber, James B
2016-07-21
This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling."(1) The content covered includes sample size; missing data; specification and identification of models; estimation method choices; fit and residual concerns; nested, alternative, and equivalent models; and unique issues within the SEM family of techniques.
Three Case Studies in Finite Element Model Updating
Directory of Open Access Journals (Sweden)
M. Imregun
1995-01-01
Full Text Available This article summarizes the basic formulation of two well-established finite element model (FEM updating techniques for improved dynamic analysis, namely the response function method (RFM and the inverse eigensensitivity method (IESM. Emphasis is placed on the similarities in their mathematical formulation, numerical treatment, and on the uniqueness of the resulting updated models. Three case studies that include welded L-plate specimens, a car exhaust system, and a highway bridge were examined in some detail and measured vibration data were used throughout the investigation. It was experimentally observed that significant dynamic behavior discrepancies existed between some of the nominally identical structures, a feature that makes the task of model updating even more difficult because no unequivocal reference data exist in this particular case. Although significant improvements were obtained in all cases where the updating of the FE model was possible, it was found that the success of the updated models depended very heavily on the parameters used, such as the selection and number of the frequency points for RFM, and the selection of modes and the balancing of the sensitivity matrix for IESM. Finally, the performance of the two methods was compared from general applicability, numerical stability, and computational effort standpoints.
Velthof, G.L.; Mosquera Losada, J.
2011-01-01
A study was conducted to update the nitrous oxide (N2O) emission factors for nitrogen (N) fertilizer and animal manures applied to soils, based on the results of Dutch experiments, and to derive a country-specific methodology to calculate nitrate leaching using a leaching fraction (FracLEACH). It is recommended to use
Model updating in flexible-link multibody systems
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, to describe the coupling between rigid- body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of the experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of the model of FLMS should be carefully tackled. On the one hand, nonlinear models of a FLMS should be linearized about static equilibrium configurations. On the other, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating has been finally cast as an optimization problem in the presence of bounds on the feasible values, by also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
Passive Remote Sensing of Oceanic Whitecaps: Updated Geophysical Model Function
Anguelova, M. D.; Bettenhausen, M. H.; Johnston, W.; Gaiser, P. W.
2016-12-01
Many air-sea interaction processes are quantified in terms of whitecap fraction W because oceanic whitecaps are the most visible and direct way of observing the breaking of wind waves in the open ocean. Enhanced by breaking waves, surface fluxes of momentum, heat, and mass are critical for ocean-atmosphere coupling and thus affect the accuracy of models used to forecast weather, predict storm intensification, and study climate change. Whitecap fraction has traditionally been measured from photographs or video images collected from towers, ships, and aircraft. Satellite-based passive remote sensing of whitecap fraction is a recent development that allows long-term, consistent observations of whitecapping on a global scale. The method relies on changes of ocean surface emissivity at microwave frequencies (e.g., 6 to 37 GHz) due to the presence of sea foam on a rough sea surface. These changes at the ocean surface are observed from the satellite as brightness temperature TB. A year-long W database built with this algorithm has proven useful in analyzing and quantifying the variability of W, as well as estimating fluxes of CO2 and sea spray production. The algorithm to obtain W from satellite observations of TB was developed at the Naval Research Laboratory within the framework of the WindSat mission. The W(TB) algorithm estimates W by minimizing the differences between measured and modeled TB data. A geophysical model function (GMF) calculates TB at the top of the atmosphere as contributions from the atmosphere and the ocean surface. The ocean surface emissivity combines the emissivity of the rough sea surface and the emissivity of areas covered with foam. Wind speed and direction, sea surface temperature, water vapor, and cloud liquid water are inputs to the atmospheric, roughness and foam models comprising the GMF. The W(TB) algorithm has recently been updated to use new sources and products for the input variables. We present a new version of the W(TB) algorithm that uses updated
Energy Technology Data Exchange (ETDEWEB)
Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J. (Enviros Spain S.L., Barcelona (ES))
2006-12-15
This report presents and documents the thermodynamic database used in the assessment of the radionuclide solubility limits within the SR-Can Exercise; it is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, the lanthanides and the actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications mainly reflect information from the various experimental programmes and the scientific literature available until the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the solubilities of the different radionuclides calculated using the reported thermodynamic database are tested against experimental data available in the open scientific literature. Where necessary, different procedures have been followed to estimate gaps in the database, especially to account for temperature corrections. All the methodologies followed are discussed in the main text
PACIAE 2.0: An Updated Parton and Hadron Cascade Model (Program) for Relativistic Nuclear Collisions
Institute of Scientific and Technical Information of China (English)
SA; Ben-hao; ZHOU; Dai-mei; YAN; Yu-liang; LI; Xiao-mei; FENG; Sheng-qing; DONG; Bao-guo; CAI; Xu
2012-01-01
We have updated the parton and hadron cascade model PACIAE for relativistic nuclear collisions, based on JETSET 6.4 and PYTHIA 5.7, and refer to the new version as PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering is discussed. The structure of the programs is briefly explained. In addition, some calculated examples are compared with experimental data, showing that the model (program) works well.
Model update mechanism for mean-shift tracking
Institute of Scientific and Technical Information of China (English)
Peng Ningsong; Yang Jie; Liu Erqi
2005-01-01
In order to solve the model update problem in mean-shift based trackers, a novel mechanism is proposed. A Kalman filter is employed to update the object model by filtering the object kernel histogram using the previous model and the current candidate. A self-tuning method is used to adaptively adjust all the parameters of the filters based on analysis of the filtering residuals. In addition, hypothesis testing serves as the criterion for deciding whether to accept the filtering result. The tracker therefore has the ability to handle occlusion and avoid over-update. The experimental results show that our method can not only keep up with object appearance and scale changes but is also robust to occlusion.
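A scalar-per-bin sketch of the Kalman-filtered histogram update is shown below; the self-tuning of the noise parameters and the hypothesis test from the abstract are omitted, and the variances Q and R are assumed values.

```python
import numpy as np

def update_histogram(model, candidate, P, Q=1e-4, R=1e-2):
    """One Kalman update per histogram bin; the model is assumed locally constant."""
    P = P + Q                        # predict step: inflate variance by process noise
    K = P / (P + R)                  # per-bin Kalman gain
    model = model + K * (candidate - model)
    P = (1.0 - K) * P                # posterior variance
    return model, P

model = np.array([0.50, 0.30, 0.20])     # current kernel histogram (3 bins)
P = np.full(3, 1e-2)                     # per-bin variance
cand = np.array([0.45, 0.35, 0.20])      # candidate histogram from current frame

for _ in range(5):                       # repeated frames with a stable candidate
    model, P = update_histogram(model, cand, P)
```

With a stable candidate the model drifts toward it while the gain shrinks, which is the behavior that lets a tracker follow gradual appearance changes without over-updating.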
Application of firefly algorithm to the dynamic model updating problem
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem; to the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aims at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm in finding the optimal solution in a multidimensional search space is analyzed. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
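A bare-bones firefly algorithm on a toy objective (standing in for the frequency and mode-shape discrepancy) looks roughly like this; the population size, attractiveness parameters, and decaying random step are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(f, lo, hi, n=20, iters=60, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly algorithm: fireflies move toward brighter (better) ones."""
    X = rng.uniform(lo, hi, (n, len(lo)))
    for it in range(iters):
        F = np.array([f(x) for x in X])
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                      # j is brighter than i
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    step = alpha * (rng.random(len(lo)) - 0.5) * 0.95**it
                    X[i] = X[i] + beta * (X[j] - X[i]) + step
        X = np.clip(X, lo, hi)
    F = np.array([f(x) for x in X])
    return X[np.argmin(F)]

# Toy 2-D objective with known minimum at (0.3, 0.3)
best = firefly_minimize(lambda x: np.sum((x - 0.3)**2),
                        np.zeros(2), np.ones(2))
```

The brightest firefly never moves, so the best solution found is retained while the rest of the swarm concentrates around it as the random step decays.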
A revised calculational model for fission
Energy Technology Data Exchange (ETDEWEB)
Atchison, F.
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)
Jang, Jinwoo; Smyth, Andrew W.
2017-01-01
The objective of structural model updating is to reduce inherent modeling errors in finite element (FE) models due to simplifications, idealized connections, and uncertainties of material properties. Updated FE models, which have fewer discrepancies with the real structures, give more precise predictions of dynamic behavior for future analyses. However, model updating becomes more difficult when applied to civil structures with a large number of structural components and complicated connections. In this paper, a full-scale FE model of a major long-span bridge is updated for improved consistency with real measured data. Two methods are applied to improve the model updating process. The first method focuses on improving the agreement of the updated mode shapes with the measured data. A nonlinear inequality constraint equation is added to the optimization procedure, providing the capability to keep the updated mode shapes in reasonable agreement with those observed; an interior point algorithm deals with nonlinearity in the objective function and constraints. The second method finds efficient updating parameters in a more systematic way. The selection of updating parameters in FE models is essential to a successful updating result because the parameters are directly related to the modal properties of dynamic systems. An in-depth sensitivity analysis is carried out in an effort to precisely understand the effects of physical parameters in the FE model on the natural frequencies. Based on the sensitivity analysis, cluster analysis is conducted to find an efficient set of updating parameters.
Directory of Open Access Journals (Sweden)
Hong-jun BAO
2011-03-01
Full Text Available A real-time channel flood forecast model was developed to simulate channel flow in plain rivers based on dynamic wave theory. Taking channel shape differences along the channel into consideration, a roughness updating technique was developed using the Kalman filter method to update Manning's roughness coefficient at each time step of the calculation process. Channel shapes were simplified as rectangles, triangles, and parabolas, and relationships between hydraulic radius and water depth were developed for plain rivers. Based on the relationship between the Froude number and the inertia terms of the momentum equation in the Saint-Venant equations, the relationship between Manning's roughness coefficient and water depth was obtained. Using the reach of the Huaihe River from the Wangjiaba to Lutaizi stations as a case study to test the performance and rationality of the flood routing model, the original hydraulic model was compared with the developed model. The results show that the stage hydrographs calculated by the developed flood routing model with the updated Manning's roughness coefficient agree well with the observed stage hydrographs, and that this model performs better than the original hydraulic model.
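The roughness-updating idea can be sketched for a rectangular channel: Manning's formula gives discharge from depth and roughness, and a scalar Kalman-style correction (gain built from a finite-difference sensitivity, with the covariance held fixed for simplicity) pulls Manning's n toward agreement with an observed discharge. The channel geometry, slope, noise values, and observed discharge are all hypothetical.

```python
import numpy as np

def manning_discharge(n, b, h, S):
    """Discharge (m^3/s) in a rectangular channel from Manning's formula (SI)."""
    A = b * h                        # flow area
    Rh = A / (b + 2.0*h)             # hydraulic radius of a rectangle
    return (1.0/n) * A * Rh**(2.0/3.0) * np.sqrt(S)

b, h, S = 50.0, 3.0, 2e-4            # width (m), depth (m), bed slope (illustrative)
n, P, R = 0.030, 1e-4, 25.0          # initial roughness, its variance, obs. noise
Q_obs = 180.0                        # "observed" discharge (m^3/s)

for _ in range(10):
    H = (manning_discharge(n + 1e-6, b, h, S)
         - manning_discharge(n, b, h, S)) / 1e-6     # sensitivity dQ/dn
    K = P*H / (H*H*P + R)                            # scalar Kalman-style gain
    n = n + K * (Q_obs - manning_discharge(n, b, h, S))
```

After a few corrections the simulated discharge matches the observation, i.e. n has been pulled to the value consistent with the measured flow at this depth.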
Numerical modelling of mine workings: annual update 1999/2000.
CSIR Research Space (South Africa)
Lightfoot, N
1999-09-01
Full Text Available The SIMRAC project GAP629 has two aspects. Firstly, the production of an updated edition of the guidebook Numerical Modelling of Mine Workings. The original document was launched to the South African mining industry in April 1999. Secondly...
Crushed-salt constitutive model update
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)
1998-01-01
Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.
Neurodevelopmental model of schizophrenia: update 2012
National Research Council Canada - National Science Library
Rapoport, J L; Giedd, J N; Gogtay, N
2012-01-01
The neurodevelopmental model of schizophrenia, which posits that the illness is the end state of abnormal neurodevelopmental processes that started years before the illness onset, is widely accepted...
Precipitates/Salts Model Sensitivity Calculation
Energy Technology Data Exchange (ETDEWEB)
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO{sub 2}) on the chemical evolution of water in the drift.
A last updating evolution model for online social networks
Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui
2013-05-01
As information technology has advanced, people turn to electronic media more frequently for communication, and social relationships are increasingly found through online channels. However, knowledge about the actual evolution of online social networks is very limited. In this paper, we propose and study a novel evolving network model built on the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve network resilience against intentional attacks. Moreover, we found that it exhibits the “small-world effect”, which is an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, which indicates that our model is valid.
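A toy growth sketch in the spirit of the model described above: new nodes attach preferentially by degree, discounted by the time since each node's last update. The exact weighting function of the paper is not reproduced here; the decay form and parameter `beta` are illustrative assumptions:

```python
import random

def grow_network(T, m=2, beta=0.5, seed=1):
    """Grow a network to T nodes; each new node attaches to m existing
    nodes with probability proportional to degree / (1 + beta * age),
    where age is the time since the node's last update."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    edges = [(0, 1)]
    for t in range(2, T):
        weights = {v: degree[v] / (1.0 + beta * (t - last_update[v]))
                   for v in degree}
        total = sum(weights.values())
        targets = set()
        while len(targets) < min(m, len(degree)):
            r = rng.random() * total
            acc = 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        degree[t] = 0
        last_update[t] = t
        for v in targets:
            edges.append((t, v))
            degree[t] += 1
            degree[v] += 1
            last_update[v] = t   # receiving a link counts as an update
    return edges, degree

edges, degree = grow_network(50)
```

Recently active nodes keep attracting links, which mimics the bursty activity patterns of real online social networks.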
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
An update of Leighton's solar dynamo model
Cameron, R H
2016-01-01
In 1969 Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock(1961). Here we present a modification and extension of Leighton's model. Using the axisymmetric component of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of turbulent diffusion at the surface and in the convection zone, poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux, latitudinal differential rotation and the near-surface layer of radial rotational shear, downward convective pumping of magnetic flux in the shear layer, and flux emergence in the form of tilted bipolar magnetic regions. While the parameters relevant for the transport of the surface field are taken from observations,...
The Lunar Mapping and Modeling Project Update
Noble, S.; French, R.; Nall, M.; Muery, K.
2010-01-01
The Lunar Mapping and Modeling Project (LMMP) is managing the development of a suite of lunar mapping and modeling tools and data products that support lunar exploration activities, including the planning, design, development, test, and operations associated with crewed and/or robotic operations on the lunar surface. In addition, LMMP should prove to be a convenient and useful tool for scientific analysis and for education and public outreach (E/PO) activities. LMMP will utilize data predominately from the Lunar Reconnaissance Orbiter, but also historical and international lunar mission data (e.g. Lunar Prospector, Clementine, Apollo, Lunar Orbiter, Kaguya, and Chandrayaan-1) as available and appropriate. LMMP will provide such products as image mosaics, DEMs, hazard assessment maps, temperature maps, lighting maps and models, gravity models, and resource maps. We are working closely with the LRO team to prevent duplication of efforts and ensure the highest quality data products. A beta version of the LMMP software was released for limited distribution in December 2009, with the public release of version 1 expected in the Fall of 2010.
The International Reference Ionosphere: Model Update 2016
Bilitza, Dieter; Altadill, David; Reinisch, Bodo; Galkin, Ivan; Shubin, Valentin; Truhlik, Vladimir
2016-04-01
The International Reference Ionosphere (IRI) is recognized as the official standard for the ionosphere (COSPAR, URSI, ISO) and is widely used for a multitude of different applications as evidenced by the many papers in science and engineering journals that acknowledge the use of IRI (e.g., about 11% of all Radio Science papers each year). One of the shortcomings of the model has been the dependence of the F2 peak height modeling on the propagation factor M(3000)F2. With the 2016 version of IRI, two new models will be introduced for hmF2 that were developed directly based on hmF2 measurements by ionosondes [Altadill et al., 2013] and by COSMIC radio occultation [Shubin, 2015], respectively. In addition IRI-2016 will include an improved representation of the ionosphere during the very low solar activities that were reached during the last solar minimum in 2008/2009. This presentation will review these and other improvements that are being implemented with the 2016 version of the IRI model. We will also discuss recent IRI workshops and their findings and results. One of the most exciting new projects is the development of the Real-Time IRI [Galkin et al., 2012]. We will discuss the current status and plans for the future. Altadill, D., S. Magdaleno, J.M. Torta, E. Blanch (2013), Global empirical models of the density peak height and of the equivalent scale height for quiet conditions, Advances in Space Research 52, 1756-1769, doi:10.1016/j.asr.2012.11.018. Galkin, I.A., B.W. Reinisch, X. Huang, and D. Bilitza (2012), Assimilation of GIRO Data into a Real-Time IRI, Radio Science, 47, RS0L07, doi:10.1029/2011RS004952. Shubin V.N. (2015), Global median model of the F2-layer peak height based on ionospheric radio-occultation and ground-based Digisonde observations, Advances in Space Research 56, 916-928, doi:10.1016/j.asr.2015.05.029.
An update of Leighton's solar dynamo model
Cameron, R. H.; Schüssler, M.
2017-02-01
In 1969, Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock published in 1961. Here we present a modification and extension of Leighton's model. Using the axisymmetric component (longitudinal average) of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of (i) turbulent diffusion at the surface and in the convection zone; (ii) poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux; (iii) latitudinal differential rotation and the near-surface layer of radial rotational shear; (iv) downward convective pumping of magnetic flux in the shear layer; and (v) flux emergence in the form of tilted bipolar magnetic regions treated as a source term for the radial surface field. While the parameters relevant for the transport of the surface field are taken from observations, the model condenses the unknown properties of magnetic field and flow in the convection zone into a few free parameters (turbulent diffusivity, effective return flow, amplitude of the source term, and a parameter describing the effective radial shear). Comparison with the results of 2D flux transport dynamo codes shows that the model captures the essential features of these simulations. We make use of the computational efficiency of the model to carry out an extended parameter study. We cover an extended domain of the 4D parameter space and identify the parameter ranges that provide solar-like solutions. Dipole parity is always preferred and solutions with periods around 22 yr and a correct phase difference between flux emergence in low latitudes and the strength of the polar fields are found for a return flow speed around 2 m s-1, turbulent
Substructure System Identification for Finite Element Model Updating
Craig, Roy R., Jr.; Blades, Eric L.
1997-01-01
This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.
Update on Advection-Diffusion Purge Flow Model
Brieda, Lubos
2015-01-01
Gaseous purge is commonly used in sensitive spacecraft optical or electronic instruments to prevent infiltration of contaminants and/or water vapor. Typically, purge is sized using simplistic zero-dimensional models that do not take into account instrument geometry, surface effects, and the dependence of diffusive flux on the concentration gradient. For this reason, an axisymmetric computational fluid dynamics (CFD) simulation was recently developed to model contaminant infiltration and removal by purge. The solver uses a combined Navier-Stokes and Advection-Diffusion approach. In this talk, we report on updates in the model, namely inclusion of a particulate transport model.
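A one-dimensional explicit sketch of the combined advection-diffusion transport mentioned above (first-order upwind advection plus central-difference diffusion); this illustrates the governing equation only, not the axisymmetric Navier-Stokes/advection-diffusion CFD solver itself:

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One explicit step for a 1D concentration profile c: upwind
    advection (valid for u >= 0) plus central diffusion. Stable for
    D * dt / dx**2 <= 0.5 with the diffusion-dominated settings below."""
    cn = c.copy()
    adv = -u * (c[1:-1] - c[:-2]) / dx
    dif = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    cn[1:-1] = c[1:-1] + dt * (adv + dif)
    return cn

c = np.zeros(50)
c[25] = 1.0              # initial contaminant slug
for _ in range(10):      # D*dt/dx**2 = 0.2, within the stability limit
    c = advect_diffuse_step(c, u=0.0, D=1.0, dx=1.0, dt=0.2)
```

With a nonzero purge velocity `u`, the same step transports the contaminant toward the outlet while diffusion acts against the concentration gradient, which is the balance the purge sizing has to get right.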
Maximum likelihood reconstruction for Ising models with asynchronous updates
Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser
2012-01-01
We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...
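For the first case described above (known update times), the exact learning rule follows from the Glauber log-likelihood, whose gradient per recorded attempt is (s_i' - tanh h_i) s_j. A small self-contained sketch on synthetic data (system size, coupling scale, and learning-rate schedule are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
J_true = rng.normal(0.0, 0.3, (n, n))   # asymmetric couplings
np.fill_diagonal(J_true, 0.0)

# Generate an asynchronous (Glauber) spin history, recording for each
# update attempt the previous state, the updated spin, and its new value.
s = rng.choice([-1, 1], n).astype(float)
flips = []
for _ in range(4000):
    i = rng.integers(n)
    h = J_true[i] @ s
    s_new = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1.0
    flips.append((s.copy(), i, s_new))
    s[i] = s_new

def infer_couplings(flips, n, lr=0.2, epochs=300):
    """Gradient ascent on the exact log-likelihood for known update times:
    dlogL/dJ_ij = < (s_i' - tanh(h_i)) * s_j > over recorded attempts."""
    J = np.zeros((n, n))
    data = {}
    for s_prev, i, s_new in flips:
        data.setdefault(i, ([], []))
        data[i][0].append(s_prev)
        data[i][1].append(s_new)
    data = {i: (np.array(X), np.array(y)) for i, (X, y) in data.items()}
    for _ in range(epochs):
        for i, (X, y) in data.items():
            J[i] += lr * ((y - np.tanh(X @ J[i])) @ X) / len(y)
    return J

J_est = infer_couplings(flips, n)
```

Because the likelihood factorizes over spins, each row of J can be fitted independently, which is what the per-spin loop exploits.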
An updated geospatial liquefaction model for global application
Zhu, Jing; Baise, Laurie G.; Thompson, Eric
2017-01-01
We present an updated geospatial approach to estimation of earthquake-induced liquefaction from globally available geospatial proxies. Our previous iteration of the geospatial liquefaction model was based on mapped liquefaction surface effects from four earthquakes in Christchurch, New Zealand, and Kobe, Japan, paired with geospatial explanatory variables including slope-derived VS30, compound topographic index, and magnitude-adjusted peak ground acceleration from ShakeMap. The updated geospatial liquefaction model presented herein improves the performance and the generality of the model. The updates include (1) expanding the liquefaction database to 27 earthquake events across 6 countries, (2) addressing the sampling of nonliquefaction for incomplete liquefaction inventories, (3) testing interaction effects between explanatory variables, and (4) overall improving model performance. While we test 14 geospatial proxies for soil density and soil saturation, the most promising geospatial parameters are slope-derived VS30, modeled water table depth, distance to coast, distance to river, distance to closest water body, and precipitation. We found that peak ground velocity (PGV) performs better than peak ground acceleration (PGA) as the shaking intensity parameter. We present two models which offer improved performance over prior models. We evaluate model performance using the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Brier score. The best-performing model in a coastal setting uses distance to coast but is problematic for regions away from the coast. The second best model, using PGV, VS30, water table depth, distance to closest water body, and precipitation, performs better in noncoastal regions and thus is the model we recommend for global implementation.
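Models of this type are logistic regressions on the geospatial proxies; a sketch of the recommended model's functional form follows, with round placeholder coefficients (the published fitted values differ, so treat `COEF` as an assumption):

```python
import math

# Hypothetical coefficients for illustration only; the paper's fitted
# values and variable transformations are not reproduced here.
COEF = {"intercept": 8.8, "ln_pgv": 0.33, "ln_vs30": -1.9,
        "precip": 0.0005, "dist_water": -0.02, "wtd": -0.16}

def liquefaction_probability(pgv, vs30, precip, dist_water, wtd):
    """Logistic model P = 1 / (1 + exp(-x)) on geospatial proxies:
    PGV (cm/s), VS30 (m/s), annual precipitation (mm), distance to the
    closest water body (km), and water table depth (m)."""
    x = (COEF["intercept"]
         + COEF["ln_pgv"] * math.log(pgv)
         + COEF["ln_vs30"] * math.log(vs30)
         + COEF["precip"] * precip
         + COEF["dist_water"] * dist_water
         + COEF["wtd"] * wtd)
    return 1.0 / (1.0 + math.exp(-x))
```

The signs encode the physics: stronger shaking (PGV) raises the probability, while stiffer soil (higher VS30), a deeper water table, and greater distance to water lower it.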
Updated scalar sector constraints in Higgs triplet model
Das, Dipankar
2016-01-01
We show that in the Higgs triplet model, after the Higgs discovery, the mixing angle in the CP-even sector can be strongly constrained from unitarity. We also discuss how large quantum effects in $h\to\gamma\gamma$ may arise in an SM-like scenario, and how a certain part of the parameter space can be ruled out by the diphoton signal strength. Using the $T$-parameter and diphoton signal strength measurements, we update the bounds on the nonstandard scalar masses.
Configuration mixing calculations in soluble models
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
An improved optimal elemental method for updating finite element models
Institute of Scientific and Technical Information of China (English)
Duan Zhongdong(段忠东); Spencer B.F.; Yan Guirong(闫桂荣); Ou Jinping(欧进萍)
2004-01-01
The optimal matrix method and the optimal elemental method used to update finite element models may not provide accurate results. This situation occurs when the test modal model is incomplete, as is often the case in practice. An improved optimal elemental method is presented that defines a new objective function and, as a byproduct, circumvents the need for mass-normalized mode shapes, which are also not readily available in practice. To solve the group of nonlinear equations created by the improved optimal method, the Lagrange multiplier method and the Matlab function fmincon are employed. To deal with actual complex structures, the float-encoding genetic algorithm (FGA) is introduced to enhance the capability of the improved method. Two examples, a 7-degree-of-freedom (DOF) mass-spring system and a 53-DOF planar frame, are updated using the improved method. The example results demonstrate the advantages of the improved method over existing optimal methods, and show that the genetic algorithm is an effective way to update the models used for actual complex structures.
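A generic float-encoded GA sketch on a toy 2-DOF mass-spring system, in the spirit of the FGA mentioned above (population size, mutation scale, and the frequency-residual objective are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def freqs(k1, k2, m=1.0):
    """Natural frequencies (rad/s) of a 2-DOF chain with stiffnesses k1, k2."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sqrt(np.linalg.eigvalsh(K) / m)

target = freqs(1000.0, 800.0)   # synthetic "measured" frequencies

def fitness(p):
    return -np.sum((freqs(*p) - target) ** 2)

def float_ga(bounds, pop=40, gens=80, seed=0):
    """Real-coded GA: elitist selection, arithmetic crossover,
    Gaussian mutation, with offspring clipped to the search bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        elite = P[np.argsort(f)[::-1][: pop // 2]]
        a = elite[rng.integers(len(elite), size=pop - len(elite))]
        b = elite[rng.integers(len(elite), size=pop - len(elite))]
        w = rng.random((pop - len(elite), 1))
        kids = w * a + (1.0 - w) * b + rng.normal(0.0, 5.0, a.shape)
        P = np.vstack([elite, np.clip(kids, lo, hi)])
    f = np.array([fitness(p) for p in P])
    return P[np.argmax(f)]

best = float_ga([(500.0, 1500.0), (400.0, 1200.0)])
```

In the paper's setting the fitness would instead be the improved objective function, with the Lagrange multiplier method handling the constraint equations.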
Finite element model updating for large span spatial steel structure considering uncertainties
Institute of Scientific and Technical Information of China (English)
TENG Jun; ZHU Yan-huang; ZHOU Feng; LI Hui; OU Jin-ping
2010-01-01
In order to establish the baseline finite element model for structural health monitoring, a new method of model updating was proposed after analyzing the uncertainties of the measured data and the errors of the finite element model. In the new method, the finite element model is replaced by a multi-output support vector regression machine (MSVR). The interval variables of the measured frequencies are sampled by the Latin hypercube sampling method. The frequency samples are regarded as the inputs of the trained MSVR, whose outputs are the target values of the design parameters. The steel structure of the National Aquatic Center for the Beijing Olympic Games is presented as a case study of finite element model updating. The results show that the proposed method avoids complicated computation, and that both the estimated values and the associated uncertainties of the structural parameters can be obtained. The static and dynamic characteristics of the updated finite element model are in good agreement with the measured data.
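The Latin hypercube sampling step mentioned above can be sketched in a few lines: each dimension is split into equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per dimension (bounds below are placeholders for the measured frequency intervals):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Stratified LHS: exactly one sample per equal-width stratum in
    every dimension, with strata shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # Row i starts in stratum i for every column...
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    # ...then each column's strata are permuted independently.
    for j in range(d):
        rng.shuffle(u[:, j])
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

X = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```

Each row of `X` would then be fed to the trained MSVR, whose outputs give one sample of the design parameters, so the parameter uncertainty intervals come directly from the sample spread.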
An updated subgrid orographic parameterization for global atmospheric forecast models
Choi, Hyun-Joo; Hong, Song-You
2015-12-01
A subgrid orographic parameterization (SOP) is updated by including the effects of orographic anisotropy and flow-blocking drag (FBD). The impact of the updated SOP on short-range forecasts is investigated using a global atmospheric forecast model applied to a heavy snowfall event over Korea on 4 January 2010. When the SOP is updated, the orographic drag in the lower troposphere noticeably increases owing to the additional FBD over mountainous regions. The enhanced drag directly weakens the excessive wind speed in the low troposphere and indirectly improves the temperature and mass fields over East Asia. In addition, the snowfall overestimation over Korea is improved by the reduced heat fluxes from the surface. The forecast improvements are robust regardless of the horizontal resolution of the model between T126 and T510. The parameterization is statistically evaluated based on the skill of the medium-range forecasts for February 2014. For the medium-range forecasts, the skill improvements of the wind speed and temperature in the low troposphere are observed globally and for East Asia while both positive and negative effects appear indirectly in the middle-upper troposphere. The statistical skill for the precipitation is mostly improved due to the improvements in the synoptic fields. The improvements are also found for seasonal simulation throughout the troposphere and stratosphere during boreal winter.
Two dimensional cellular automaton for evacuation modeling: hybrid shuffle update
Arita, Chikashi; Appert-Rolland, Cécile
2015-01-01
We consider a cellular automaton model with a static floor field for pedestrians evacuating a room. After identifying some properties of real pedestrian flows, we discuss various update schemes and introduce a new one, the hybrid shuffle update. The properties specific to pedestrians are incorporated in variables associated with the particles, called phases, that represent their step cycles. The dynamics of the phases naturally gives rise to some friction and allows several features observed in experiments to be reproduced. We study in particular the crossover between a low- and a high-density regime that occurs when the density of pedestrians increases, the dependence of the outflow on the strength of the floor field, and the shape of the queue in front of the exit.
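A minimal sketch of a floor-field automaton with a plain shuffle update (a deterministic greedy move rule and no phase variables, so this illustrates the setting rather than the paper's hybrid shuffle scheme):

```python
import random

def static_floor_field(w, h, exit_pos):
    """Manhattan distance to the exit; lower values attract pedestrians."""
    ex, ey = exit_pos
    return {(x, y): abs(x - ex) + abs(y - ey)
            for x in range(w) for y in range(h)}

def shuffle_update(peds, field, w, h, rng):
    """One sweep in random order: each pedestrian moves to the free
    neighbouring cell with the lowest floor-field value, if it improves."""
    occupied = set(peds)
    order = list(range(len(peds)))
    rng.shuffle(order)
    for i in order:
        x, y = peds[i]
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < w and 0 <= y + dy < h
                 and (x + dx, y + dy) not in occupied]
        if moves:
            best = min(moves, key=lambda cell: field[cell])
            if field[best] < field[(x, y)]:
                occupied.remove((x, y))
                occupied.add(best)
                peds[i] = best
    return peds

field = static_floor_field(10, 10, (0, 0))
peds = [(5, 5), (7, 3), (2, 8)]
before = sum(field[p] for p in peds)
shuffle_update(peds, field, 10, 10, random.Random(0))
```

In the hybrid shuffle update, each particle additionally carries a phase variable tracking its step cycle, and conflicts between pedestrians give rise to the friction mentioned in the abstract.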
An updating method for structural dynamics models with unknown excitations
Energy Technology Data Exchange (ETDEWEB)
Louf, F; Charbonnel, P E; Ladeveze, P [LMT-Cachan (ENS Cachan/CNRS/Paris 6 University) 61, avenue du Président Wilson, F-94235 Cachan Cedex (France); Gratien, C [Astrium (EADS space transportation) - Service TE 343 66, Route de Verneuil, 78133 Les Mureaux Cedex (France)], E-mail: charbonnel@lmt.ens-cachan.fr, E-mail: ladeveze@lmt.ens-cachan.fr, E-mail: louf@lmt.ens-cachan.fr, E-mail: christian.gratien@astrium.eads.net
2008-11-01
This paper presents an extension of the Constitutive Relation Error (CRE) updating method to complex industrial structures, such as space launchers, for which tests carried out in the functional context can provide significant amounts of information. Indeed, since several sources of excitation are involved simultaneously, a flight test can be viewed as a multiple test. However, there is a serious difficulty in that these sources of excitation are partially unknown. The CRE updating method enables one to obtain an estimate of these excitations. We present a first application of the method using a very simple finite element model of the Ariane V launcher along with measurements performed at the end of an atmospheric flight.
Matrix model calculations beyond the spherical limit
Energy Technology Data Exchange (ETDEWEB)
Ambjoern, J. (Niels Bohr Institute, Copenhagen (Denmark)); Chekhov, L. (L.P.T.H.E., Universite Pierre et Marie Curie, 75 - Paris (France)); Kristjansen, C.F. (Niels Bohr Institute, Copenhagen (Denmark)); Makeenko, Yu. (Institute of Theoretical and Experimental Physics, Moscow (Russian Federation))
1993-08-30
We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
Cost Calculation Model for Logistics Service Providers
Directory of Open Access Journals (Sweden)
Zoltán Bokor
2012-11-01
Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experience of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
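A single-level sketch of driver-based full cost allocation as described above (center names, costs, and driver quantities are invented for illustration; the paper's multi-level scheme additionally cascades costs between centers before this step):

```python
def allocate_costs(cost_centers, services):
    """Distribute each cost center's total cost to services in proportion
    to the performance (driver) quantities the services consume."""
    service_cost = {s: 0.0 for s in services}
    for center, (total_cost, drivers) in cost_centers.items():
        total_driver = sum(drivers.values())
        for s, quantity in drivers.items():
            service_cost[s] += total_cost * quantity / total_driver
    return service_cost

# Hypothetical example: two cost centers, two logistics services.
centers = {"warehousing": (1000.0, {"A": 60.0, "B": 40.0}),   # pallet-days
           "transport":   (500.0,  {"A": 25.0, "B": 75.0})}   # ton-km
costs = allocate_costs(centers, ["A", "B"])
```

Because every unit of cost is tied to a measured driver quantity, the cost-performance relations the abstract mentions become directly visible in the output.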
Institute of Scientific and Technical Information of China (English)
Yang Liu; DeJun Wang; Jun Ma; Yang Li
2014-01-01
To investigate the application of meta-models to finite element (FE) model updating of structures, the performance of two popular meta-models, i.e., the Kriging model and the response surface model (RSM), was compared in detail. Firstly, the two kinds of meta-model were introduced briefly. Secondly, some key issues in the application of meta-models to FE model updating of structures were proposed and discussed, and advice was presented for selecting a reasonable meta-model for the purpose of updating the FE model of a structure. Finally, the procedure of FE model updating based on a meta-model was implemented by updating the FE model of a truss bridge with the measured modal parameters. The results showed that the Kriging model is better suited for FE model updating of complex structures.
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To address the high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
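A simplified sketch of the time-difference formulation with conditional updating; recursive least squares stands in for the recursive PLS of the paper, and the noiseless linear process, threshold, and forgetting factor are illustrative assumptions:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least-squares step (a simple stand-in for the
    moving-window recursive PLS used in the paper)."""
    Px = P @ x
    k = Px / (lam + x @ Px)
    theta = theta + k * (y - theta @ x)
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Time-difference soft sensor on synthetic data: the model maps input
# differences dx to output differences dy, and is updated only when the
# prediction error exceeds a confidence threshold.
rng = np.random.default_rng(0)
w_true = np.array([0.8, -0.5])
theta, P = np.zeros(2), np.eye(2) * 100.0
threshold, n_updates = 0.05, 0
x_prev = rng.normal(size=2)
y_prev = w_true @ x_prev
for _ in range(200):
    x = rng.normal(size=2)
    y = w_true @ x
    dx, dy = x - x_prev, y - y_prev
    if abs(dy - theta @ dx) > threshold:       # confidence check
        theta, P = rls_update(theta, P, dx, dy)
        n_updates += 1
    x_prev, y_prev = x, y
```

Differencing removes slowly drifting offsets from the data, and the threshold keeps the model from being refitted on every sample, which is the computational saving the abstract reports.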
An Updated Analytical Structural Pounding Force Model Based on Viscoelasticity of Materials
Directory of Open Access Journals (Sweden)
Qichao Xue
2016-01-01
Full Text Available Based on a summary of existing analytical pounding force models, an updated pounding force analysis method is proposed by introducing a viscoelastic constitutive model and a contact mechanics method. The traditional Kelvin viscoelastic pounding force model can be expanded to a 3-parameter linear viscoelastic model by separating the classic pounding model parameters into geometry parameters and viscoelastic material parameters. Two existing pounding examples, steel-to-steel and concrete-to-concrete, are recalculated using the proposed method, and the results are compared with other pounding force models. The results show reasonable accuracy for the proposed model: the relative normalized errors of the steel-to-steel and concrete-to-concrete experiments are 19.8% and 12.5%, respectively. Furthermore, a steel-to-polymer pounding example is calculated, and the application of the proposed method in vibration control analysis for a pounding tuned mass damper (TMD) is simulated. However, due to insufficient experimental details, the proposed model can only give a rough trend for both the single pounding process and the vibration control process. Despite the promising prospects, the study in this paper is only a first step in pounding force calculation; a more careful assessment of the model performance is still needed, especially in the presence of inelastic response.
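The classic Kelvin model that the paper extends can be sketched in a few lines; the stiffness and damping values below are illustrative placeholders, not the fitted viscoelastic parameters of the paper's 3-parameter model:

```python
def kelvin_pounding_force(delta, delta_dot, k=2.75e9, c=1.0e6):
    """Kelvin (parallel spring-dashpot) pounding force, active only while
    the gap is closed (penetration delta > 0). delta in m, delta_dot in
    m/s; k (N/m) and c (N*s/m) are illustrative contact parameters.
    The force is clamped at zero so the dashpot cannot produce a tensile
    force during the restitution phase."""
    if delta <= 0.0:
        return 0.0
    return max(k * delta + c * delta_dot, 0.0)
```

The paper's update replaces this single spring-dashpot pair with a 3-parameter linear viscoelastic element, so the material response and the contact geometry enter through separate parameters.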
An Updated Gas/grain Sulfur Network for Astrochemical Models
Laas, Jacob; Caselli, Paola
2017-06-01
Sulfur is a chemical element that enjoys one of the highest cosmic abundances. However, it has traditionally played a relatively minor role in the field of astrochemistry, being drowned out by other chemistries after it depletes from the gas phase during the transition from a diffuse cloud to a dense one. A wealth of laboratory studies have provided clues to its rich chemistry in the condensed phase, and most recently, a report by a team behind the Rosetta spacecraft has significantly helped to unveil its rich cometary chemistry. We have set forth to use this information to greatly update/extend the sulfur reactions within the OSU gas/grain astrochemical network in a systematic way, to provide more realistic chemical models of sulfur for a variety of interstellar environments. We present here some results and implications of these models.
Time-reversed particle dynamics calculation with field line tracing at Titan - an update
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly; Juhasz, Antal; Lukacs, Katalin
2014-05-01
We use CAPS-IMS Singles data of Cassini measured between 2004 and 2010 to investigate the pickup process and dynamics of ions originating from Titan's atmosphere. A 4th order Runge-Kutta method was applied to calculate the test particle trajectories in a time reversed scenario, in the curved magnetic environment. We evaluated the minimum variance directions along the S/C trajectory for all Cassini flybys during which the CAPS instrument was in operation, and assumed that the field was homogeneous perpendicular to the minimum variance direction. We calculated the magnetic field lines with this method along the flyby orbits, and could thereby determine the observational intervals when Cassini and the upper atmosphere of Titan could be magnetically connected. We used three ion species (1, 2 and 16 amu ions) for time reversed tracking, and also considered the categorization of Rymer et al. (2009) and Nemeth et al. (2011) for further feature studies.
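The 4th-order Runge-Kutta tracing of test-particle trajectories can be sketched as follows. This is a generic illustration in arbitrary units (electric field neglected, field supplied as a callable), not the CAPS-IMS analysis code; a negative time step gives the time-reversed tracking described above:

```python
import numpy as np

def lorentz_accel(v, B, q_over_m):
    # a = (q/m) v x B  (electric field neglected in this sketch)
    return q_over_m * np.cross(v, B)

def rk4_step(x, v, dt, B_func, q_over_m):
    """One 4th-order Runge-Kutta step for a charged test particle in
    a magnetic field B_func(x); passing a negative dt traces the
    trajectory backwards in time."""
    def deriv(xx, vv):
        return vv, lorentz_accel(vv, B_func(xx), q_over_m)
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    return (x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))
```

In a uniform field this reproduces gyration with the particle speed conserved to RK4 accuracy, which is the basic sanity check for such tracers.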
Update on a short-distance $D^0$-meson mixing calculation with $N_f=2+1$ flavors
Energy Technology Data Exchange (ETDEWEB)
Chang, Chia Cheng [Fermilab; Bernard, Claude [Washington U., St. Louis; Bouchard, Chris [Ohio State U.; El-Khadra, A. X. [Fermilab; Freeland, Elizabeth [Art Inst. of Chicago; Gámiz, Elvira [Granada U., Theor. Phys. Astrophys.; Kronfeld, Andreas S. [TUM-IAS, Munich; Laiho, Jack [Syracuse U.; Van de Water, Ruth S. [Fermilab
2014-11-22
We present an update on our calculation of the short-distance $D^0$-meson mixing hadronic matrix elements. The analysis is performed on the MILC collaboration's $N_f=2+1$ asqtad configurations. We use asqtad light valence quarks and the Sheikholeslami-Wohlert action with the Fermilab interpretation for the valence charm quark. SU(3), partially quenched, rooted, staggered heavy-meson chiral perturbation theory is used to extrapolate to the chiral-continuum limit. Systematic errors arising from the chiral-continuum extrapolation, heavy-quark discretization, and quark-mass uncertainties are folded into the statistical errors from the chiral-continuum fits with methods of Bayesian inference. A preliminary error budget for all five operators is presented.
Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4, including these updates, in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and both are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
Updated Delft Mass Transport model DMT-2: computation and validation
Hashemi Farahani, Hassan; Ditmar, Pavel; Inacio, Pedro; Klees, Roland; Guo, Jing; Guo, Xiang; Liu, Xianglin; Zhao, Qile; Didova, Olga; Ran, Jiangjun; Sun, Yu; Tangdamrongsub, Natthachet; Gunter, Brian; Riva, Ricardo; Steele-Dunne, Susan
2014-05-01
A number of research centers compute models of mass transport in the Earth's system using primarily K-Band Ranging (KBR) data from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. These models typically consist of a time series of monthly solutions, each of which is defined in terms of a set of spherical harmonic coefficients up to degree 60-120. One such model, the Delft Mass Transport, release 2 (DMT-2), is computed at the Delft University of Technology (The Netherlands) in collaboration with Wuhan University. An updated variant of this model has been produced recently. A unique feature of the computational scheme designed to compute DMT-2 is the preparation of an accurate stochastic description of data noise in the frequency domain using an Auto-Regressive Moving-Average (ARMA) model, which is derived for each particular month. The benefits of such an approach are a proper frequency-dependent data weighting in the data inversion and an accurate variance-covariance matrix of noise in the estimated spherical harmonic coefficients. Furthermore, the data prior to the inversion are subject to an advanced high-pass filtering, which makes use of a spatially dependent weighting scheme, so that noise is primarily estimated on the basis of data collected over areas with minor mass transport signals (e.g., oceans). On the one hand, this procedure efficiently suppresses noise caused by inaccuracies in satellite orbits; on the other hand, it preserves mass transport signals in the data. Finally, the unconstrained monthly solutions are filtered using a Wiener filter, which is based on estimates of the signal and noise variance-covariance matrices. In combination with a proper data weighting, this noticeably improves the spatial resolution of the monthly gravity models and the associated mass transport models. For instance, the computed solutions allow long-term negative trends to be clearly seen in sufficiently small regions.
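The core idea of noise-model-based, frequency-dependent data weighting can be illustrated in miniature: fit an AR(1) process (the simplest special case of ARMA) to residuals and whiten them. This is a conceptual sketch only, not the DMT-2 processing chain:

```python
import numpy as np

def ar1_whiten(r):
    """Fit an AR(1) noise model r[t] = phi * r[t-1] + e[t] by least
    squares and return (phi, whitened residuals). Whitening is what
    implements a frequency-dependent weighting of the data: correlated
    (red) noise is down-weighted at the frequencies where it is strong."""
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    return phi, r[1:] - phi * r[:-1]
```

A full ARMA(p,q) model, estimated month by month as described above, generalizes this one-coefficient whitening filter.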
Update on microkinetic modeling of lean NOx trap chemistry.
Energy Technology Data Exchange (ETDEWEB)
Larson, Richard S.; Daw, C. Stuart (Oak Ridge National Laboratory, Oak Ridge, TN); Pihl, Josh A. (Oak Ridge National Laboratory, Oak Ridge, TN); Choi, Jae-Soon (Oak Ridge National Laboratory, Oak Ridge, TN); Chakravarthy, V. Kalyana (Oak Ridge National Laboratory, Oak Ridge, TN)
2010-04-01
Our previously developed microkinetic model for lean NOx trap (LNT) storage and regeneration has been updated to address some longstanding issues, in particular the formation of N2O during the regeneration phase at low temperatures. To this finalized mechanism has been added a relatively simple (12-step) scheme that accounts semi-quantitatively for the main features observed during sulfation and desulfation experiments, namely (a) the essentially complete trapping of SO2 at normal LNT operating temperatures, (b) the plug-like sulfation of both barium oxide (NOx storage) and cerium oxide (oxygen storage) sites, (c) the degradation of NOx storage behavior arising from sulfation, (d) the evolution of H2S and SO2 during high temperature desulfation (temperature programmed reduction) under H2, and (e) the complete restoration of NOx storage capacity achievable through the chosen desulfation procedure.
Simple brane-world inflationary models — An update
Okada, Nobuchika; Okada, Satomi
2016-05-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index (ns) and the tensor-to-scalar ratio (r) from those obtained in the standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in the standard cosmology, we find a lower bound on the five-dimensional Planck mass (M5). On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect with a suitable choice of M5.
Simple brane-world inflationary models: an update
Okada, Nobuchika
2015-01-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index ($n_s$) and the tensor-to-scalar ratio ($r$) from those obtained in the standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in the standard cosmology, we find a lower bound on the five-dimensional Planck mass. On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect.
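For orientation, the standard-cosmology slow-roll expressions behind the predictions quoted in these two records are the textbook ones (the brane-world correction enters by modifying the slow-roll parameters through the altered Friedmann equation; the formulas below are standard results, not specific to the paper):

```latex
\epsilon = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2, \qquad
\eta = M_P^2 \, \frac{V''}{V}, \qquad
n_s \simeq 1 - 6\epsilon + 2\eta, \qquad
r = 16\epsilon
```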
Contact-based model for strategy updating and evolution of cooperation
Zhang, Jianlei; Chen, Zengqiang
2016-06-01
Establishing a workable model for the strategy decision process of players is not easy, and the associated strategy updating rules have sparked heated debate. Models for evolutionary games have traditionally assumed that players imitate their successful partners by comparing respective payoffs, raising the question of what happens if the game information is not easily available. Focusing on this yet-unsolved case, the motivation behind the work presented here is to establish a novel model for the updating of states in a spatial population that bypasses the payoff information required by previous models and instead relies on players' contact patterns. Switching probabilities offer a simple and interpretable way to determine the microscopic dynamics of strategy evolution. Our results illuminate the conditions under which the steady coexistence of competing strategies is possible. These findings reveal that the evolutionary fate of the coexisting strategies can be calculated analytically, and provide novel hints for the resolution of cooperative dilemmas in a competitive context. We hope that our results disclose new explanations for the survival and coexistence of competing strategies in structured populations.
Online calculators for geomagnetic models at the National Geophysical Data Center
Ford, J. P.; Nair, M.; Maus, S.; McLean, S. J.
2009-12-01
NOAA’s National Geophysical Data Center at Boulder provides online calculators for geomagnetic field models. These models provide current and past values of the geomagnetic field on regional and global spatial scales. The calculators are popular among scientists, engineers and the general public across the world as a resource to compute geomagnetic field elements. We regularly update both the web interfaces and the underlying geomagnetic models. We have four different calculators to compute geomagnetic fields for different user applications. The declination calculators optionally use our World Magnetic Model (WMM) or the International Geomagnetic Reference Field (IGRF) to provide geomagnetic declination as well as its annual rate of change for the chosen location. All seven magnetic field components for a single day or for a range of years from 1900 to the present can be obtained using our Magnetic Field Calculator IGRFWMM. Users can also compute magnetic field values (current and past) over an area using the IGRFGrid calculator. The USHistoric calculator uses a US declination model to compute the declination for the conterminous US from 1750 to the present (data permitting). All calculators allow the user to enter the location either as a Zip Code or by specifying the geographic latitude and longitude.
Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.
Directory of Open Access Journals (Sweden)
Greg Jensen
Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.
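The three ingredients listed in this record can be sketched in a few lines. This is a loose illustration of the idea (beta-distributed positions, asymmetric feedback, every stimulus updated on every trial), not the authors' exact betasort algorithm; the relaxation rule and step sizes below are invented for the sketch:

```python
import random

class BetaSortSketch:
    """Loose sketch of the betasort idea: each stimulus position on
    the unit span is represented by a beta distribution Beta(u, l),
    feedback is asymmetric, and every stimulus is updated each trial."""

    def __init__(self, n_stimuli, relax=0.05):
        self.u = [1.0] * n_stimuli   # evidence for "high" position
        self.l = [1.0] * n_stimuli   # evidence for "low" position
        self.relax = relax

    def position(self, i):
        # expected position of stimulus i on the unit span
        return self.u[i] / (self.u[i] + self.l[i])

    def choose(self, i, j):
        # sample implied positions; pick whichever samples higher
        si = random.betavariate(self.u[i], self.l[i])
        sj = random.betavariate(self.u[j], self.l[j])
        return i if si > sj else j

    def update(self, hi, lo, correct):
        # relax every stimulus toward uncertainty, visible or not
        for k in range(len(self.u)):
            self.u[k] = 1.0 + (1.0 - self.relax) * (self.u[k] - 1.0)
            self.l[k] = 1.0 + (1.0 - self.relax) * (self.l[k] - 1.0)
        # asymmetric feedback: errors move the estimates harder
        step = 1.0 if correct else 2.0
        self.u[hi] += step
        self.l[lo] += step
```

Training such a sketch on adjacent pairs of an ordered list drives the end items toward the extremes of the unit span.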
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, observing that the mean time to reach the absorbing state might be not well defined. The IET distributions resulting from both update schemes show power-law tails.
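The exogenous and endogenous update rules described in this record can be sketched for a complete graph as follows. This is an illustrative reimplementation (with `b` an arbitrary activation constant and discrete sweeps standing in for the simulation scheme), not the authors' code:

```python
import random

def voter_sweep(states, last, t, rule="exogenous", b=1.0):
    """One sweep of the voter model on a complete graph with an
    activation probability b / (t - last[i]), i.e. inversely
    proportional to the time since node i's last action. Under the
    exogenous rule an update *attempt* resets the node's clock;
    under the endogenous rule only an actual change of state does."""
    n = len(states)
    for i in range(n):
        tau = t - last[i]
        if random.random() < min(1.0, b / tau):
            j = random.randrange(n - 1)   # copy a random other node
            if j >= i:
                j += 1
            changed = states[i] != states[j]
            states[i] = states[j]
            if rule == "exogenous" or changed:
                last[i] = t
    return states, last
```

Under the exogenous rule a finite system still reaches the absorbing consensus state in reasonable time; the slow ordering reported above concerns the endogenous rule in the thermodynamic limit.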
An Update to the NASA Reference Solar Sail Thrust Model
Heaton, Andrew F.; Artusio-Glimpse, Alexandra B.
2015-01-01
An optical model of solar sail material originally derived at JPL in 1978 has since served as the de facto standard for NASA and other solar sail researchers. The optical model includes terms for specular and diffuse reflection, thermal emission, and non-Lambertian diffuse reflection. The standard coefficients for these terms are based on tests of 2.5-micrometer Kapton sail material coated with 100 nm of aluminum on the front side and chromium on the back side. The original derivation of these coefficients was documented in an internal JPL technical memorandum that is no longer available. Additionally, more recent optical testing has taken place, and different materials have been used or are under consideration by various researchers for solar sails. Here, where possible, we re-derive the optical coefficients from the 1978 model and update them to accommodate newer test results and sail materials. The source of the commonly used value for the front side non-Lambertian coefficient is not clear, so we investigate that coefficient in detail. Although this research is primarily designed to support the upcoming NASA NEA Scout and Lunar Flashlight solar sail missions, the results are also of interest to the wider solar sail community.
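The flat-sail optical force model referred to here is commonly written with normal and tangential components. A sketch using McInnes' standard formulation and the coefficient values usually quoted in the solar-sail literature for the 1978 JPL model (treat both the formula choice and the numbers as illustrative, not as the paper's re-derived values):

```python
import math

# Coefficient values as commonly quoted for the JPL (1978) model:
RHO, S = 0.88, 0.94          # reflectivity, specular fraction
EPS_F, EPS_B = 0.05, 0.55    # front/back emissivities
B_F, B_B = 0.79, 0.55        # front/back non-Lambertian coefficients

def sail_force_components(P, A, alpha):
    """Normal and tangential force on a flat non-ideal sail;
    alpha is the sun incidence angle (rad), P the solar radiation
    pressure, A the sail area."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    fn = P * A * ((1 + RHO * S) * ca ** 2
                  + B_F * (1 - S) * RHO * ca
                  + (1 - RHO) * (EPS_F * B_F - EPS_B * B_B)
                    / (EPS_F + EPS_B) * ca)
    ft = P * A * (1 - RHO * S) * ca * sa
    return fn, ft
```

At normal incidence the tangential force vanishes and the normal force falls a little short of the ideal-mirror value 2PA, which is the practical effect of the non-ideal coefficients.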
Real time hybrid simulation with online model updating: An analysis of accuracy
Ou, Ge; Dyke, Shirley J.; Prakash, Arun
2017-02-01
In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical model of components similar to the physical specimen (herein named the target model) to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on the sensitivity of updating accuracy to the model updating parameters is also presented to provide guidance for potential users.
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
Updated Conceptual Model for the 300 Area Uranium Groundwater Plume
Energy Technology Data Exchange (ETDEWEB)
Zachara, John M.; Freshley, Mark D.; Last, George V.; Peterson, Robert E.; Bjornstad, Bruce N.
2012-11-01
The 300 Area uranium groundwater plume in the 300-FF-5 Operable Unit is residual from past discharge of nuclear fuel fabrication wastes to a number of liquid (and solid) disposal sites. The source zones in the disposal sites were remediated by excavation and backfilled to grade, but sorbed uranium remains in deeper, unexcavated vadose zone sediments. In spite of source term removal, the groundwater plume has shown remarkable persistence, with concentrations exceeding the drinking water standard over an area of approximately 1 km². The plume resides within a coupled vadose zone, groundwater, river zone system of immense complexity and scale. Interactions between geologic structure, the hydrologic system driven by the Columbia River, groundwater-river exchange points, and the geochemistry of uranium contribute to persistence of the plume. The U.S. Department of Energy (DOE) recently completed a Remedial Investigation/Feasibility Study (RI/FS) to document characterization of the 300 Area uranium plume and plan for beginning to implement proposed remedial actions. As part of the RI/FS document, a conceptual model was developed that integrates knowledge of the hydrogeologic and geochemical properties of the 300 Area and controlling processes to yield an understanding of how the system behaves and the variables that control it. Recent results from the Hanford Integrated Field Research Challenge site and the Subsurface Biogeochemistry Scientific Focus Area Project funded by the DOE Office of Science were used to update the conceptual model and provide an assessment of key factors controlling plume persistence.
Network inference using asynchronously updated kinetic Ising Model
Zeng, Hong-Li; Alava, Mikko; Mahmoudi, Hamed
2010-01-01
Network structures are reconstructed from dynamical data by naive mean field (nMF) and Thouless-Anderson-Palmer (TAP) approximations, respectively. For the TAP approximation, we use two methods to reconstruct the network: (a) an iteration method; (b) casting the inference formula into a set of cubic equations and solving them directly. We investigate inference of the asymmetric Sherrington-Kirkpatrick (S-K) model using asynchronous updates. The solutions of the sets of cubic equations depend on the temperature T in the S-K model, and a critical temperature Tc is found around 2.1. For T > Tc there is only one real root, while for T < Tc there are three real roots. The iteration method is convergent only if the cubic equations have three real solutions. The two methods give the same results when the iteration method is convergent. Compared to nMF, TAP is somewhat better at low temperatures, but approaches the same performance as the temperature increases. Both methods behave better for longer data lengths, and where improvement arises, it is more pronounced for TAP.
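Since the convergence criterion in this record hinges on the number of real roots of a cubic, a small helper for counting them is a natural building block (a generic illustration, not the paper's inference code):

```python
import numpy as np

def real_roots(coeffs, tol=1e-9):
    """Return the real roots of a polynomial given by its
    coefficients (highest degree first). For the cubics arising from
    the TAP inference formula, three real roots signal the regime in
    which the iteration method converges."""
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < tol].real)
```

For example, x^3 - x has three real roots while x^3 + x has only one, the two qualitative cases distinguished by the critical temperature above.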
Foothills model forest grizzly bear study : project update
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
This report updates a five-year study launched in 1999 to ensure the continued healthy existence of grizzly bears in west-central Alberta by integrating their needs into land management decisions. The objective was to gather better information and to develop computer-based maps and models regarding grizzly bear migration, habitat use and response to human activities. The study area covers 9,700 square km in west-central Alberta where 66 to 147 grizzly bears exist. During the first 3 field seasons, researchers captured and radio-collared 60 bears. Researchers at the University of Calgary used remote sensing tools and satellite images to develop grizzly bear habitat maps. Collaborators at the University of Washington used trained dogs to find bear scat, which was analyzed for DNA, stress levels and reproductive hormones. Resource Selection Function models are being developed by researchers at the University of Alberta to identify bear locations and to see how habitat is influenced by vegetation cover and oil, gas, forestry and mining activities. The health of the bears is being studied by researchers at the University of Saskatchewan and the Canadian Cooperative Wildlife Health Centre. The study has already advanced the scientific knowledge of grizzly bear behaviour. Preliminary results indicate that grizzlies continue to find mates, reproduce, gain weight and establish dens. These are all good indicators of a healthy population. Most bear deaths have been related to poaching. The study will continue for another two years. 1 fig.
An Updating Method for Structural Dynamics Models with Uncertainties
Directory of Open Access Journals (Sweden)
B. Faverjon
2008-01-01
One challenge in the numerical simulation of industrial structures is model validation based on experimental data. Among the indirect or parametric methods available, one is based on the “mechanical” concept of the constitutive relation error estimator, introduced in order to quantify the quality of finite element analyses. In the case of uncertain measurements obtained from a family of quasi-identical structures, parameters need to be modeled randomly. In this paper, we consider the case of a damped structure modeled with stochastic variables. Polynomial chaos expansion and reduced bases are used to solve the stochastic problems involved in the calculation of the error.
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
… in a four-stage transport model related to different variable distributions (to be used in a Monte Carlo simulation procedure), assignment procedures and levels of congestion, at both the link and the network level. The analysis used as a case study the Næstved model, referring to the Danish town of Næstved … the uncertainty propagation pattern over time specific for key model outputs becomes strategically important.
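The Monte Carlo propagation of parameter uncertainty described in this record can be illustrated on a single link with a BPR volume-delay function. All distributions and numbers below are assumptions for illustration, not values from the Næstved model:

```python
import random
import statistics

def bpr_travel_time(t0, v, c, a, b):
    # BPR volume-delay function, a standard ingredient of four-stage models
    return t0 * (1.0 + a * (v / c) ** b)

def mc_uncertainty(n=10000, seed=0):
    """Monte Carlo propagation of parameter uncertainty through a
    single-link travel time: sample the uncertain BPR parameters,
    run the model per draw, and summarize the output distribution."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a = rng.normalvariate(0.15, 0.03)   # uncertain BPR alpha
        b = rng.normalvariate(4.0, 0.5)     # uncertain BPR beta
        out.append(bpr_travel_time(10.0, 900.0, 1000.0, a, b))
    return statistics.mean(out), statistics.stdev(out)
```

In a full four-stage model the same scheme is wrapped around the complete demand-assignment chain, which is what makes response-surface shortcuts attractive.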
Finite element model updating using computational intelligence techniques
Marwala, Tshilidzi
2010-01-01
Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned to better reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation m...
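As a toy illustration of the swarm-based updating strategies mentioned in this record, a minimal particle swarm can recover the stiffness of a single-DOF analytical model from a "measured" natural frequency. All names, coefficients and values here are hypothetical, and the single-parameter problem stands in for a full FEM:

```python
import math
import random

def natural_freq(k, m=1.0):
    # single-DOF analytical model: omega = sqrt(k / m)
    return math.sqrt(k / m)

def pso_update_stiffness(target_omega, n_particles=20, iters=60, seed=3):
    """Minimal particle swarm search for the stiffness whose model
    frequency matches the measured one (toy FEM updating)."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.1, 10.0) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]

    def cost(k):
        return (natural_freq(k) - target_omega) ** 2

    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (0.7 * vs[i]                                   # inertia
                     + 1.5 * rng.random() * (pbest[i] - xs[i])     # cognitive
                     + 1.5 * rng.random() * (gbest - xs[i]))       # social
            xs[i] = max(0.01, xs[i] + vs[i])   # keep stiffness positive
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest
```

Replacing the one-line `cost` with a frequency/mode-shape residual from a real FEM gives the global-optimization updating setting the book discusses.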
Development of a cyber-physical experimental platform for real-time dynamic model updating
Song, Wei; Dyke, Shirley
2013-05-01
Model updating procedures are traditionally performed off-line. With the significant recent advances in embedded systems and the related real-time computing capabilities, online or real-time, model updating can be performed to inform decision making and controller actions. The applications for real-time model updating are mainly in the areas of (i) condition diagnosis and prognosis of engineering systems; and (ii) control systems that benefit from accurate modeling of the system plant. Herein, the development of a cyber-physical real-time model updating experimental platform, including real-time computing environment, model updating algorithm, hardware architecture and testbed, is described. Results from two challenging experimental implementations are presented to illustrate the performance of this cyber-physical platform in achieving the goal of updating nonlinear systems in real-time. The experiments consider typical nonlinear engineering systems that exhibit hysteresis. Among the available algorithms capable of identification of such complex nonlinearities, the unscented Kalman filter (UKF) is selected for these experiments as an effective method to update nonlinear dynamic system models under realistic conditions. The implementation of the platform is discussed for successful completion of these experiments, including required timing constraints and overall evaluation of the system.
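The hysteretic nonlinearity targeted by these experiments is the Bouc-Wen model, which evolves an internal variable z alongside the displacement. A minimal explicit-Euler sketch (parameter values are arbitrary, not those identified on the testbed):

```python
import math

def bouc_wen_z_step(z, xdot, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """One explicit Euler step of the Bouc-Wen hysteretic variable:
    zdot = A*xdot - beta*|xdot|*|z|^(n-1)*z - gamma*xdot*|z|^n."""
    zdot = (A * xdot
            - beta * abs(xdot) * abs(z) ** (n - 1.0) * z
            - gamma * xdot * abs(z) ** n)
    return z + dt * zdot
```

Under cyclic loading z traces bounded hysteresis loops (|z| stays below (A/(beta+gamma))^(1/n)); estimating A, beta, gamma and n online from measured response is the job handed to the unscented Kalman filter in the platform described above.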
Quantum biological channel modeling and capacity calculation.
Djordjevic, Ivan B
2012-12-10
Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There have been many attempts to explain the structure of the genetic code and the transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determining the quantum biological channel capacity is still open. To solve these problems, we construct the operator-sum representation of the biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself, as it represents an imperfect storage of genetic information, (ii) replication errors introduced during the DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against the corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance for future study of quantum DNA error correction, developing a quantum mechanical model of aging, developing quantum mechanical models for tumors/cancer, and studying intracellular dynamics in general.
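The operator-sum (Kraus) representation underlying the channel construction in this record can be illustrated on a qubit depolarizing channel. The paper's channel acts on codon basekets rather than qubits, so this is only a demonstration of the formalism:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators of a qubit depolarizing channel with error
    probability p; they satisfy the completeness relation
    sum_k K^dagger K = I required of any operator-sum representation."""
    return [np.sqrt(1 - p) * I2,
            np.sqrt(p / 3) * X,
            np.sqrt(p / 3) * Y,
            np.sqrt(p / 3) * Z]

def apply_channel(kraus, rho):
    # rho -> sum_k K rho K^dagger
    return sum(K @ rho @ K.conj().T for K in kraus)
```

Replacing these four operators with ones built from codon basekets, one per noise process (storage, replication, transcription, translation), gives the structure of the biological channel model described above.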
Quantum Biological Channel Modeling and Capacity Calculation
Directory of Open Access Journals (Sweden)
Ivan B. Djordjevic
2012-12-01
Full Text Available Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There have been many attempts to explain the structure of the genetic code and the transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the determination of quantum biological channel capacity remains an open problem. To solve these problems, we construct the operator-sum representation of the biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during the DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against the corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards the future study of quantum DNA error correction, developing a quantum mechanical model of aging, developing quantum mechanical models for tumors/cancer, and the study of intracellular dynamics in general.
Milky Way Mass Models for Orbit Calculations
Irrgang, Andreas; Tucker, Evan; Schiefelbein, Lucas
2013-01-01
Studying the trajectories of objects like stars, globular clusters or satellite galaxies in the Milky Way allows one to trace the dark matter halo, but requires reliable models of its gravitational potential. Realistic, yet simple and fully analytical models have already been presented in the past. However, improved as well as new observational constraints have become available in the meantime, calling for a recalibration of the respective model parameters. Three widely used model potentials are revisited. By a simultaneous least-squares fit to the observed rotation curve, the in-plane proper motion of Sgr A*, the local mass/surface density and the velocity dispersion in Baade's window, the parameters of the potentials are brought up to date. The mass at large radii - and thus in particular that of the dark matter halo - is hereby constrained by imposing that the most extreme halo blue horizontal-branch star known has to be bound to the Milky Way. The Galactic mass models are tuned to yield a very good match to recent observat...
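As a sketch of the kind of fully analytical mass model being recalibrated, the circular-velocity curve of a three-component potential (Miyamoto-Nagai disk, Plummer bulge, NFW halo) can be computed as below; all parameter values are illustrative placeholders, not the fitted values from the paper:

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, M_disk=6.5e10, a=3.0, b=0.28, M_bulge=1e10, c=0.25,
           rho0=1e7, rs=16.0):
    """Circular velocity (km/s) in the Galactic plane for a toy
    Miyamoto-Nagai disk + Plummer bulge + NFW halo potential."""
    # Miyamoto-Nagai disk, evaluated at z = 0
    v2_disk = G * M_disk * r_kpc**2 / (r_kpc**2 + (a + b)**2)**1.5
    # Plummer bulge
    v2_bulge = G * M_bulge * r_kpc**2 / (r_kpc**2 + c**2)**1.5
    # NFW halo: enclosed mass M(<r) = 4 pi rho0 rs^3 [ln(1+x) - x/(1+x)]
    x = r_kpc / rs
    m_halo = 4 * math.pi * rho0 * rs**3 * (math.log(1 + x) - x / (1 + x))
    v2_halo = G * m_halo / r_kpc
    return math.sqrt(v2_disk + v2_bulge + v2_halo)

print(round(v_circ(8.0), 1))  # roughly flat-curve velocities near the Sun
```

Orbit integration for a halo star would then use the gradient of the same three potentials as the acceleration field.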
Renormalization-group calculation of excitation properties for impurity models
Yoshida, M.; Whitaker, M. A.; Oliveira, L. N.
1990-05-01
The renormalization-group method developed by Wilson to calculate thermodynamical properties of dilute magnetic alloys is generalized to allow the calculation of dynamical properties of many-body impurity Hamiltonians. As a simple illustration, the impurity spectral density for the resonant-level model (i.e., the U=0 Anderson model) is computed. As a second illustration, for the same model, the longitudinal relaxation rate for a nuclear spin coupled to the impurity is calculated as a function of temperature.
Experimental Studies on Finite Element Model Updating for a Heated Beam-Like Structure
Directory of Open Access Journals (Sweden)
Kaipeng Sun
2015-01-01
Full Text Available An experimental study was conducted on the identification procedure for time-varying modal parameters and the finite element model updating technique of a beam-like thermal structure in both steady and unsteady high-temperature environments. An improved time-varying autoregressive method was first proposed to extract the instantaneous natural frequencies of the structure in the unsteady high-temperature environment. Based on the identified modal parameters, a finite element model of the structure was then updated by using a Kriging meta-model and an optimization-based finite element model updating method. The temperature-dependent parameters to be updated were expressed as low-order polynomials of the temperature increase, and the finite element model updating problem was solved by updating several coefficients of the polynomials. The experimental results demonstrated the effectiveness of the time-varying modal parameter identification method and showed that the instantaneous natural frequencies of the updated model tracked the trends of the measured values with high accuracy.
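The core idea behind extracting an instantaneous natural frequency with a (time-varying) autoregressive model can be sketched for a single data window; the AR(2) fit below is a simplified stand-in for the improved method in the paper:

```python
import numpy as np

def ar2_frequency(x, dt):
    """Estimate the dominant frequency of a (locally) sinusoidal signal by
    least-squares fitting an AR(2) model x[n] = a1*x[n-1] + a2*x[n-2];
    for a pure sinusoid a1 = 2*cos(2*pi*f*dt), so f follows from a1.
    A sliding-window version of this fit tracks a time-varying frequency."""
    A = np.column_stack([x[1:-1], x[:-2]])          # regressors x[n-1], x[n-2]
    a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
    return np.arccos(np.clip(a1 / 2.0, -1.0, 1.0)) / (2 * np.pi * dt)

dt = 1e-3
t = np.arange(0, 1, dt)
f_hat = ar2_frequency(np.sin(2 * np.pi * 5.0 * t), dt)
print(round(f_hat, 2))  # ≈ 5.0 Hz
```

For a noiseless sinusoid the AR(2) recurrence is exact, so the least-squares fit recovers the frequency essentially to machine precision; in the heated-structure setting the window length trades tracking speed against noise sensitivity.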
Variance analysis for model updating with a finite element based subspace fitting approach
Gautier, Guillaume; Mevel, Laurent; Mencik, Jean-Mathieu; Serra, Roger; Döhler, Michael
2017-07-01
Recently, a subspace fitting approach has been proposed for vibration-based finite element model updating. The approach makes use of subspace-based system identification, where the extended observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating the model-based observability matrix with the estimated one, by using a single set of experimental data. Hence, the updated finite element model only reflects this single test case. However, estimates from vibration measurements are inherently exposed to uncertainty due to unknown excitation, measurement noise and finite data length. In this paper, a covariance estimation procedure for the updated model parameters is proposed, which propagates the data-related covariance to the updated model parameters by considering a first-order sensitivity analysis. In particular, this propagation is performed through each iteration step of the updating minimization problem, by taking into account the covariance between the updated parameters and the data-related quantities. Simulated vibration signals are used to demonstrate the accuracy and practicability of the derived expressions. Furthermore, an application is shown on experimental data of a beam.
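The first-order covariance propagation described above amounts to the delta method: the data-related covariance is mapped through the sensitivity (Jacobian) of the updated parameters. A minimal numerical sketch, with an invented 2x3 sensitivity matrix, is:

```python
import numpy as np

def propagate_covariance(J, sigma_data):
    """First-order (delta-method) propagation: if theta ≈ theta0 + J*(d - d0),
    then cov(theta) = J * cov(d) * J^T.  J is the sensitivity of the updated
    parameters with respect to the data-related quantities."""
    return J @ sigma_data @ J.T

# Toy example: 2 updated parameters depending on 3 noisy data quantities
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, -0.5]])
sigma_data = 0.01 * np.eye(3)       # i.i.d. measurement uncertainty
sigma_theta = propagate_covariance(J, sigma_data)
print(np.round(sigma_theta, 4))
```

In the paper's iterative setting the Jacobian itself changes at every minimization step, so this propagation is carried through each iteration rather than applied once.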
Update rules and interevent time distributions: Slow ordering vs. no ordering in the Voter Model
Fernández-Gracia, Juan; Miguel, M San
2011-01-01
We introduce a general methodology of update rules accounting for arbitrary interevent time distributions in simulations of interacting agents. In particular, we consider update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random and scale-free networks with an update probability inversely proportional to the persistence, that is, the time since the last event. We find that in the thermodynamic limit, at variance with standard updates, the system orders slowly. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and we observe that the mean time to reach the absorbing state might not be well defined.
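A minimal simulation of a persistence-dependent update rule in a fully connected voter model might look as follows; this is a simplified version of the paper's rule, using the global step count as the clock:

```python
import random

def voter_persistence(n=20, steps=200000, seed=1):
    """Fully connected voter model in which an agent picked at random is
    actually updated with probability 1/(persistence + 1), i.e. inversely
    proportional to the time since its last state change."""
    random.seed(seed)
    state = [random.randint(0, 1) for _ in range(n)]
    last_change = [0] * n
    for t in range(1, steps + 1):
        i = random.randrange(n)
        # update probability decays with the agent's persistence
        if random.random() < 1.0 / (t - last_change[i] + 1):
            j = random.randrange(n)
            if state[j] != state[i]:
                state[i] = state[j]
                last_change[i] = t
        if len(set(state)) == 1:      # absorbing (consensus) state reached
            return t
    return None

print(voter_persistence())  # steps to consensus, or None if not reached
```

Because long-persisting agents update rarely, ordering is much slower than under the standard (exponential interevent time) voter update, consistent with the slow ordering reported above.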
Spread and Quote-Update Frequency of the Limit-Order Driven Sergei Maslov Model
Institute of Scientific and Technical Information of China (English)
QIU Tian; CHEN Guang
2007-01-01
We perform numerical simulations of the limit-order driven Sergei Maslov (SM) model and investigate the probability distribution and autocorrelation function of the bid-ask spread S and the quote-update frequency U. For the probability distribution, the model successfully reproduces the power-law decay of the spread and the exponential decay of the quote-update frequency. For the autocorrelation function, both the spread and the quote-update frequency of the model decay by a power law, which is consistent with the empirical study. We obtain the power-law exponent 0.54 for the spread, which is in good agreement with the real financial market.
Detailed opacity calculations for stellar models
Pain, Jean-Christophe; Gilleron, Franck
2016-10-01
We present the state of the art of precise spectral opacity calculations, illustrated by stellar applications. The essential role of laboratory experiments to check the quality of the computed data is underlined. We review some X-ray and XUV laser and Z-pinch photo-absorption measurements as well as X-ray emission spectroscopy experiments of hot dense plasmas produced by ultra-high-intensity laser interaction. The measured spectra are systematically compared with the fine-structure opacity code SCO-RCG. Focus is put on iron, due to its crucial role in the understanding of asteroseismic observations of Beta Cephei-type and Slowly Pulsating B stars, as well as in the Sun. For instance, in Beta Cephei-type stars (which should not be confused with Cepheid variables), the iron-group opacity peak excites acoustic modes through the kappa-mechanism. A particular attention is paid to the higher-than-predicted iron opacity measured on Sandia's Z facility at solar interior conditions (boundary of the convective zone). We discuss some theoretical aspects such as orbital relaxation, electron collisional broadening, ionic Stark effect, oscillator-strength sum rules, photo-ionization, or the ``filling-the-gap'' effect of highly excited states.
Research on the iterative method for model updating based on the frequency response function
Institute of Scientific and Technical Information of China (English)
Wei-Ming Li; Jia-Zhen Hong
2012-01-01
A model reduction technique is usually employed in the model updating process. In this paper, a new model updating method, named the cross-model cross-frequency response function (CMCF) method, is proposed, and a new iterative method associating the model updating method with the model reduction technique is investigated. The new model updating method utilizes the frequency response function to avoid the modal analysis process, and it does not need to pair or scale the measured and analytical frequency response functions, which can greatly increase the number of equations and updating parameters. Based on the traditional iterative method, a correction term, related to the errors resulting from replacing the reduction matrix of the experimental model with that of the finite element model, is added in the new iterative method. Comparisons between the traditional iterative method and the proposed iterative method are shown through model updating examples of solar panels; both iterative methods combine the CMCF method with the succession-level approximate reduction technique. Results show the effectiveness of the CMCF method and the proposed iterative method.
Model calculation of thermal conductivity in antiferromagnets
Energy Technology Data Exchange (ETDEWEB)
Mikhail, I.F.I., E-mail: ifi_mikhail@hotmail.com; Ismail, I.M.M.; Ameen, M.
2015-11-01
A theoretical study is given of thermal conductivity in antiferromagnetic materials. The study has the advantage that the three-phonon interactions as well as the magnon-phonon interactions have been represented by model operators that preserve the important properties of the exact collision operators. A new expression for thermal conductivity has been derived that involves the same terms obtained in our previous work in addition to two new terms. These two terms represent the conservation and quasi-conservation of wavevector that occur in the three-phonon Normal and Umklapp processes, respectively. They give appreciable contributions to the thermal conductivity and lead to excellent quantitative agreement with the experimental measurements of the antiferromagnet FeCl{sub 2}. - Highlights: • The Boltzmann equations of phonons and magnons in antiferromagnets have been studied. • Model operators have been used to represent the magnon-phonon and three-phonon interactions. • The models possess the same important properties as the exact operators. • A new expression for the thermal conductivity has been derived. • The results showed a good quantitative agreement with the experimental data of FeCl{sub 2}.
Execution model for limited ambiguity rules and its application to derived data update
Energy Technology Data Exchange (ETDEWEB)
Chen, I.M.A. [Lawrence Berkeley National Lab., CA (United States); Hull, R. [Univ. of Colorado, Boulder, CO (United States); McLeod, D. [Univ. of Southern California, Los Angeles, CA (United States)
1995-12-01
A novel execution model for rule application in active databases is developed and applied to the problem of updating derived data in a database represented using a semantic, object-based database model. The execution model is based on the use of 'limited ambiguity rules' (LARs), which permit disjunction in rule actions. The execution model essentially performs a breadth-first exploration of alternative extensions of a user-requested update. Given an object-based database scheme, both integrity constraints and specifications of derived classes and attributes are compiled into a family of limited ambiguity rules. A theoretical analysis shows that the approach is sound: the execution model returns all valid 'completions' of a user-requested update, or terminates with an appropriate error notification. The complexity of the approach in connection with derived data update is considered. 42 refs., 10 figs., 3 tabs.
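The breadth-first exploration of alternative completions can be sketched with a toy rule set; the state representation and rule encoding below are invented for illustration and are much simpler than the paper's object-based scheme:

```python
from collections import deque

def complete_update(base, rules, constraint):
    """Breadth-first exploration of alternative completions of an update.
    Each rule is (guard, [alternative actions]) -- the list of actions is
    the disjunction permitted by limited ambiguity rules.  A state with no
    applicable rule is a completion; it is valid if the integrity
    constraint holds."""
    frontier, valid = deque([base]), []
    while frontier:
        state = frontier.popleft()
        fired = False
        for guard, alternatives in rules:
            if guard(state):
                fired = True
                for action in alternatives:       # explore each disjunct
                    frontier.append(action(dict(state)))
                break
        if not fired and constraint(state):
            valid.append(state)
    return valid

# Rule: when a == 1 and b is unset, set b to 1 OR 2; constraint: a + b even
rules = [(lambda s: s.get("a") == 1 and "b" not in s,
          [lambda s: {**s, "b": 1}, lambda s: {**s, "b": 2}])]
print(complete_update({"a": 1}, rules, lambda s: (s["a"] + s["b"]) % 2 == 0))
# -> [{'a': 1, 'b': 1}]  (the b=2 branch violates the constraint)
```

The real execution model additionally detects the case where no valid completion exists and reports it as an error instead of returning an empty set silently.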
An inner-outer nonlinear programming approach for constrained quadratic matrix model updating
Andretta, M.; Birgin, E. G.; Raydan, M.
2016-01-01
The Quadratic Finite Element Model Updating Problem (QFEMUP) concerns updating a symmetric second-order finite element model so that it remains symmetric and the updated model reproduces a given set of desired eigenvalues and eigenvectors by replacing the corresponding ones from the original model. Taking advantage of the special structure of the constraint set, it is first shown that the QFEMUP can be formulated as a suitable constrained nonlinear programming problem. Using this formulation, a method based on successive optimizations is then proposed and analyzed. To prevent spurious modes (eigenvectors) from appearing in the frequency range of interest (eigenvalues) after the model has been updated, additional constraints based on a quadratic Rayleigh quotient are dynamically included in the constraint set. A distinct practical feature of the proposed method is that it can be implemented by computing only a few eigenvalues and eigenvectors of the associated quadratic matrix pencil.
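Checking which eigenvalues of the updated model fall in the frequency range of interest requires solving the quadratic eigenproblem (lam^2*M + lam*C + K)v = 0. A standard companion-linearization sketch (not the paper's successive-optimization algorithm) is:

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Eigenvalues of the quadratic pencil (lam^2*M + lam*C + K)v = 0 via
    the companion linearization z = [v; lam*v].  Assumes M is invertible."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

# Undamped scalar sanity check: m=1, c=0, k=4  ->  eigenvalues +/- 2j
lams = quadratic_eigenvalues(np.array([[1.0]]), np.array([[0.0]]),
                             np.array([[4.0]]))
print(lams)  # eigenvalues ±2j
```

The 2n x 2n linearized problem has exactly the eigenvalues of the quadratic pencil, which is why only a few of them need to be computed to verify the spurious-mode constraints.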
Uncertainty quantification of voice signal production mechanical model and experimental updating
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons: the first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly, and the second is related to the construction of the likelihood function. In general, it is predefined using the known pdf. Here, it is
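The Bayes update of a uniform prior on a tension-like parameter can be illustrated with a one-dimensional grid computation; the frequency model f(q) and noise level below are invented placeholders, not the paper's voice-production model:

```python
import numpy as np

# Grid-based Bayes update of a uniform prior on a tension-like parameter q,
# assuming (hypothetically) the fundamental frequency behaves like
# f(q) = 100*sqrt(q) Hz and the measured f0 carries Gaussian noise.
q = np.linspace(0.5, 2.0, 601)
prior = np.full_like(q, 1.0 / (q[-1] - q[0]))           # uniform prior pdf
f_model = 100.0 * np.sqrt(q)
f_obs, sigma = 120.0, 2.0
likelihood = np.exp(-0.5 * ((f_obs - f_model) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * (q[1] - q[0])            # normalize the pdf
q_map = q[np.argmax(posterior)]
print(round(q_map, 3))  # close to (120/100)^2 = 1.44
```

In the paper the likelihood is not a closed-form Gaussian but is estimated from the Monte Carlo realizations of the fundamental frequency; the grid-update mechanics stay the same.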
DEFF Research Database (Denmark)
Hansen, Lisbeth S.; Borup, Morten; Møller, A.;
2011-01-01
the performance of the updating procedure for flow forecasting. Measured water levels in combination with rain gauge input are used as basis for the evaluation. When compared to simulations without updating, the results show that it is possible to obtain an improvement in the 20 minute forecast of the water level...... to eliminate some of the unavoidable discrepancies between model and reality. The latter can partly be achieved by using the commercial tool MOUSE UPDATE, which is capable of inserting measured water levels from the system into the distributed, physically based MOUSE model. This study evaluates and documents...
Shell model calculations of 109Sb in the sdgh shell
Dikmen, E.; Novoselsky, A.; Vallieres, M.
2001-12-01
The energy spectra of the antimony isotope 109Sb in the sdgh shell are calculated in the nuclear shell model approach by using the CD-Bonn nucleon-nucleon interaction. The modified Drexel University parallel shell model code (DUPSM) was used for the calculations, with a maximum Hamiltonian dimension of 762,253 at 5.14% sparsity. The energy levels are compared to recent experimental results. The calculations were done on the Cyborg Parallel Cluster System at Drexel University.
[Calculation of parameters in forest evapotranspiration model].
Wang, Anzhi; Pei, Tiefan
2003-12-01
Forest evapotranspiration is an important component not only of the water balance, but also of the energy balance. Simulating forest evapotranspiration accurately is in great demand for the development of forest hydrology and forest meteorology, and is also a theoretical basis for the management and utilization of water resources and the forest ecosystem. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanism model for estimating forest evapotranspiration, based on the aerodynamic principle and the energy balance equation. Using the data measured by the Routine Meteorological Measurement System and the Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum phi_m, and stability function for heat phi_h were ascertained. The displacement height of the study site was equal to 17.8 m, close to the mean canopy height, and the functions of phi_m and phi_h changing with the gradient Richardson number Ri were constructed.
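The role of the displacement height d in the aerodynamic part of such a model can be sketched with the neutral logarithmic wind profile; only d = 17.8 m comes from the study, the friction velocity and roughness length below are illustrative:

```python
import math

def wind_speed(z, u_star=0.4, d=17.8, z0=1.8, k=0.4):
    """Neutral-stability logarithmic wind profile above a forest canopy,
    u(z) = (u*/k) * ln((z - d)/z0), with the displacement height d = 17.8 m
    reported in the study.  Under non-neutral conditions the stability
    functions phi_m, phi_h enter as integrated corrections to this profile."""
    if z <= d + z0:
        raise ValueError("height must be above d + z0")
    return (u_star / k) * math.log((z - d) / z0)

for z in (25.0, 40.0, 60.0):
    print(z, round(wind_speed(z), 2))
```

Because d is close to the canopy height, measurement heights only a few meters above the canopy sit very low on the log profile, which is why tower-based gradient measurements are needed to fit phi_m and phi_h.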
Swartjes F; ECO
2003-01-01
Twenty scenarios, differing with respect to land use, soil type and contaminant, formed the basis for calculating human exposure from soil contaminants with the use of models contributed by seven European countries (one model per country). Here, the human exposures to children and children
Active Magnetic Bearing Rotor Model Updating Using Resonance and MAC Error
Directory of Open Access Journals (Sweden)
Yuanping Xu
2015-01-01
Full Text Available Modern control techniques can improve the performance and robustness of a rotor active magnetic bearing (AMB) system. Since those control methods usually rely on system models, it is important to obtain a precise rotor AMB analytical model. However, the interference fits and shrink effects of the rotor AMB introduce inaccuracy into the final system model. In this paper, an experiment-based model updating method is proposed to improve the accuracy of the finite element (FE) model used in a rotor AMB system. Modelling error is minimized by applying the numerical optimization Nelder-Mead simplex algorithm to properly adjust the FE model parameters. Both the resonance frequency errors and the modal assurance criterion (MAC) errors are minimized simultaneously, to account for the rotor natural frequencies as well as for the mode shapes. Verification of the updated rotor model is performed by comparing the experimental and analytical frequency responses. The close agreement demonstrates the effectiveness of the proposed model updating methodology.
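The updating scheme described above, Nelder-Mead on a combined resonance-frequency and MAC objective, can be sketched on a toy 2-DOF model; the structure and the "experimental" data below are fabricated for illustration:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

def modes(k1, k2):
    """Natural frequencies and mode shapes of a toy 2-DOF spring-mass chain
    with unit masses; abs() guards against tiny negative round-off."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w2, phi = eigh(K, np.eye(2))
    return np.sqrt(np.abs(w2)), phi

def mac(p, q):
    """Modal assurance criterion between two mode-shape vectors."""
    return (p @ q) ** 2 / ((p @ p) * (q @ q))

# "Experimental" data fabricated from the true parameters k1=2, k2=1
f_exp, phi_exp = modes(2.0, 1.0)

def objective(x):
    f, phi = modes(*x)
    freq_err = np.sum(((f - f_exp) / f_exp) ** 2)
    mac_err = sum(1.0 - mac(phi[:, i], phi_exp[:, i]) for i in range(2))
    return freq_err + mac_err         # frequencies and mode shapes together

res = minimize(objective, x0=[1.5, 1.5], method="Nelder-Mead")
print(np.round(res.x, 3))  # should recover approximately [2, 1]
```

The MAC term is what distinguishes this objective from a pure frequency fit: it penalizes mode-shape mismatch and is invariant to the sign and scaling of the eigenvectors.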
Directory of Open Access Journals (Sweden)
Lisbet Sneftrup Hansen
2014-07-01
Full Text Available There is a growing requirement to generate more precise model simulations and forecasts of flows in urban drainage systems in both offline and online situations. Data assimilation tools are hence needed to make it possible to include system measurements in distributed, physically-based urban drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation, and then evaluates and documents the performance of this particular updating procedure for flow forecasting. A hypothetical case study and synthetic observations are used to illustrate how the Update method works and affects downstream nodes. A real case study in a 544 ha urban catchment furthermore shows that it is possible to improve the 20-min forecast of water levels in an updated node and the three-hour forecast of flow through a downstream node, compared to simulations without updating. Deterministic water level updating produces better forecasts when implemented in large networks with slow flow dynamics and with measurements from upstream basins that contribute significantly to the flow at the forecast location.
A methodology for constructing the calculation model of scientific spreadsheets
de Vos, Ans; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.
2015-01-01
Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are not able to fully understand how the research results are calculated, or trace them back to the underlying spreadsheets. This paper proposes a methodology for semi-automatically deriving the calcu...
Modeling huge sound sources in a room acoustical calculation program
DEFF Research Database (Denmark)
Christensen, Claus Lynge
1999-01-01
A room acoustical model capable of modeling point sources, line sources, and surface sources is presented. Line and surface sources are modeled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces of the room. Point sources are modeled using a hybrid calculation...... method combining this ray-tracing method with image source modeling. With these three source types it is possible to model huge and complex sound sources in industrial environments. Compared to a calculation with only point sources, the use of extended sound sources is shown to improve the agreement...
Modeling Large sound sources in a room acoustical calculation program
DEFF Research Database (Denmark)
Christensen, Claus Lynge
1999-01-01
A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining...... this ray-tracing method with Image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms....
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listing, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electric sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with ≤ 10-length filter.
Inherently irrational? A computational model of escalation of commitment as Bayesian Updating.
Gilroy, Shawn P; Hantula, Donald A
2016-06-01
Monte Carlo simulations were performed to analyze the degree to which two-, three- and four-step learning histories of losses and gains correlated with escalation and persistence in extended extinction (continuous loss) conditions. Simulated learning histories were randomly generated at varying lengths and compositions, and warranted probabilities were determined using Bayesian Updating methods. Bayesian Updating predicted instances where particular learning sequences were more likely to engender escalation and persistence under extinction conditions. All simulations revealed greater rates of escalation and persistence in the presence of heterogeneous (e.g., both wins and losses) lag sequences, with substantially increased rates of escalation when lags composed predominantly of losses were followed by wins. These methods were then applied to human investment choices in earlier experiments. The Bayesian Updating models corresponded with data obtained from these experiments. These findings suggest that Bayesian Updating can be utilized as a model for understanding how and when individual commitment may escalate and persist despite continued failures.
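A minimal version of Bayesian Updating over a win/loss lag sequence, using a Beta-Bernoulli model (an assumption for illustration; the paper does not specify this exact prior), is:

```python
def bayes_win_probability(history, a=1.0, b=1.0):
    """Beta-Bernoulli Bayesian updating over a lag sequence of wins ('W')
    and losses ('L'): starting from a Beta(a, b) prior, the posterior mean
    after each event is the warranted probability of a further win."""
    track = []
    for outcome in history:
        if outcome == "W":
            a += 1.0
        else:
            b += 1.0
        track.append(a / (a + b))     # posterior predictive P(next = W)
    return track

print(bayes_win_probability("LLW"))  # losses first, then a late win
```

Note how a single win after a run of losses pulls the warranted probability sharply upward (0.25 to 0.4 in the example), which is the mechanism behind the increased escalation rates reported for loss-dominated lags followed by wins.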
Liu, Yang; Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure successful implementation of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters.
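A Kriging-style surrogate of a frequency-versus-parameter response can be sketched with a Gaussian-kernel interpolator; this is a simplified stand-in for a full Kriging predictor (no trend function or nugget term), and the sampled response below is hypothetical:

```python
import numpy as np

def fit_rbf_surrogate(x_train, y_train, length=0.5):
    """Gaussian-kernel interpolator used as a stand-in for a Kriging
    predictor of, e.g., natural frequency versus a stiffness parameter.
    Solves K w = y for the kernel weights."""
    K = np.exp(-((x_train[:, None] - x_train[None, :]) / length) ** 2)
    w = np.linalg.solve(K, y_train)

    def predict(x):
        k = np.exp(-((x - x_train) / length) ** 2)
        return k @ w

    return predict

# Samples of a hypothetical frequency(stiffness) response, f = sqrt(k)/(2*pi)
x = np.linspace(0.5, 2.0, 8)
y = np.sqrt(x) / (2 * np.pi)
surrogate = fit_rbf_surrogate(x, y)
print(round(float(surrogate(1.0)), 4))  # interpolates the sampled response
```

In the two-level method the training samples come from the condensed (CMS) model, so each optimization iteration queries the cheap surrogate instead of re-solving the full FEM eigenproblem.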
Directory of Open Access Journals (Sweden)
Yang Liu
2014-01-01
Full Text Available Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure successful implementation of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters.
Innovative Product Design Based on Customer Requirement Weight Calculation Model
Institute of Scientific and Technical Information of China (English)
Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang
2010-01-01
In the processes of product innovation and design, it is important for designers to find and capture the customer's focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and the Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distance weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the instance of titanium sponge production, the customer requirement weight calculation model is validated. Through the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.
Modeling Conservative Updates in Multi-Hash Approximate Count Sketches
2012-01-01
Multi-hash-based count sketches are fast and memory efficient probabilistic data structures that are widely used in scalable online traffic monitoring applications. Their accuracy significantly improves with an optimization, called conservative update, which is especially effective when the aim is to discriminate a relatively small number of heavy hitters in a traffic stream consisting of an extremely large number of flows. Despite its widespread application, a thorough u...
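The conservative-update optimization is easy to state in code: on each increment, only the counters currently at the minimum estimate for the key are raised, which never undercounts but inflates counters less than the plain update. A self-contained sketch:

```python
import random

class CountMinSketch:
    """Multi-hash count sketch with optional conservative update."""

    def __init__(self, width=256, depth=4, conservative=True, seed=7):
        rnd = random.Random(seed)
        self.width, self.depth = width, depth
        self.conservative = conservative
        self.salts = [rnd.getrandbits(32) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _cells(self, key):
        # one hashed cell per row, salted so the rows are independent-ish
        return [(r, hash((self.salts[r], key)) % self.width)
                for r in range(self.depth)]

    def add(self, key):
        cells = self._cells(key)
        est = min(self.rows[r][c] for r, c in cells)
        for r, c in cells:
            # conservative update: only raise counters at the minimum
            if not self.conservative or self.rows[r][c] == est:
                self.rows[r][c] += 1

    def query(self, key):
        return min(self.rows[r][c] for r, c in self._cells(key))

cms = CountMinSketch()
for _ in range(100):
    cms.add("heavy")
for i in range(500):
    cms.add(f"flow-{i % 50}")
print(cms.query("heavy"))  # >= 100; equals 100 unless collisions inflate it
```

The estimate is still one-sided (never below the true count), which is what makes the structure suitable for heavy-hitter detection among a very large number of flows.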
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg; Devin A. Steuhm
2011-09-01
, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger-than-acceptable discrepancy between the measured excess core reactivity and a calculated value based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
Towards cost-sensitive adaptation: when is it worth updating your predictive model?
Zliobaite, Indre; Budka, Marcin; Stahl, Frederic
2015-01-01
Our digital universe is rapidly expanding, more and more daily activities are digitally recorded, data arrives in streams, it needs to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with the new incoming data, have been developed. The majority of those algorithms focus on improving the predictive performance and assume that model update is always desired as soon as possib...
DEFF Research Database (Denmark)
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due...... to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon...... the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either...
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data......When an online runoff model is updated from system measurements the requirements to the precipitation estimates change. Using rain gauge data as precipitation input there will be a displacement between the time where the rain intensity hits the gauge and the time where the rain hits the actual...... is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple timearea model and historic rain series...
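The simple time-area model used for such comparisons amounts to convolving the rain series with the catchment's time-area curve; a minimal sketch with illustrative names:

```python
def time_area_runoff(rain, time_area):
    """Simple time-area runoff model: the runoff hydrograph is the rain
    series convolved with the catchment's (discretized) time-area curve,
    so a time displacement in the rain input shifts the whole response."""
    n = len(rain)
    out = [0.0] * (n + len(time_area) - 1)
    for t, r in enumerate(rain):
        for k, a in enumerate(time_area):
            out[t + k] += r * a
    return out
```

Because the model is linear, total runoff volume equals total rain times the summed time-area curve, regardless of any time displacement of the input.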
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Calculation of Al-Zn diagram from central atoms model
Institute of Scientific and Technical Information of China (English)
Anonymous
1999-01-01
A slightly modified central atoms model was proposed. The probabilities of various clusters with the central atoms and their nearest neighboring shells can be calculated without the assumption that the energy parameter in the central atoms model is proportional to the number of other atoms i (referred to the central atom). A parameter Pα is proposed in this model, which equals the reciprocal of the activity coefficient of a component; therefore, the new model can be easily understood. With this model, the Al-Zn phase diagram and its thermodynamic properties were calculated, and the results coincide with the experimental data.
New Calculations in Dirac Gaugino Models: Operators, Expansions, and Effects
Carpenter, Linda M
2015-01-01
In this work we calculate important one loop SUSY-breaking parameters in models with Dirac gauginos, which are implied by the existence of heavy messenger fields. We find that these SUSY-breaking effects are all related by a small number of parameters, thus the general theory is tightly predictive. In order to make the most accurate analyses of one loop effects, we introduce calculations using an expansion in SUSY breaking messenger mass, rather than relying on postulating the forms of effective operators. We use this expansion to calculate one loop contributions to gaugino masses, non-holomorphic SM adjoint masses, new A-like and B-like terms, and linear terms. We also test the Higgs potential in such models, and calculate one loop contributions to the Higgs mass in certain limits of R-symmetric models, finding a very large contribution in many regions of the $\\mu$-less MSSM, where Higgs fields couple to standard model adjoint fields.
Modeling and Calculator Tools for State and Local Transportation Resources
Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy was proposed by combining an improved response surface model (IRSM) and an advanced Monte Carlo (MC) method based on experimental static tests, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on the Latin hypercube sampling method. Then the SMU procedure was presented with an experimental static test for a complex structure. The SMUs of a simply-supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model since its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; (3) the advanced MC method noticeably decreases the number of samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
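The Latin hypercube sampling step underlying the advanced MC method can be sketched as follows (a generic stratified sampler, not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each parameter range is split into
    n_samples equal strata, and each stratum is hit exactly once,
    which covers the parameter space with far fewer samples than
    plain Monte Carlo."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # One random point per stratum, then shuffle the stratum order.
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][d] = lo + points[i] * (hi - lo)
    return samples
```

Each column of the result is a permutation of one point per stratum, so the marginal coverage of every parameter is uniform by construction.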
Hanson, D.; Waters, T. P.; Thompson, D. J.; Randall, R. B.; Ford, R. A. J.
2007-01-01
Finite element model updating traditionally makes use of both resonance and mode shape information. The mode shape information can also be obtained from anti-resonance frequencies, as has been suggested by a number of researchers in recent years. Anti-resonance frequencies have the advantage over mode shapes that they can be much more accurately identified from measured frequency response functions. Moreover, anti-resonance frequencies can, in principle, be estimated from output-only measurements on operating machinery. The motivation behind this paper is to explore whether the availability of anti-resonances from such output-only techniques would add genuinely new information to the model updating process, which is not already available from using only resonance frequencies. This investigation employs two-degree-of-freedom models of a rigid beam supported on two springs. It includes an assessment of the contribution made to the overall anti-resonance sensitivity by the mode shape components, and also considers model updating through Monte Carlo simulations, experimental verification of the simulation results, and application to a practical mechanical system, in this case a petrol generator set. Analytical expressions are derived for the sensitivity of anti-resonance frequencies to updating parameters such as the ratio of spring stiffnesses, the position of the centre of gravity, and the beam's radius of gyration. These anti-resonance sensitivities are written in terms of natural frequency and mode shape sensitivities so their relative contributions can be assessed. It is found that the contribution made by the mode shape sensitivity varies considerably depending on the value of the parameters, contributing no new information for significant combinations of parameter values. The Monte Carlo simulations compare the performance of the update achieved when using information from: the resonances only; the resonances and either anti-resonance; and the resonances and both
Modelling Large sound sources in a room acoustical calculation program
DEFF Research Database (Denmark)
Christensen, Claus Lynge
1999-01-01
A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining...... this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms....
COMPARISON BETWEEN MODELS FOR CALCULATION OF INDUSTRIAL HOT ROLLING LOADS
Antonio Augusto Gorni; Marcos Roberto Soares da Silva
2012-01-01
An evaluation is made about the precision of hot strip rolling mill loads at the F1 stand calculated according to the theoretical models of Orowan, Sims, Alexander-Ford, Orowan-Pascoe, Ekelund and Tselikov in comparison to real values got for carbon-manganese steels. In the deterministic approach, without any fit of the calculated values to real data, Orowan, Sims and Alexander-Models show best levels of precision, as expected from the information got in the literature. However, i...
Calculation of Thermodynamic Parameters for Freundlich and Temkin Isotherm Models
Institute of Scientific and Technical Information of China (English)
ZHANG Zeng-qiang; ZHANG Yi-ping; et al.
1999-01-01
Derivation of the Freundlich and Temkin isotherm models from the kinetic adsorption/desorption equations was carried out to calculate their thermodynamic equilibrium constants. The calculation formulae of three thermodynamic parameters of isothermal adsorption processes for the Freundlich and Temkin isotherm models, the standard molar Gibbs free energy change, the standard molar enthalpy change and the standard molar entropy change, were deduced according to the relationship between the thermodynamic equilibrium constants and the temperature.
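Under the standard relationships (ΔG° = -RT ln K, the van 't Hoff equation for ΔH°, and ΔS° = (ΔH° - ΔG°)/T), the three parameters can be computed from equilibrium constants at two temperatures; a minimal sketch:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def thermo_params(K1, T1, K2, T2):
    """Standard molar Gibbs energy, enthalpy and entropy changes (at T1)
    from equilibrium constants measured at two temperatures, using
    dG = -R*T*ln K and the integrated van 't Hoff equation."""
    dG1 = -R * T1 * math.log(K1)                         # dG° = -RT ln K
    dH = R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)   # van 't Hoff
    dS1 = (dH - dG1) / T1                                # dG° = dH° - T dS°
    return dG1, dH, dS1
```

The van 't Hoff step assumes ΔH° is approximately constant over the temperature interval, the usual approximation for such two-point estimates.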
Effective UV radiation from model calculations and measurements
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
COMPARISON BETWEEN MODELS FOR CALCULATION OF INDUSTRIAL HOT ROLLING LOADS
Directory of Open Access Journals (Sweden)
Antonio Augusto Gorni
2012-09-01
An evaluation is made of the precision of hot strip rolling mill loads at the F1 stand calculated according to the theoretical models of Orowan, Sims, Alexander-Ford, Orowan-Pascoe, Ekelund and Tselikov, in comparison to real values obtained for carbon-manganese steels. In the deterministic approach, without any fit of the calculated values to real data, the Orowan, Sims and Alexander-Ford models show the best levels of precision, as expected from the information in the literature. However, in the semi-empirical approach, after a linear fit between calculated values and real data, the Tselikov and Ekelund models show better adequacy to the industrial data, a fact that can be attributed to more significant errors occurring in the sub-models of temperature, tribology and hot strength than in the rolling load models. In turn, neural network models show the best levels of precision, which makes the adoption of this approach very attractive.
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\\theta$ and $\\psi$ when MLE $\\hat \\psi$ is a function of MLE $\\hat \\theta.$ To reduce $\\hat \\psi$'s bias, the likelihood equation to be solved for $\\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\\hat \\psi_{MU},$ often reduces either totally or partially $\\hat \\psi$'s bias when estimating shape parameter $\\psi.$ For the Pareto model $\\hat...
Ambient modal testing of a double-arch dam: the experimental campaign and model updating
García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco
2016-09-01
A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out using the modal parameters obtained from an operational modal analysis. That is, the system modal dampings, natural frequencies and mode shapes have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was initially created. Then, a testing campaign was carried out at the most significant test points using high-sensitivity accelerometers, wirelessly synchronized. Afterwards, the model updating of the initial model was done using a Monte Carlo based approach in order to match it to the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the coupled arch dam-reservoir-foundation system.
Institute of Scientific and Technical Information of China (English)
LI Dian-qing; ZHANG Sheng-kun
2004-01-01
The classical probability theory cannot effectively quantify the parameter uncertainty in the probability of detection. Furthermore, the conventional data analytic method and expert judgment method fail to handle the problem of model uncertainty updating with the information from nondestructive inspection. To overcome these disadvantages, a Bayesian approach was proposed to quantify the parameter uncertainty in the probability of detection. Furthermore, the formulae of the multiplication factors to measure the statistical uncertainties in the probability of detection following the Weibull distribution were derived. A Bayesian updating method was applied to compute the posterior probabilities of model weights and the posterior probability density functions of the distribution parameters of the probability of detection. A total probability model method was proposed to analyze the problem of multi-layered model uncertainty updating. This method was then applied to the problem of multi-layered corrosion model uncertainty updating for ship structures. The results indicate that the proposed method is very effective in analyzing the problem of multi-layered model uncertainty updating.
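The Bayesian update of discrete model weights at the core of such model uncertainty updating reduces to posterior ∝ prior × likelihood; a minimal sketch:

```python
def posterior_model_weights(priors, likelihoods):
    """Bayes update of discrete model weights: each posterior weight is
    proportional to the prior weight times the likelihood of the
    inspection data under that model, renormalized to sum to one."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]
```

For example, two equally weighted models with likelihood ratio 2:1 end up with posterior weights 2/3 and 1/3.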
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating.
Lee, Young-Joo; Cho, Soojin
2016-03-02
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed.
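Step (2) of the procedure above, calibrating structural parameters so the analytical modal properties match the identified ones, can be sketched for a toy two-storey shear frame with a single stiffness parameter (a minimal illustration, not the paper's bridge model):

```python
import math

def model_freqs(k, m):
    """Natural frequencies (rad/s) of a 2-storey shear frame with equal
    storey stiffness k and floor mass m; the stiffness matrix
    [[2k, -k], [-k, k]] / m has closed-form eigenvalues (3 +/- sqrt(5))/2 * k/m."""
    lam = [(3 - math.sqrt(5)) / 2 * k / m, (3 + math.sqrt(5)) / 2 * k / m]
    return [math.sqrt(x) for x in lam]

def update_stiffness(measured, m, k_lo, k_hi, tol=1e-9):
    """Bisect on the stiffness parameter so the analytical first frequency
    matches the identified one (frequency is monotonically increasing in k)."""
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if model_freqs(k_mid, m)[0] < measured:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)
```

Real FE updating replaces the scalar bisection with a multi-parameter optimization over frequencies and mode shapes, but the calibration idea is the same.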
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50 year old reinforced concrete footbridge has been converted to a full scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading and an FE model of the undamaged bridge was created, and updated, under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude dependant characteristics. This creates a difficult updating process, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
Institute of Scientific and Technical Information of China (English)
LI Hong-wei; YANG He; SUN Zhi-chao
2006-01-01
Computational stability and efficiency are the key problems for the numerical modeling of crystal plasticity, which evidently limit its development and application in finite element (FE) simulation. Since implicit iterative algorithms are inefficient and have difficulty determining initial values, an explicit incremental-update algorithm for the elasto-viscoplastic constitutive relation was developed in the intermediate frame using the second Piola-Kirchhoff (P-K) stress and Green strain. The increments of stress and slip resistance were solved by a calculation loop of sets of linear equations. The reorientation of the crystal as well as the elastic strain can be obtained from a polar decomposition of the elastic deformation gradient. The user material subroutine VUMAT was developed to combine the crystal elasto-viscoplastic constitutive model with ABAQUS/Explicit. Numerical studies were performed on a cubic upsetting model with OFHC material (FCC crystal). The comparison of the numerical results with those obtained by the implicit iterative algorithm and those from experiments demonstrates that the explicit algorithm is reliable. Furthermore, the effects of material anisotropy, the rate sensitivity coefficient (RSC) and loading speed on the deformation were studied. The numerical studies indicate that the explicit algorithm is suitable and efficient for large deformation analyses where anisotropy due to texture is important.
Dynamic finite element model updating of prestressed concrete continuous box-girder bridge
Institute of Scientific and Technical Information of China (English)
Lin Xiankun; Zhang Lingmi; Guo Qintao; Zhang Yufeng
2009-01-01
The dynamic finite element model (FEM) of a prestressed concrete continuous box-girder bridge, called the Tongyang Canal Bridge, is built and updated based on the results of ambient vibration testing (AVT) using a real-coded accelerating genetic algorithm (RAGA). The objective functions are defined based on natural frequency and modal assurance criterion (MAC) metrics to evaluate the updated FEM. Two objective functions are defined to fully account for the relative errors and standard deviations of the natural frequencies and MAC between the AVT results and the updated FEM predictions. The dynamically updated FEM of the bridge can better represent its structural dynamics and serve as a baseline in long-term health monitoring, condition assessment and damage identification over the service life of the bridge.
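The modal assurance criterion used in the objective functions is a simple normalized correlation between mode shape vectors; a minimal sketch:

```python
def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an experimental
    mode shape: |phi_a . phi_e|^2 / (|phi_a|^2 * |phi_e|^2).
    Returns 1.0 for perfectly correlated shapes, 0.0 for orthogonal ones."""
    num = sum(a * e for a, e in zip(phi_a, phi_e)) ** 2
    den = sum(a * a for a in phi_a) * sum(e * e for e in phi_e)
    return num / den
```

Because the MAC is scale-invariant, a mode shape and any scalar multiple of it give a MAC of exactly 1, which is why it is paired with frequency residuals during updating.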
Bakir, Pelin Gundes; Reynders, Edwin; De Roeck, Guido
2007-08-01
The use of changes in dynamic system characteristics to detect damage has received considerable attention during the last years. Within this context, FE model updating technique, which belongs to the class of inverse problems in classical mechanics, is used to detect, locate and quantify damage. In this study, a sensitivity-based finite element (FE) model updating scheme using a trust region algorithm is developed and implemented in a complex structure. A damage scenario is applied on the structure in which the stiffness values of the beam elements close to the beam-column joints are decreased by stiffness reduction factors. A worst case and complex damage pattern is assumed such that the stiffnesses of adjacent elements are decreased by substantially different stiffness reduction factors. The objective of the model updating is to minimize the differences between the eigenfrequency and eigenmodes residuals. The updating parameters of the structure are the stiffness reduction factors. The changes of these parameters are determined iteratively by solving a nonlinear constrained optimization problem. The FE model updating algorithm is also tested in the presence of two levels of noise in simulated measurements. In all three cases, the updated MAC values are above 99% and the relative eigenfrequency differences improve substantially after model updating. In cases without noise and with moderate levels of noise; detection, localization and quantification of damage are successfully accomplished. In the case with substantially noisy measurements, detection and localization of damage are successfully realized. Damage quantification is also promising in the presence of high noise as the algorithm can still predict 18 out of 24 damage parameters relatively accurately in that case.
Study on neural network model for calculating subsidence factor
Institute of Scientific and Technical Information of China (English)
GUO Wen-bing; ZHANG Jie
2007-01-01
The major factors influencing the subsidence factor were comprehensively analyzed. Then an artificial neural network (ANN) model for calculating the subsidence factor was set up. A large amount of data from observation stations in China was collected and used as learning and training samples to train and test the model. The calculated results of the ANN model and the observed values were compared and analyzed. The results demonstrate that many factors can be considered in this model, and the subsidence factor calculated by the ANN model is more precise and closer to the observed values. It can satisfy the needs of engineering.
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
Energy Technology Data Exchange (ETDEWEB)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu [Laboratory of Genetics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715 (United States)
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
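A minimal sketch of Lazy Updating grafted onto Gillespie's direct method, assuming mass-action kinetics and a single hub species (the reaction encoding and threshold rule here are illustrative, not the authors' Sorting Direct Method implementation):

```python
import math
import random

def lazy_direct_method(x0, rates, stoich, hub_index, hub_reactions,
                       t_end, threshold=0.05, seed=0):
    """Gillespie direct method where propensities of hub-dependent
    reactions are kept cached (lazy) and only refreshed once the hub
    count drifts past a relative threshold, trading a little accuracy
    for fewer propensity updates."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0

    def propensity(j):
        # Mass-action propensity: rate constant times product of reactant counts.
        a = rates[j]
        for i, v in enumerate(stoich[j][0]):
            for _ in range(v):
                a *= x[i]
        return a

    a = [propensity(j) for j in range(len(rates))]
    hub_ref = x[hub_index]  # hub count the cached propensities assume
    while t < t_end:
        a0 = sum(a)
        if a0 <= 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0
        r, j = rng.random() * a0, 0
        while j < len(a) - 1 and r > a[j]:
            r -= a[j]
            j += 1
        reactants, products = stoich[j]
        for i in range(len(x)):
            x[i] += products[i] - reactants[i]
        # Eagerly refresh non-hub reactions and the fired reaction itself;
        # other hub-dependent propensities stay cached.
        for k in range(len(a)):
            if k not in hub_reactions or k == j:
                a[k] = propensity(k)
        # Threshold crossed: refresh all hub-dependent propensities at once.
        if hub_ref and abs(x[hub_index] - hub_ref) / hub_ref > threshold:
            for k in hub_reactions:
                a[k] = propensity(k)
            hub_ref = x[hub_index]
    return x, t
```

For a birth-death system where the species itself is the hub, the stoichiometry can be given as `[((0,), (1,)), ((1,), (0,))]` (reactant counts, product counts); the speedup argument only matters when many reactions depend on the hub.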
Updated Peach Bottom Model for MELCOR 1.8.6: Description and Comparisons
Energy Technology Data Exchange (ETDEWEB)
Robb, Kevin R [ORNL
2014-09-01
A MELCOR 1.8.5 model of the Peach Bottom Unit 2 or 3 has been updated for MELCOR 1.8.6. Primarily, this update involved modification of the lower head modeling. Three additional updates were also performed. First, a finer nodalization of the containment wet well was employed. Second, the pressure differential used by the logic controlling the safety relief valve actuation was modified. Finally, an additional stochastic failure mechanism for the safety relief valves was added. Simulation results from models with and without the modifications were compared. All the analysis was performed by comparing key figures of merit from simulations of a long-term station blackout scenario. This report describes the model changes and the results of the comparisons.
A MODEL OF FUZZY CALCULATION OF THE CONSTUCTION COST
Institute of Scientific and Technical Information of China (English)
SHAO Liang-shan; YE Jing-lou; LI Dong
1998-01-01
An overview of the development of approaches to construction cost and price forecasting since the 1950s is given. First, second and third generation models can be identified, but they all have shortcomings. This paper puts forward a new model, a fuzzy calculation model, based on a large amount of data from finished projects. Through actual application, it is proved that the model is accurate and fast in calculating construction cost.
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
Modeling of overhead transmission lines for lightning overvoltage calculations
Energy Technology Data Exchange (ETDEWEB)
Martinez-Velasco, J.A.; Castro-Aranda, F.
2010-10-15
This article discussed the modelling of overhead transmission lines for lightning overvoltage calculations. Such a model must include those parts of the line that get involved when a lightning return stroke hits a wire or a tower and that have some influence on the voltage developed across insulator strings. Modelling guidelines differ depending on whether the goal is to estimate overvoltages or to determine arrester energy stresses. Modelling guidelines were summarized for each component, including shield wires and phase conductors; transmission line towers; insulators; phase voltages at the instant lightning hits the line; surge arresters; and the lightning stroke. The applied Monte Carlo procedure was summarized. For line span models, a constant-parameter model generally suffices when the goal is to calculate overvoltages across insulators or to obtain the flashover rate, but a frequency-dependent parameter model is necessary to estimate the energy discharged by arresters. The model selected for representing towers can have some influence on both flashover rates and arrester energy stresses. The representation of footing impedances is critical for calculating overvoltages and arrester energy stresses, but different modelling techniques produce significantly different results. The models are limited in that the corona effect is not included in the line models, the voltages induced by the electric and magnetic fields of lightning channels to shield wires and phase conductors are neglected, and the footing models are too simple, but they are nonetheless realistic approaches for simulating lightning effects. 2 tabs., 9 figs.
batman: BAsic Transit Model cAlculatioN in Python
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
Function-weighted frequency response function sensitivity method for analytical model updating
Lin, R. M.
2017-09-01
Since the frequency response function (FRF) sensitivity method was first proposed [26], it has become one of the most powerful and practical methods for analytical model updating. Nevertheless, the original formulation of the FRF sensitivity method suffers the limitation that the initial analytical model to be updated should be reasonably close to the final updated model to be sought, due to the first-order approximation implicit in most sensitivity-based methods. Convergence to the correct model is not guaranteed when large modelling errors exist, and blind application often leads to optimal solutions far from those truly sought. This paper examines all the important numerical characteristics of the original FRF sensitivity method, including frequency data selection, numerical balance and convergence performance. To further improve the applicability of the method to cases of large modelling errors, a novel function-weighted sensitivity method is developed. The new method shows much superior convergence performance even in the presence of large modelling errors. Extensive numerical case studies based on a mass-spring system and a GARTEUR structure have been conducted and very encouraging results have been achieved. The effect of measurement noise has been examined, and the method works reasonably well in the presence of measurement uncertainties. The new method removes the restriction that the modelling error magnitude be of second order in Euclidean norm compared with that of the system matrices, thereby making it a truly general method applicable to most practical model updating problems.
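The derivative at the heart of any FRF sensitivity method follows from H(ω) = (K − ω²M + iωC)⁻¹: for a parameter p entering the stiffness matrix, ∂H/∂p = −H (∂K/∂p) H. A small sketch on a 2-DOF mass-spring-damper (toy numbers, not the paper's function-weighted scheme) verifies the analytic sensitivity against a finite difference:

```python
# FRF of a 2-DOF mass-spring-damper and its stiffness sensitivity,
# checked against a finite difference (illustrative example only).
def frf_2dof(k1, w, m=1.0, k2=80.0, c=0.4):
    # dynamic stiffness Z(w) = K - w^2 M + i w C, then H = Z^(-1)
    z11 = k1 + k2 - w * w * m + 1j * w * c
    z12 = -k2
    z22 = k2 - w * w * m + 1j * w * c
    det = z11 * z22 - z12 * z12
    return [[z22 / det, -z12 / det],
            [-z12 / det, z11 / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

w, k1 = 5.0, 100.0
H = frf_2dof(k1, w)

# analytic sensitivity: dH/dk1 = -H (dK/dk1) H, with dK/dk1 = e1 e1^T
dK = [[1.0, 0.0], [0.0, 0.0]]
dH = [[-v for v in row] for row in matmul(matmul(H, dK), H)]

# finite-difference check of the analytic formula
eps = 1e-6
Hp = frf_2dof(k1 + eps, w)
for i in range(2):
    for j in range(2):
        fd = (Hp[i][j] - H[i][j]) / eps
        assert abs(dH[i][j] - fd) < 1e-9
```

The first-order nature of this derivative is exactly why the original method needs the starting model to be close to the true one.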
Red Giant Oscillations: Stellar Models and Mode Frequency Calculations
DEFF Research Database (Denmark)
Jendreieck, A.; Weiss, A.; Aguirre, Victor Silva
2012-01-01
We present preliminary results on modelling KIC 7693833, the most metal-poor red-giant star observed by Kepler so far. From time series spanning several months, global oscillation parameters and individual frequencies were obtained and compared to theoretical calculations. Evolution models ... in radius and of about 2.5 Gyr in age.
Statistical Model Calculations for (n,γ) Reactions
Directory of Open Access Journals (Sweden)
Beard Mary
2015-01-01
Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
MODELING THE EFFECTS OF UPDATING THE INFLUENZA VACCINE ON THE EFFICACY OF REPEATED VACCINATION.
Energy Technology Data Exchange (ETDEWEB)
D. SMITH; A. LAPEDES; ET AL
2000-11-01
The accumulated wisdom is to update the vaccine strain to the expected epidemic strain only when there is at least a 4-fold difference [measured by the hemagglutination inhibition (HI) assay] between the current vaccine strain and the expected epidemic strain. In this study we investigate the effect, on repeat vaccinees, of updating the vaccine when there is a less than 4-fold difference. Methods: Using a computer model of the immune response to repeated vaccination, we simulated updating the vaccine on a 2-fold difference and compared this to not updating the vaccine, in each case predicting the vaccine efficacy in first-time and repeat vaccinees for a variety of possible epidemic strains. Results: Updating the vaccine strain on a 2-fold difference resulted in increased vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged. Conclusions: These results suggest that updating the vaccine strain on a 2-fold difference between the existing vaccine strain and the expected epidemic strain will increase vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged.
Ciuchini, Marco; Mishima, Satoshi; Pierini, Maurizio; Reina, Laura; Silvestrini, Luca
2014-01-01
We present updated global fits of the Standard Model and beyond to electroweak precision data, taking into account recent progress in theoretical calculations and experimental measurements. From the fits, we derive model-independent constraints on new physics by introducing oblique and epsilon parameters, and modified $Zb\\bar{b}$ and $HVV$ couplings. Furthermore, we also perform fits of the scale factors of the Higgs-boson couplings to observed signal strengths of the Higgs boson.
Active control and parameter updating techniques for nonlinear thermal network models
Papalexandris, M. V.; Milman, M. H.
The present article reports on active control and parameter updating techniques for thermal models based on the network approach. Emphasis is placed on applications where radiation plays a dominant role. Examples of such applications are the thermal design and modeling of spacecraft and space-based science instruments. Active thermal control of a system aims to approximate a desired temperature distribution or to minimize a suitably defined temperature-dependent functional. Similarly, parameter updating aims to update the values of certain parameters of the thermal model so that the output approximates a distribution obtained through direct measurements. Both problems are formulated as nonlinear, least-square optimization problems. The proposed strategies for their solution are explained in detail and their efficiency is demonstrated through numerical tests. Finally, certain theoretical results pertaining to the characterization of solutions of the problems of interest are also presented.
Adaptive update using visual models for lifting-based motion-compensated temporal filtering
Li, Song; Xiong, H. K.; Wu, Feng; Chen, Hong
2005-03-01
Motion-compensated temporal filtering is a useful framework for fully scalable video compression schemes. However, when the assumed motion model cannot represent the real motion perfectly, both the temporal high- and low-frequency sub-bands may contain artificial edges, which can reduce coding efficiency, and ghost artifacts appear in the reconstructed video sequence at lower bit rates or under temporal scaling. We propose a new technique based on visual models to mitigate ghosting artifacts in the temporal low-frequency sub-bands. Specifically, we propose content-adaptive update schemes where visual models are used to determine image-dependent upper bounds on the information to be updated. Experimental results show that the proposed algorithm can significantly improve the subjective visual quality of the low-pass temporal frames while coding performance matches or exceeds that of the classical update steps.
Dynamic test and finite element model updating of bridge structures based on ambient vibration
Institute of Scientific and Technical Information of China (English)
2008-01-01
The dynamic characteristics of bridge structures are the basis of structural dynamic response and seismic analysis, and are also an important target of health condition monitoring. In this paper, a three-dimensional finite-element model is first established for a highway bridge over a railroad on No. 312 National Highway. Based on design drawings, the dynamic characteristics of the bridge are studied using finite element analysis and ambient vibration measurements. A set of parameters is then selected based on sensitivity analysis and optimization theory, and the finite element model of the bridge is updated. The numerical and experimental results show that the updating method is simple and effective, that the updated finite element model reflects the dynamic characteristics of the bridge better, and that it can be used to predict the dynamic response under complex external forces. It is also helpful for further damage identification and health condition monitoring.
Calculating osmotic pressure according to nonelectrolyte Wilson nonrandom factor model.
Li, Hui; Zhan, Tingting; Zhan, Xiancheng; Wang, Xiaolan; Tan, Xiaoying; Guo, Yiping; Li, Chengrong
2014-08-01
The osmotic pressure of NaCl solutions was determined by the air humidity in equilibrium (AHE) method. The relationship between the osmotic pressure and the concentration was explored theoretically, and the osmotic pressure was calculated from the concentration according to the nonelectrolyte Wilson nonrandom factor (N-Wilson-NRF) model. The results indicate that the calculated osmotic pressure is comparable to the measured one.
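For context, the ideal-solution baseline that activity-coefficient models such as N-Wilson-NRF refine is the van't Hoff relation π = icRT. A two-line calculation for roughly physiological saline (the numbers are chosen for illustration and are not from the record):

```python
# van't Hoff estimate of osmotic pressure for a dilute NaCl solution
# (ideal baseline only; non-ideality corrections such as N-Wilson-NRF
# matter at higher concentrations)
R = 8.314       # J/(mol*K)
T = 298.15      # K
i = 2           # van't Hoff factor for fully dissociated NaCl
c = 150.0       # mol/m^3 (0.15 mol/L, near physiological saline)

pi = i * c * R * T          # Pa
print(round(pi / 1e5, 2))   # prints 7.44 (bar)
```

Deviations of measured osmotic pressure from this ideal value are what a nonrandom-factor model is fitted to capture.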
Theoretical Model Calculation for d + 8Li Reaction
Institute of Scientific and Technical Information of China (English)
HAN Yin-Lu; GUO Hai-Rui; ZHANG Yue; ZHANG Jing-Shang
2008-01-01
Based on theoretical models for light nuclei, calculations of reaction cross sections and angular distributions for the d+8Li reaction are performed. Since all particle emissions are from the compound nucleus to discrete levels, the angular momentum coupling effect in the pre-equilibrium mechanism is taken into account. The three-body break-up process and the recoil effect are included. The calculated results are compared with existing experimental data.
Adherence of Model Molecules to Silica Surfaces: First Principle Calculations
Nuñez, Matías; Prado, Miguel Oscar
The adherence of the "model molecules" methylene blue and eosin Y (positively and negatively charged, respectively) to crystalline SiO2 surfaces is studied by first-principles calculations at the DFT level. Adsorption energies are calculated which follow the experimental trends obtained elsewhere (Rivera et al., 2013). We study the quantum nature of the electronic charge transfer between the surface and the molecules, showing the localized and delocalized patterns associated with the repulsive and attractive cases, respectively.
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Institute of Scientific and Technical Information of China (English)
He Huan; Chen Zhe; He Cheng; Ni Lei; Chen Guoping
2015-01-01
In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs) to evaluate how consistent the computational and experimental results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
Real-Time Flood Forecasting System Using Channel Flow Routing Model with Updating by Particle Filter
Kudo, R.; Chikamori, H.; Nagai, A.
2008-12-01
A real-time flood forecasting system using a channel flow routing model was developed for runoff forecasting at gauged and ungauged points along river channels. The system is based on a flood runoff model composed of upstream part models, tributary part models and downstream part models. The upstream and tributary part models are lumped rainfall-runoff models, and the downstream part models consist of a lumped rainfall-runoff model for hillslopes adjacent to a river channel and a kinematic flow routing model for the river channel. The flow forecast of this model is updated by Particle filtering of the downstream part model as well as by extended Kalman filtering of the upstream and tributary part models. Particle filtering is a simple and powerful updating algorithm for non-linear and non-Gaussian systems, so it can be easily applied to the downstream part model without complicated linearization. The presented flood runoff model has the advantage of a simpler updating procedure than grid-based distributed models, owing to its smaller number of state variables. The system was applied to the Gono-kawa River Basin in Japan, and the forecasting accuracy of the system with both Particle filtering and extended Kalman filtering was compared with that of the system with extended Kalman filtering alone. In this study, water gauging stations in the basin were divided into two types: reference stations and verification stations. Reference stations were regarded as ordinary water gauging stations, and observed data at these stations were used for calibration and updating of the model. Verification stations were treated as ungauged or arbitrary points, and observed data at these stations were used neither for calibration nor for updating, but only for evaluation of forecasting accuracy. The result confirms that Particle filtering of the downstream part model improves forecasting accuracy of runoff at
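The propagate-weight-resample cycle of a bootstrap particle filter can be sketched for a scalar state observed through a gauge. The toy decay model, noise levels and observation below are invented for illustration; the record's system applies this same cycle to the state of its downstream routing model.

```python
import math
import random

def particle_filter_step(particles, f, h, y_obs, sigma_obs, rng):
    """One bootstrap particle-filter update: propagate, weight, resample.

    particles : list of scalar model states
    f         : state transition including its own process noise
    h         : observation operator (state -> predicted discharge)
    """
    particles = [f(x, rng) for x in particles]            # 1. propagate
    w = [math.exp(-0.5 * ((y_obs - h(x)) / sigma_obs) ** 2)
         for x in particles]                              # 2. weight
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    return rng.choices(particles, weights=w, k=len(particles))  # 3. resample

# toy storage model: linear decay plus noise, observed directly
rng = random.Random(0)
parts = [rng.gauss(5.0, 3.0) for _ in range(500)]
parts = particle_filter_step(parts,
                             f=lambda x, r: 0.9 * x + r.gauss(0.0, 0.5),
                             h=lambda x: x,
                             y_obs=9.0, sigma_obs=1.0, rng=rng)
mean = sum(parts) / len(parts)
assert 7.0 < mean < 10.0   # the ensemble is pulled toward the observation
```

No linearization of `f` or `h` is needed anywhere, which is the advantage over the extended Kalman filter that the abstract highlights.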
Finite element modelling and updating of friction stir welding (FSW) joint for vibration analysis
Directory of Open Access Journals (Sweden)
Zahari Siti Norazila
2017-01-01
Friction stir welding (FSW) of aluminium alloys is widely used in automotive and aerospace applications due to its advanced and lightweight properties. The behaviour of FSW joints plays a significant role in the dynamic characteristics of the structure; because of their complexities and uncertainties, the representation of an accurate finite element model of these joints has become a research issue. In this paper, various finite element (FE) modelling techniques for predicting the dynamic properties of sheet metal joined by friction stir welding are presented. First, nine sets of flat plates of two aluminium alloy series, AA7075 and AA6061, joined by FSW are used. The nine specimen sets were fabricated using various welding parameters. In order to find the optimum FSW plate set, a finite element model using an equivalence technique was developed, and the model was validated using experimental modal analysis (EMA) on the nine specimen sets and finite element analysis (FEA). Three types of modelling were employed in this study: rigid body element Type 2 (RBE2), bar element (CBAR) and spot weld element connector (CWELD). The CBAR element was chosen to represent the weld model for FSW joints due to its accurate prediction of mode shapes; unlike the other weld models, it also contains an updating parameter for weld modelling. Model updating was performed to improve the correlation between EMA and FEA; before updating, a sensitivity analysis was carried out to select the most sensitive updating parameters. After model updating, the total error of the natural frequencies for the CBAR model improved significantly. Therefore, the CBAR element was selected as the most reliable element in FE modelling to represent the FSW weld joint.
Calculation of extreme wind atlases using mesoscale modeling. Final report
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Badger, Jake
This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies, as explained in Section 2. The focus has been on developing a number of new methodologies through numerical modeling and statistical modeling.
Qi, Feng; Tavakol, Vahid; Ocket, Ilja; Xu, Peng; Schreurs, Dominique; Wang, Jinkuan; Nauwelaers, Bart
2010-01-01
Active millimeter wave imaging systems have become a promising candidate for indoor security applications and industrial inspection. However, there is a lack of simulation tools at the system level. We introduce and evaluate two modeling approaches that are applied to active millimeter wave imaging systems. The first approach originates in Fourier optics and concerns the calculation in the spatial frequency domain. The second approach is based on wave propagation and corresponds to calculation in the spatial domain. We compare the two approaches in the case of both rough and smooth objects and point out that the spatial frequency domain calculation may suffer from a large error in amplitude of 50% in the case of rough objects. The comparison demonstrates that the concepts of point-spread function and f-number should be applied with careful consideration in coherent millimeter wave imaging systems. In the case of indoor applications, the near-field effect should be considered, and this is included in the spatial domain calculation.
A new Gibbs sampling based algorithm for Bayesian model updating with incomplete complex modal data
Cheung, Sai Hung; Bansal, Sahil
2017-08-01
Model updating using measured system dynamic response has a wide range of applications in system response evaluation and control, health monitoring, or reliability and risk assessment. In this paper, we are interested in model updating of a linear dynamic system with non-classical damping based on incomplete modal data including modal frequencies, damping ratios and partial complex mode shapes of some of the dominant modes. In the proposed algorithm, the identification model is based on a linear structural model where the mass and stiffness matrix are represented as a linear sum of contribution of the corresponding mass and stiffness matrices from the individual prescribed substructures, and the damping matrix is represented as a sum of individual substructures in the case of viscous damping, in terms of mass and stiffness matrices in the case of Rayleigh damping or a combination of the former. To quantify the uncertainties and plausibility of the model parameters, a Bayesian approach is developed. A new Gibbs-sampling based algorithm is proposed that allows for an efficient update of the probability distribution of the model parameters. In addition to the model parameters, the probability distribution of complete mode shapes is also updated. Convergence issues and numerical issues arising in the case of high-dimensionality of the problem are addressed and solutions to tackle these problems are proposed. The effectiveness and efficiency of the proposed method are illustrated by numerical examples with complex modes.
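The core mechanic of any Gibbs sampler, drawing each block of unknowns from its full conditional given the current values of the rest, can be shown on the smallest possible example: a correlated bivariate normal. This is illustrative only; the record's sampler alternates over structural parameters and complete mode shapes rather than two scalars.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples=20000, burn=1000, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal:
    alternately draw x1 | x2 ~ N(rho*x2, 1 - rho^2) and x2 | x1.
    Each draw uses the exact full conditional, so no accept/reject
    step is needed, the property that makes Gibbs sampling efficient."""
    rng = random.Random(seed)
    x1 = x2 = 0.0
    s = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    out = []
    for i in range(n_samples + burn):
        x1 = rng.gauss(rho * x2, s)
        x2 = rng.gauss(rho * x1, s)
        if i >= burn:
            out.append((x1, x2))
    return out

samples = gibbs_bivariate_normal(rho=0.8)
m1 = sum(a for a, _ in samples) / len(samples)
corr = sum(a * b for a, b in samples) / len(samples)
assert abs(m1) < 0.1            # sample mean near 0
assert abs(corr - 0.8) < 0.08   # empirical correlation near rho
```

As in the paper, the chain explores a joint posterior by cycling through tractable conditionals; high correlation between blocks slows mixing, which is one of the high-dimensionality issues the authors address.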
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
Energy Technology Data Exchange (ETDEWEB)
P. Marnier
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in "EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments" (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b).
A Calculation Model for Corrosion Cracking in RC Structures
Institute of Scientific and Technical Information of China (English)
Xu Gang; Wei Jun; Zhang Keqiang; Zhou Xiwu
2007-01-01
A novel calculation model is proposed for the problem of concrete cover cracking induced by reinforcement corrosion. In this article, the relationship between the corrosion depth of the bar and the thickness of the rust layer is established. By deriving the radial displacement expression of the concrete, a formula for the corrosion depth and corrosion pressure before cracking is proposed. The crack depth of the cover corresponding to the maximum corrosion pressure is deduced; furthermore, the corrosion depth and corrosion pressure at the time of cracking are obtained. Finally, the theoretical model is validated against several experiments, and the calculated values agree well with the experimental results.
Recent Advances in Shell Evolution with Shell-Model Calculations
Utsuno, Yutaka; Tsunoda, Yusuke; Shimizu, Noritaka; Honma, Michio; Togashi, Tomoaki; Mizusaki, Takahiro
2014-01-01
Shell evolution in exotic nuclei is investigated with large-scale shell-model calculations. After showing that the central and tensor forces produce distinctive modes of shell evolution, we present several recent results: (i) evolution of single-particle-like levels in antimony and copper isotopes, (ii) shape coexistence in nickel isotopes understood in terms of configuration-dependent shell structure, and (iii) prediction of the evolution of the recently established $N=34$ magic number towards smaller proton numbers. In each case, large-scale shell-model calculations play an indispensable role in describing the interplay between single-particle character and correlation.
McKemmish, Laura K; Tennyson, Jonathan
2016-01-01
Accurate knowledge of the rovibronic near-infrared and visible spectra of vanadium monoxide (VO) is very important for studies of cool stellar and hot planetary atmospheres. Here, the required ab initio dipole moment and spin-orbit coupling curves for VO are produced. These data form the basis of a new VO line list considering 13 different electronic states and containing over 277 million transitions. Open-shell transition metal diatomics are challenging species to model through ab initio quantum mechanics due to the large number of low-lying electronic states, significant spin-orbit coupling and strong static and dynamic electron correlation. Multi-reference configuration interaction methodologies using orbitals from a complete active space self-consistent-field (CASSCF) calculation are the standard technique for these systems. We use different state-specific or minimal-state CASSCF orbitals for each electronic state to maximise the calculation accuracy. The off-diagonal dipole moment controls the intensity...
Clustering of Parameter Sensitivities: Examples from a Helicopter Airframe Model Updating Exercise
Shahverdi, H.; C. Mares; W. Wang; J. E. Mottershead
2009-01-01
The need for high fidelity models in the aerospace industry has become ever more important as increasingly stringent requirements on noise and vibration levels, reliability, maintenance costs etc. come into effect. In this paper, the results of a finite element model updating exercise on a Westland Lynx XZ649 helicopter are presented. For large and complex structures, such as a helicopter airframe, the finite element model represents the main tool for obtaining accurate models which could pre...
A New Probability of Detection Model for Updating Crack Distribution of Offshore Structures
Institute of Scientific and Technical Information of China (English)
李典庆; 张圣坤; 唐文勇
2003-01-01
Model uncertainty exists in the probability of detection (POD) when inspecting ship structures with nondestructive inspection techniques. Based on a comparison of several existing POD models, a new POD model is proposed for updating the crack size distribution. Furthermore, theoretical derivation shows that most existing POD models are special cases of the new model. The least squares method is adopted for determining the values of the parameters in the new POD model. The new model is also compared with other existing POD models, and the results indicate that it fits the inspection data better. The new POD model is then applied to the problem of crack size updating for offshore structures. The Bayesian updating method is used to analyze the effect of the POD model on the posterior distribution of crack size. The results show that different POD models generate different posterior crack size distributions for offshore structures.
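A common family of POD curves is exponential in crack size. The sketch below fits such a curve to hypothetical inspection data by least squares after linearization; the data, the one-parameter form POD(a) = 1 - exp(-q·a), and the fitting choice are illustrative assumptions, not the paper's new model.

```python
import numpy as np

# Hypothetical inspection data: crack sizes a [mm] and observed
# detection fractions from repeated NDT trials.
a = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
pod_obs = np.array([0.18, 0.35, 0.47, 0.57, 0.72, 0.81])

# One-parameter exponential POD curve: POD(a) = 1 - exp(-q*a).
# Linearizing, ln(1 - POD) = -q*a, so q follows from a least squares
# fit through the origin.
y = np.log(1.0 - pod_obs)
q = -np.sum(a * y) / np.sum(a * a)

pod_fit = 1.0 - np.exp(-q * a)   # fitted detection probabilities
```

In a Bayesian crack-size update, a curve like `pod_fit` would enter the likelihood of each inspection outcome.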
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
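The bilinear stiffness element at the heart of the joint model can be sketched as a simple restoring-force function. This is an elastic bilinear spring only, without the hysteresis, the AMD decomposition, or the simulated annealing used in the paper, and all parameter values are hypothetical.

```python
def bilinear_force(u, k1, k2, u_y):
    """Restoring force of an elastic bilinear spring: stiffness k1 up to
    the transition displacement u_y, stiffness k2 beyond it (no
    hysteresis -- a simplified stand-in for the paper's nonlinear
    joint element)."""
    if abs(u) <= u_y:
        return k1 * u
    sign = 1.0 if u > 0 else -1.0
    return sign * (k1 * u_y + k2 * (abs(u) - u_y))

# Hypothetical joint: k1 = 10, k2 = 2, transition at u_y = 1.
f_elastic = bilinear_force(0.5, 10.0, 2.0, 1.0)   # 5.0
f_yielded = bilinear_force(2.0, 10.0, 2.0, 1.0)   # 12.0
```

During updating, (k1, k2, u_y) would be the unknowns tuned so that the model's instantaneous frequencies and amplitudes match the measured ones.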
Finite element model updating of natural fibre reinforced composite structure in structural dynamics
Directory of Open Access Journals (Sweden)
Sani M.S.M.
2016-01-01
Model updating is a process of adjusting certain parameters of a finite element model in order to reduce the discrepancy between the analytical predictions of the finite element (FE) model and experimental results. Finite element model updating is considered an important field of study, as practical application of the finite element method often shows discrepancy with test results. The aim of this research is to perform a model updating procedure on a composite structure, as well as to improve the presumed geometrical and material properties of the tested composite structure in the finite element prediction. The composite structure concerned in this study is a plate of kenaf fiber reinforced with epoxy. Modal properties (natural frequency, mode shapes, and damping ratio) of the kenaf fiber structure will be determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing will be carried out using the impact hammer test, while normal mode analysis using FEA will be carried out using MSC Nastran/Patran software. Correlation of the data will be carried out before optimizing the data from FEA. Several parameters will be considered and selected for the model updating procedure.
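Correlation between EMA and FEA mode shapes is commonly quantified with the Modal Assurance Criterion (MAC); the abstract does not name the metric, so the following is a generic sketch with made-up mode shape vectors.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape phi_a
    and an experimental mode shape phi_e; 1 means perfectly correlated,
    0 means orthogonal."""
    num = abs(np.dot(phi_a, phi_e)) ** 2
    return num / (np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e))

# Made-up first bending mode of a plate at four measurement points.
phi_fe = np.array([0.31, 0.59, 0.81, 1.00])    # FEA prediction
phi_test = np.array([0.30, 0.62, 0.79, 0.98])  # EMA (impact hammer)
m = mac(phi_fe, phi_test)
```

A MAC near 1 for paired modes is the usual prerequisite before frequency residuals are minimized in the updating step.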
Comparison of Calculation Models for Bucket Foundation in Sand
DEFF Research Database (Denmark)
Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo
The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of the bucket foundation is investigated using three different geotechnical calculation tools: the analytical method of [Ibsen 2001], LimitState:GEO and Plaxis 3D. The study has focused on the resultant bearing capacity of variously embedded foundations in sand. The 2D models, [Ibsen 2001] and LimitState:GEO, can be used for preliminary design because they are fast and yield bearing capacity calculations rather similar to the finite element models of Plaxis 3D. The 2D models and their results are compared to the finite element model in Plaxis 3D in this article.
Comparison of the performance of net radiation calculation models
DEFF Research Database (Denmark)
Kjærsgaard, Jeppe Hvelplund; Cuenca, R H; Martinez-Cob, A
2009-01-01
Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations, and net radiation is commonly estimated from other meteorological parameters. In this study, daily values of net radiation were calculated using three net outgoing long-wave radiation models and compared to measured values. Four meteorological datasets representing two climate regimes, a sub-humid high-latitude environment and a semi-arid mid-latitude environment, were used to test the models when meteorological input data is limited. Model predictions were found to have a higher bias and scatter when using summed calculated hourly time steps compared to using daily input data.
New calculations in Dirac gaugino models: operators, expansions, and effects
Carpenter, Linda M.; Goodman, Jessica
2015-07-01
In this work we calculate important one loop SUSY-breaking parameters in models with Dirac gauginos, which are implied by the existence of heavy messenger fields. We find that these SUSY-breaking effects are all related by a small number of parameters, thus the general theory is tightly predictive. In order to make the most accurate analyses of one loop effects, we introduce calculations using an expansion in SUSY-breaking messenger mass, rather than relying on postulating the forms of effective operators. We use this expansion to calculate one loop contributions to gaugino masses, non-holomorphic SM adjoint masses, new A-like and B-like terms, and linear terms. We also test the Higgs potential in such models, and calculate one loop contributions to the Higgs mass in certain limits of R-symmetric models, finding a very large contribution in many regions of the parameter space where Higgs fields couple to standard model adjoint fields.
Ab initio calculations and modelling of atomic cluster structure
DEFF Research Database (Denmark)
Solov'yov, Ilia; Lyalin, Andrey G.; Greiner, Walter
2004-01-01
A framework for modelling the fusion process of noble gas clusters is presented. We report the striking correspondence of the peaks in the experimentally measured abundance mass spectra with the peaks in the size dependence of the second derivative of the binding energy per atom, calculated for the chain of noble gas clusters of up to 150 atoms.
The role of hand calculations in ground water flow modeling.
Haitjema, Henk
2006-01-01
Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time to learn about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoids unnecessary complexity, improves reliability, and is likely to save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
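Two of the hand calculations alluded to above can be sketched in a few lines. The formulas are the standard textbook ones, Thiem's steady radial flow solution and the characteristic leakage length λ = √(Tc); the parameter values are illustrative.

```python
import math

def characteristic_leakage_length(T, c):
    """lambda = sqrt(T*c) [m]: T is the aquifer transmissivity [m^2/d]
    and c the hydraulic resistance of the leaky layer [d]. Head
    differences between aquifer and surface water die out over a
    distance of a few lambda."""
    return math.sqrt(T * c)

def thiem_drawdown(Q, T, r, R):
    """Steady-state Thiem drawdown [m] at distance r [m] from a well
    pumping Q [m^3/d] in a confined aquifer with transmissivity
    T [m^2/d] and radius of influence R [m]."""
    return Q / (2.0 * math.pi * T) * math.log(R / r)

# Illustrative numbers: T = 500 m^2/d, c = 200 d, Q = 1000 m^3/d.
lam = characteristic_leakage_length(500.0, 200.0)   # ~316 m
s = thiem_drawdown(1000.0, 500.0, 10.0, 1000.0)     # ~1.47 m
```

A quick check like this against a computer model's predicted drawdown is exactly the kind of sanity test the author advocates.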
Wenmackers, Sylvia; Douven, Igor
2014-01-01
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delp...
Energy Technology Data Exchange (ETDEWEB)
Billet, L.; Moine, P. [Electricite de France (EDF), Direction des Etudes et Recherches, 1, Avenue du General-de-Gaulle, BP 408, 92141 Clamart Cedex (France); Aubry, D. [Ecole Centrale de Paris, LMSSM, 92295 Chatenay Malabry Cedex (France)
1997-12-31
In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is the use of non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse sensitivity method is concerned, an error function based on the difference between calculated and measured right-hand eigenmode shapes and between calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a rotor system with two bearings and a disc. (author). 12 refs.
Using radar altimetry to update a routing model of the Zambezi River Basin
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.; Bauer-Gottwein, Peter
2012-01-01
Satellite radar altimetry allows for the global monitoring of lake and river levels. However, the widespread use of altimetry for hydrological studies is limited by the coarse temporal and spatial resolution provided by current altimetric missions, and by the fact that discharge rather than level is needed for hydrological applications. To overcome these limitations, altimetry river levels can be combined with hydrological modeling in a data-assimilation framework. This study focuses on the updating of a river routing model of the Zambezi using river levels from radar altimetry. A hydrological model of the basin was built to simulate the land phase of the water cycle and produce inflows to a Muskingum routing model. River altimetry from the ENVISAT mission was then used to update the storages in the reaches of the Muskingum model using the Extended Kalman Filter. The method showed improvements in modeled...
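The Muskingum routing underlying the updated model can be sketched as a single linear storage step. The K, X and flow values below are illustrative (K = 12 h, dt = 6 h), and the Extended Kalman Filter storage update is omitted.

```python
def muskingum_step(I1, I2, O1, K, X, dt):
    """One step of Muskingum channel routing: returns the outflow O2
    given inflows I1 (previous) and I2 (current), the previous outflow
    O1, storage constant K, weighting factor X, and time step dt
    (K and dt in the same time units)."""
    D = 2.0 * K * (1.0 - X) + dt
    C0 = (dt - 2.0 * K * X) / D
    C1 = (dt + 2.0 * K * X) / D
    C2 = (2.0 * K * (1.0 - X) - dt) / D   # C0 + C1 + C2 == 1
    return C0 * I2 + C1 * I1 + C2 * O1

# Steady flow passes through unchanged; a rising inflow is attenuated.
o_steady = muskingum_step(100.0, 100.0, 100.0, 12.0, 0.2, 6.0)  # 100.0
o_rising = muskingum_step(100.0, 150.0, 100.0, 12.0, 0.2, 6.0)  # ~102.4
```

In the assimilation step, the reach storages implied by these flows would be corrected whenever an ENVISAT level observation becomes available.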
Microscopic Shell Model Calculations for the Fluorine Isotopes
Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Vary, James P.; Shirokov, Andrey M.
2015-10-01
Using a formalism based on the No Core Shell Model (NCSM), we have determined microscopically the core and single-particle energies and the effective two-body interactions that are the input to standard shell model (SSM) calculations. The basic idea is to perform a succession of an Okubo-Lee-Suzuki (OLS) transformation, a NCSM calculation, and a second OLS transformation to a further reduced space, such as the sd-shell, which allows the separation of the many-body matrix elements into an ``inert'' core part plus a few-valence-nucleons calculation. In the present investigation we use this technique to calculate the properties of the nuclides in the Fluorine isotopic chain, using the JISP16 nucleon-nucleon interaction. The obtained SSM input, along with the results of the SSM calculations for the Fluorine isotopes, will be presented. This work was supported in part by TUBITAK-BIDEB, the US DOE, the US NSF, NERSC, and the Russian Ministry of Education and Science.
A pose-based structural dynamic model updating method for serial modular robots
Mohamed, Richard Phillip; Xi, Fengfeng (Jeff); Chen, Tianyan
2017-02-01
A new approach is presented for updating the structural dynamic component models of serial modular robots using experimental data from component tests such that the updated model of the entire robot assembly can provide accurate results in any pose. To accomplish this, a test-analysis component mode synthesis (CMS) model with fixed-free component boundaries is implemented to directly compare measured frequency response functions (FRFs) from vibration experiments of individual modules. The experimental boundary conditions are made to emulate module connection interfaces and can enable individual joint and link modules to be tested in arbitrary poses. By doing so, changes in the joint dynamics can be observed and more FRF data points can be obtained from experiments to be used in the updating process. Because this process yields an overdetermined system of equations, a direct search method with nonlinear constraints on the resonances and antiresonances is used to update the FRFs of the analytical component models. The effectiveness of the method is demonstrated with experimental case studies on an adjustable modular linkage system. Overall, the method can enable virtual testing of modular robot systems without the need to perform further testing on entire assemblies.
Progressive collapse analysis using updated models for alternate path analysis after a blast
Eskew, Edward; Jang, Shinae; Bertolaccini, Kelly
2016-04-01
Progressive collapse is of rising importance within the structural engineering community due to several recent cases. The alternate path method is a design technique to determine the ability of a structure to sustain the loss of a critical element, or elements, and still resist progressive collapse. However, the alternate path method only considers the removal of the critical elements. In the event of a blast, significant damage may occur to nearby members not included in the alternate path design scenarios. To achieve an accurate assessment of the current condition of the structure after a blast or other extreme event, it may be necessary to reduce the strength or remove additional elements beyond the critical members designated in the alternate path design method. In this paper, a rapid model updating technique utilizing vibration measurements is used to update the structural model to represent the real-time condition of the structure after a blast occurs. Based upon the updated model, damaged elements will either have their strength reduced, or will be removed from the simulation. The alternate path analysis will then be performed, but only utilizing the updated structural model instead of numerous scenarios. After the analysis, the simulated response from the analysis will be compared to failure conditions to determine the building's post-event condition. This method has the ability to incorporate damage to noncritical members into the analysis. This paper will utilize numerical simulations based upon a Unified Facilities Criteria (UFC) example structure subjected to an equivalent blast to validate the methodology.
Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model
Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine mode SSA emissions,...
Spatial coincidence modelling, automated database updating and data consistency in vector GIS.
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector-structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter
Towards an integrated workflow for structural reservoir model updating and history matching
Leeuwenburgh, O.; Peters, E.; Wilschut, F.
2011-01-01
A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters, including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by
Shell-model calculations of nuclei around mass 130
Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Odahara, A.
2015-09-01
Shell-model calculations are performed for even-even, odd-mass, and doubly-odd nuclei of Sn, Sb, Te, I, Xe, Cs, and Ba isotopes around mass 130 using the single-particle space made up of valence nucleons occupying the 0g7/2, 1d5/2, 2s1/2, 0h11/2, and 1d3/2 orbitals. The calculated energies and electromagnetic transitions are compared with the experimental data. In addition, several typical isomers in this region are investigated.
Particle-rotor-model calculations in 125I
Indian Academy of Sciences (India)
Hariprakash Sharma; B Sethi; P Banerjee; Ranjana Goswami; R K Bhandari; Jahan Singh
2001-07-01
Recent experimental data on 125I have revealed several interesting structural features. These include the observation of a three-quasiparticle band, prolate and oblate deformed bands, signature inversion in the yrast positive-parity band and identification of the unfavoured h11/2 band showing very large signature splitting. In the present work, particle-rotor-model calculations have been performed for the h11/2 band, using an axially symmetric deformed Nilsson potential. The calculations reproduce the experimental results well and predict a moderate prolate quadrupole deformation of about 0.2 for the band.
Carbon footprint calculation model for the Mexican food equivalent system
Directory of Open Access Journals (Sweden)
Salvador Ruiz Cerrillo
2017-06-01
Introduction: environmental impact through anthropogenic action has contributed to the rapid production of greenhouse gases (GHG); a way to estimate the quantity of these substances is the carbon footprint (CF), and at present there are not enough models for calculating the carbon footprint of foods. Objective: the aim of this study was to design a calculation model for measuring the carbon footprint of the Mexican food equivalent system. Methods: this was a retrospective study; a bibliographic review of original and review articles was carried out in specialized databases, including publications in English and Spanish published from 2000 to 2016. Results: a reference table was proposed for calculating the food carbon footprint of the Mexican food equivalent system through the carbon intensity indicator, which is determined by the grams of carbon dioxide (CO2) equivalent emissions in relation to the energy contribution of each food equivalent. Conclusion: estimating the food carbon footprint remains a challenge; meanwhile, the proposed calculation model is important for estimating GHG production on the way to a more sustainable food system.
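The carbon intensity indicator described above reduces to grams of CO2-equivalent per kilocalorie supplied by each food equivalent. A minimal sketch follows, with made-up emission and energy values rather than the paper's reference table.

```python
def carbon_intensity(g_co2e, kcal):
    """Carbon intensity indicator: grams of CO2-equivalent emitted per
    kilocalorie supplied by one food equivalent."""
    return g_co2e / kcal

# Made-up equivalents: (name, g CO2e per equivalent, kcal per equivalent).
equivalents = [
    ("beef, 30 g", 810.0, 75.0),
    ("beans, 1/2 cup", 60.0, 120.0),
    ("tortilla, 1 piece", 45.0, 70.0),
]
footprint = {name: carbon_intensity(g, kcal) for name, g, kcal in equivalents}
```

Ranking equivalents by this indicator is what lets a meal plan trade emission-heavy items for lower-carbon ones at equal energy.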
A model updating method for hybrid composite/aluminum bolted joints using modal test data
Adel, Farhad; Shokrollahi, Saeed; Jamal-Omidi, Majid; Ahmadian, Hamid
2017-05-01
The aim of this paper is to present a simple and applicable model for predicting the dynamic behavior of bolted joints in hybrid aluminum/composite structures, and its model updating using modal test data. In this regard, building on investigations of bolted joints in metallic structures that led to the joint affected region (JAR) concept published in Shokrollahi and Adel (2016), a doubly connective layer is established in order to simulate the bolted joint interfaces in hybrid structures. Using the proposed model, the natural frequencies of the hybrid bolted joint structure are computed and compared to the modal test results in order to evaluate and verify the new model predictions. Because of differences in the results of the two approaches, the finite element (FE) model is updated based on the genetic algorithm (GA) by minimizing the differences between the analytical model and test results. This is done by identifying the parameters at the JAR, including the isotropic Young's modulus of the metallic substructure and that of the anisotropic composite substructure. The updated model simulates the experimental results more properly than the initial model. Therefore, the proposed model can be used for modal analysis of hybrid joint interfaces in complex and large structures.
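The updating loop can be sketched on a toy one-parameter problem: identify a Young's modulus so that a model frequency matches a "measured" one. A plain random search stands in for the paper's genetic algorithm, and the cantilever geometry, tip mass and target frequency are all hypothetical.

```python
import math
import random

def natural_freq(E, I=8.0e-9, L=0.4, m=0.5):
    """First natural frequency [Hz] of a massless cantilever with tip
    mass m [kg], length L [m], second moment of area I [m^4] and
    Young's modulus E [Pa]; tip stiffness k = 3*E*I/L**3."""
    k = 3.0 * E * I / L ** 3
    return math.sqrt(k / m) / (2.0 * math.pi)

f_test = 55.0  # 'measured' natural frequency [Hz]

def residual(E):
    """Objective: squared frequency mismatch, as minimized by the GA."""
    return (natural_freq(E) - f_test) ** 2

# Plain random search over E (a crude stand-in for the GA).
random.seed(1)
best_E, best_r = None, float("inf")
for _ in range(20000):
    E = random.uniform(1e9, 300e9)
    r = residual(E)
    if r < best_r:
        best_E, best_r = E, r
```

In the paper the unknowns are the JAR moduli of both substructures and the residual spans several natural frequencies, but the structure of the search is the same.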
Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species
Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo
2013-01-01
To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily done by relating de novo the distribution of the species with climatic conditions, with no regard for previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, but proceeding to update them with the variables yielded by climatic models before projecting them to the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggested that the main threat for this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain throughout the 21st century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, considering all the known factors conditioning the species' distribution, instead of building new models that are based on climate change variables only. PMID:23840330
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
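The parameter-estimation idea can be sketched with an EKF on a toy response law with two unknown time-invariant parameters, modeled as a random walk and corrected from noisy "recorded" responses. The law, the noise levels and the true values are all made up, and this toy law happens to be linear in its parameters, so the EKF recursion reduces to recursive least squares; for the paper's hysteretic material laws the Jacobian would carry the nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(theta, x):
    """Toy response law with unknown time-invariant parameters theta:
    a crude stand-in for the FE model's material parameters."""
    return theta[0] * x + theta[1] * x ** 3

def jacobian(theta, x):
    """dh/dtheta: where the EKF linearization enters for laws that are
    nonlinear in theta (here it is parameter-independent)."""
    return np.array([x, x ** 3])

theta_true = np.array([2.0, 0.5])
theta = np.array([1.0, 1.0])   # initial parameter guess
P = np.eye(2)                  # parameter covariance
Q = np.eye(2) * 1e-6           # random-walk process noise
R = 1e-4                       # measurement noise variance

for t in range(200):
    x = 0.1 + 0.05 * (t % 60)                      # excitation samples
    y = h(theta_true, x) + rng.normal(0.0, 0.01)   # 'recorded' response
    # Time update: parameters modeled as a slow random walk ...
    P = P + Q
    # ... then measurement update with the linearized observation.
    H = jacobian(theta, x)
    S = H @ P @ H + R
    K = P @ H / S
    theta = theta + K * (y - h(theta, x))
    P = P - np.outer(K, H @ P)
```

After the pass through the data, `theta` should sit close to the true values (2.0, 0.5); the UKF variant would replace the Jacobian with sigma-point propagation.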
Updating parameters of the chicken processing line model
DEFF Research Database (Denmark)
Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna
2010-01-01
A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows updating of the parameters of the chicken processing line model.
Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update
Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine
2017-01-01
Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
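A bootstrap assessment of fit-index selection can be sketched as follows: refit the candidate models on resamples and record how often each wins. The simulated quadratic population and the two polynomial candidates are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: the population structure is quadratic.
n = 100
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.8 * x ** 2 + rng.normal(0, 1.0, n)

def aic(y, yhat, k):
    """Gaussian AIC: n*log(RSS/n) + 2k, with k free parameters."""
    rss = np.sum((y - yhat) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * k

def fit_poly(x, y, deg):
    return np.polyval(np.polyfit(x, y, deg), x)

# Bootstrap the model selection to assess its stability.
B = 200
wins_quadratic = 0
for _ in range(B):
    idx = rng.integers(0, n, n)
    xb, yb = x[idx], y[idx]
    a1 = aic(yb, fit_poly(xb, yb, 1), 3)  # slope + intercept + sigma
    a2 = aic(yb, fit_poly(xb, yb, 2), 4)
    if a2 < a1:
        wins_quadratic += 1

selection_rate = wins_quadratic / B
```

A selection rate near 1 signals a stable decision; rates near 0.5 flag the selection uncertainty the article is concerned with.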
Updated Results for the Wake Vortex Inverse Model
Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.
2008-01-01
NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
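The run-until-criterion loop described in this abstract can be sketched as a simple derivative-free parameter search. Everything below is an illustrative assumption: the exponential circulation-decay forward model, the parameter names and the starting values stand in for NWRA's SHRAPA forward model and its vortex parameters, which the abstract does not specify in detail.

```python
import math

def forward_model(gamma0, tau, times):
    # Hypothetical forward model: exponential circulation decay with wake age.
    return [gamma0 * math.exp(-t / tau) for t in times]

def inverse_model(observed, times, gamma0=400.0, tau=30.0, tol=1e-6, max_iter=200):
    # Iteratively refine (gamma0, tau) via a coordinate pattern search until
    # the misfit stops improving, mimicking the "iterate until a user-defined
    # criterion is satisfied" loop from the abstract.
    def misfit(g, tau_):
        pred = forward_model(g, tau_, times)
        return sum((p - o) ** 2 for p, o in zip(pred, observed))
    step = 1.0
    best = misfit(gamma0, tau)
    for _ in range(max_iter):
        improved = False
        for dg, dt in ((step, 0), (-step, 0), (0, step), (0, -step)):
            m = misfit(gamma0 + dg, tau + dt)
            if m < best:
                gamma0, tau, best = gamma0 + dg, tau + dt, m
                improved = True
        if not improved:
            step /= 2.0
            if step < tol:
                break
    return gamma0, tau

times = [0, 10, 20, 30, 40]
truth = forward_model(420.0, 25.0, times)     # synthetic "observed" time history
g, t = inverse_model(truth, times)
```

A pattern search is used only to keep the sketch dependency-free; the actual Inverse Model's iteration scheme is not described in the abstract.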
DEFF Research Database (Denmark)
Luczak, Marcin; Manzato, Simone; Peeters, Bart;
2014-01-01
The aim of this work is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure according to the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency difference and modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were implemented to find the values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurements.
Institute of Scientific and Technical Information of China (English)
GUO Qintao; ZHANG Lingmi; TAO Zheng
2008-01-01
Thin-walled components are utilized to absorb the impact energy of a structure. However, the dynamic behavior of such thin-walled structures is highly non-linear, with material, geometry and boundary non-linearities. A model updating and validation procedure is proposed to build an accurate finite element model of a frame structure with a non-linear thin-walled component for dynamic analysis. Design of experiments (DOE) and principal component decomposition (PCD) approaches are applied to extract dynamic features from the non-linear impact response, for correlation of the impact test results with the FE model of the non-linear structure. A strain-rate-dependent non-linear model updating method is then developed to build an accurate FE model of the structure. Computer simulation and a real frame structure with a highly non-linear thin-walled component are employed to demonstrate the feasibility and effectiveness of the proposed approach.
Calculation Model and Simulation of Warship Damage Probability
Institute of Scientific and Technical Information of China (English)
TENG Zhao-xin; ZHANG Xu; YANG Shi-xing; ZHU Xiao-ping
2008-01-01
The combat efficiency of a mine obstacle is the focus of the present research. Based on the main factors through which a mine obstacle affects the target warship damage probability, such as the features of maneuverable mines, the success rate of mine-laying, the hit probability, mine reliability and actuation probability, a calculation model of the target warship mine-encounter probability is put forward, under the conditions that the route selection of target warships follows a uniform distribution and their course follows a normal distribution. A damage probability model of maneuverable mines against target warships is then established, and simulation showed the model to be highly practical.
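The chain of probabilities listed in the abstract lends itself to a Monte Carlo sketch. The geometry, distributions and probability values below are invented for illustration; only the uniform route selection, the normally distributed course error and the multiplication of laying, reliability and actuation probabilities follow the abstract.

```python
import random

random.seed(1)

def encounter_probability(n_trials=100_000, field_width=1000.0, mine_x=500.0,
                          lethal_radius=60.0, course_sigma=5.0, p_lay=0.9,
                          p_reliable=0.85, p_actuate=0.8):
    # Monte Carlo sketch: the route is chosen uniformly across the channel
    # width and the course is perturbed by a normal error; a kill requires
    # passing within the mine's lethal radius AND the mine being laid,
    # reliable and actuating (independent probabilities, as in the abstract).
    hits = 0
    for _ in range(n_trials):
        route = random.uniform(0.0, field_width)        # uniform route selection
        course_error = random.gauss(0.0, course_sigma)  # normally distributed course
        if abs(route + course_error - mine_x) <= lethal_radius:
            if random.random() < p_lay * p_reliable * p_actuate:
                hits += 1
    return hits / n_trials

p = encounter_probability()
```

With these made-up numbers the geometric encounter chance is about 120/1000, scaled by the product of the three reliability-type probabilities.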
Goorley, J T; Kiger, W S; Zamenhof, R G
2002-02-01
As clinical trials of Neutron Capture Therapy (NCT) are initiated in the U.S. and other countries, new treatment planning codes are being developed to calculate detailed dose distributions in patient-specific models. The thorough evaluation and comparison of treatment planning codes is a critical step toward the eventual standardization of dosimetry, which, in turn, is an essential element for the rational comparison of clinical results from different institutions. In this paper we report development of a reference suite of computational test problems for NCT dosimetry and discuss common issues encountered in these calculations to facilitate quantitative evaluations and comparisons of NCT treatment planning codes. Specifically, detailed depth-kerma rate curves were calculated using the Monte Carlo radiation transport code MCNP4B for four different representations of the modified Snyder head phantom: an analytic, multishell, ellipsoidal model, and voxel representations of this model with cubic voxel sizes of 16, 8, and 4 mm. Monoenergetic and monodirectional beams of 0.0253 eV, 1, 2, 10, 100, and 1000 keV neutrons, and 0.2, 0.5, 1, 2, 5, and 10 MeV photons were individually simulated to calculate kerma rates to low statistical uncertainty. In addition, a neutron beam with a broad neutron spectrum, similar to epithermal beams currently used or proposed for NCT clinical trials, was computed for all models. The thermal neutron, fast neutron, and photon kerma rates calculated with the 4 and 8 mm voxel models were within 2% and 4%, respectively, of those calculated for the analytical model. The 16 mm voxel model produced unacceptably large discrepancies for all dose components. The effects of different kerma data sets and tissue compositions were evaluated. Updating the kerma data from ICRU 46 to ICRU 63 produced less than 2% difference in kerma rate profiles. The depth-dose profile data, Monte Carlo code input, kerma factors, and model construction files are available.
van Wessem, J.M.; Reijmer, C.H.; Lenaerts, J.T.M.; van de Berg, W.J.; van den Broeke, M.R.; van Meijgaard, E.
2014-01-01
In this study, the effects of changes in the physics package of the regional atmospheric climate model RACMO2 on the modelled surface energy balance, near-surface temperature and wind speed of Antarctica are presented. The physics package update primarily consists of improved turbulent and radiative schemes.
Atmospheric neutrino flux calculation using the NRLMSISE00 atmospheric model
Honda, M; Kajita, T; Kasahara, K; Midorikawa, S
2015-01-01
In this paper, we extend the calculation of the atmospheric neutrino flux to sites in polar and tropical regions. In our earliest full 3D calculation, we used DPMJET-III for the hadronic interaction model above 5 GeV, and NUCRIN below 5 GeV. We modified DPMJET-III to better reproduce the experimental muon spectra, mainly using the data observed by the BESS group. In a recent work, we introduced the JAM interaction model for low-energy hadronic interactions; JAM is a nuclear interaction model developed with PHITS (Particle and Heavy-Ion Transport code System). With DPMJET-III above 32 GeV and JAM below that, we could reproduce the observed low-energy muon flux at balloon altitude better than with the combination of DPMJET-III above 5 GeV and NUCRIN below it. Besides the interaction model, we have also improved the calculation scheme...
Model update and variability assessment for automotive crash simulations
Sun, J.; He, J.; Vlahopoulos, N.; Ast, P. van
2007-01-01
In order to develop confidence in numerical models which are used for automotive crash simulations, results are often compared with test data, and in some cases the numerical models are adjusted in order to improve the correlation. Comparisons between the time history of acceleration responses from
Varying facets of a model of competitive learning: the role of updates and memory
Bhat, Ajaz Ahmad
2011-01-01
The effects of memory and different updating paradigms in a game-theoretic model of competitive learning, comprising two distinct agent types, are analysed. For nearly all the updating schemes, the phase diagram of the model consists of a disordered phase separating two ordered phases at coexistence: the critical exponents of these transitions belong to the generalised universality class of the voter model. Also, as appropriate for a model of competing strategies, we examine the situation when the two types have different characteristics, i.e. their parameters are chosen to be away from coexistence. We find linear response behaviour in the expected regimes but, more interestingly, are able to probe the effect of memory. This suggests that even the less successful agent types can win over the more successful ones, provided they have better retentive powers.
An update on land-ice modeling in the CESM
Energy Technology Data Exchange (ETDEWEB)
Lipscomb, William H [Los Alamos National Laboratory
2011-01-18
Mass loss from land ice, including the Greenland and Antarctic ice sheets as well as smaller glaciers and ice caps, is making a large and growing contribution to global sea-level rise. Land ice is only beginning to be incorporated in climate models. The goal of the Land Ice Working Group (LIWG) is to develop improved land-ice models and incorporate them in CESM, in order to provide useful, physically based sea-level predictions. LIWG efforts to date have led to the inclusion of a dynamic ice-sheet model (the Glimmer Community Ice Sheet Model, or Glimmer-CISM) in the Community Earth System Model (CESM), which was released in June 2010. CESM also includes a new surface-mass-balance scheme for ice sheets in the Community Land Model. Initial modeling efforts are focused on the Greenland ice sheet. Preliminary results are promising. In particular, the simulated surface mass balance for Greenland is in good agreement with observations and regional model results. The current model, however, has significant limitations: the land-ice coupling is one-way; we are using a serial version of Glimmer-CISM with the shallow-ice approximation; and there is no ice-ocean coupling. During the next year we plan to implement two-way coupling (including ice-ocean coupling with a dynamic Antarctic ice sheet) with a parallel, higher-order version of Glimmer-CISM. We will also add parameterizations of small glaciers and ice caps. With these model improvements, CESM will be able to simulate all the major contributors to 21st-century global sea-level rise. Results of the first round of simulations should be available in time to be included in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change.
Synthetic vision and emotion calculation in intelligent virtual human modeling.
Zhao, Y; Kang, J; Wright, D K
2007-01-01
The virtual human technique can already provide vivid and believable human behaviour in more and more scenarios. Virtual humans are expected to replace real humans in hazardous situations to undertake tests and feed back valuable information. This paper introduces a virtual human with novel collision-based synthetic vision, a short-term memory model, and the capability to implement emotion calculation and decision making. A virtual character based on this model can 'see' what is in its field of view (FOV) and remember those objects. A group of affective computing equations is then introduced; these equations are implemented in a proposed emotion calculation process to generate emotions for intelligent virtual humans.
Dynamic causal modelling of electrographic seizure activity using Bayesian belief updating.
Cooray, Gerald K; Sengupta, Biswa; Douglas, Pamela K; Friston, Karl
2016-01-15
Seizure activity in EEG recordings can persist for hours with seizure dynamics changing rapidly over time and space. To characterise the spatiotemporal evolution of seizure activity, large data sets often need to be analysed. Dynamic causal modelling (DCM) can be used to estimate the synaptic drivers of cortical dynamics during a seizure; however, the requisite (Bayesian) inversion procedure is computationally expensive. In this note, we describe a straightforward procedure, within the DCM framework, that provides efficient inversion of seizure activity measured with non-invasive and invasive physiological recordings; namely, EEG/ECoG. We describe the theoretical background behind a Bayesian belief updating scheme for DCM. The scheme is tested on simulated and empirical seizure activity (recorded both invasively and non-invasively) and compared with standard Bayesian inversion. We show that the Bayesian belief updating scheme provides similar estimates of time-varying synaptic parameters, compared to standard schemes, indicating no significant qualitative change in accuracy. The difference in variance explained was small (less than 5%). The updating method was substantially more efficient, taking approximately 5-10 minutes compared with approximately 1-2 hours. Moreover, the setup of the model under the updating scheme allows for a clear specification of how neuronal variables fluctuate over separable timescales. This method now allows us to investigate the effect of fast (neuronal) activity on slow fluctuations in (synaptic) parameters, paving a way forward to understand how seizure activity is generated.
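The recursive logic of a belief updating scheme, where the posterior from one window of data becomes the prior for the next, can be illustrated with a conjugate Gaussian update. This is a toy sketch, not the DCM inversion itself; the observation values and variances are arbitrary.

```python
def belief_update(prior_mean, prior_var, obs, obs_var):
    # Conjugate Gaussian update: precisions add, and the posterior mean is the
    # precision-weighted average of prior mean and observation.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Track a slowly drifting "synaptic parameter" across successive data windows:
# each window's posterior becomes the prior for the next window.
mean, var = 0.0, 10.0
for window_obs in [0.9, 1.1, 1.0, 1.2]:
    mean, var = belief_update(mean, var, window_obs, obs_var=0.5)
```

After four windows the belief has concentrated near the observed values, with the variance shrinking at every step.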
Interval model updating using perturbation method and Radial Basis Function neural networks
Deng, Zhongmin; Guo, Zhaopu; Zhang, Xinjie
2017-02-01
In recent years, stochastic model updating techniques have been applied to the quantification of uncertainties inherently existing in real-world engineering structures. In engineering practice, however, probability density functions of structural parameters are often unavailable due to insufficient information about the structural system. In this circumstance, interval analysis shows a significant advantage for handling uncertain problems, since only the upper and lower bounds of inputs and outputs need be defined. To this end, a new method for interval identification of structural parameters is proposed using the first-order perturbation method and Radial Basis Function (RBF) neural networks. With the perturbation method, each random variable is expressed as a perturbation around the mean value of its parameter interval, and these terms can then be used in a two-step deterministic updating procedure. Interval model updating equations are then developed on the basis of the perturbation technique. The two-step method is used for updating the mean values of the structural parameters and subsequently estimating the interval radii. Experimental and numerical case studies are given to illustrate and verify the proposed method for the interval identification of structural parameters.
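The two-step idea, update the mean values first and then estimate the interval radii, can be sketched as a first-order propagation of an interval through a response function. The frequency-versus-stiffness response below is hypothetical, and a finite-difference derivative stands in for the RBF surrogate used in the paper.

```python
def interval_update(f, x_mean, x_radius, h=1e-6):
    # Step 1: evaluate the response at the interval mean.
    # Step 2: propagate the interval radius with a first-order perturbation,
    #         |dy| ≈ |df/dx| * |dx|, using a central finite difference.
    y_mean = f(x_mean)
    dfdx = (f(x_mean + h) - f(x_mean - h)) / (2 * h)
    y_radius = abs(dfdx) * x_radius
    return y_mean - y_radius, y_mean + y_radius

# Hypothetical response: natural frequency (Hz) of a 1-DOF oscillator with
# unit mass, as a function of an uncertain stiffness parameter k (N/m).
freq = lambda k: (k ** 0.5) / (2 * 3.141592653589793)
lo, hi = interval_update(freq, x_mean=400.0, x_radius=40.0)
```

The output interval [lo, hi] brackets the frequency interval implied by the stiffness interval [360, 440], to first order.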
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweeps induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from the measurement data. An initial FEM was developed and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and of the mass density and Young's modulus of materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of squared values of the relative errors of natural frequencies between the FEM and the experiment. All the identified modes were used as target responses, with the purpose of imposing more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies for most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
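The updating objective described in this abstract, a summation of squared relative errors of natural frequencies between model and experiment, is straightforward to write down. The frequency values below are hypothetical.

```python
def frequency_objective(f_model, f_test):
    # Objective from the abstract: sum of squared relative errors of
    # natural frequencies between the FE model and the measurements.
    return sum(((fm - ft) / ft) ** 2 for fm, ft in zip(f_model, f_test))

measured = [2.1, 5.6, 9.8]     # hypothetical identified frequencies (Hz)
candidate = [2.0, 5.8, 10.1]   # hypothetical FE-model frequencies (Hz)
J = frequency_objective(candidate, measured)
```

An optimizer (the subproblem approximation method in the paper) would adjust the uncertain parameters to drive J toward zero.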
Black Hole Entropy Calculation in a Modified Thin Film Model
Indian Academy of Sciences (India)
Jingyi Zhang
2011-03-01
The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the area, then the emission energy of the particles will satisfy = /360.
Study on calculation model of road lighting visibility
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
Visibility is an evaluation index for road lighting, which comprehensively influences the visual reliability of drivers and is a key factor for road lighting safety and energy saving. This paper introduces the concept of road lighting visibility and its influencing factors. It also explains the small target visibility calculation model for road lighting design, and describes the significance of establishing urban road lighting visibility standards from the point of view of the visual function and visual comfort of drivers.
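The core of a small-target visibility calculation is the visibility level: the actual luminance difference between target and background divided by the threshold difference an observer can just perceive. The luminance and threshold values below are invented for illustration, and the photometric corrections of the full model (age, adaptation, glare) are omitted.

```python
def visibility_level(target_luminance, background_luminance, threshold_delta):
    # Visibility level (VL) sketch: actual luminance difference of the small
    # target over its background, divided by the just-perceptible threshold
    # difference. Larger VL means the target is easier to see.
    delta_L = abs(target_luminance - background_luminance)
    return delta_L / threshold_delta

# Hypothetical values: target at 2.4 cd/m^2 against a 1.2 cd/m^2 road surface,
# with an assumed threshold difference of 0.15 cd/m^2.
vl = visibility_level(2.4, 1.2, 0.15)
```
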
IBAR: Interacting boson model calculations for large system sizes
Casperson, R. J.
2012-04-01
Scaling the system size of the interacting boson model-1 (IBM-1) into the realm of hundreds of bosons has many interesting applications in the field of nuclear structure, most notably quantum phase transitions in nuclei. We introduce IBAR, a new software package for calculating the eigenvalues and eigenvectors of the IBM-1 Hamiltonian, for large numbers of bosons. Energies and wavefunctions of the nuclear states, as well as transition strengths between them, are calculated using these values. Numerical errors in the recursive calculation of reduced matrix elements of the d-boson creation operator are reduced by using an arbitrary precision mathematical library. This software has been tested for up to 1000 bosons using comparisons to analytic expressions. Comparisons have also been made to the code PHINT for smaller system sizes. Catalogue identifier: AELI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 28 734 No. of bytes in distributed program, including test data, etc.: 4 104 467 Distribution format: tar.gz Programming language: C++ Computer: Any computer system with a C++ compiler Operating system: Tested under Linux RAM: 150 MB for 1000 boson calculations with angular momenta of up to L=4 Classification: 17.18, 17.20 External routines: ARPACK (http://www.caam.rice.edu/software/ARPACK/) Nature of problem: Construction and diagonalization of large Hamiltonian matrices, using reduced matrix elements of the d-boson creation operator. Solution method: Reduced matrix elements of the d-boson creation operator have been stored in data files at machine precision, after being recursively calculated with higher than machine precision. The Hamiltonian matrix is calculated and diagonalized, and the requested transition strengths are calculated
iTree-Hydro: Snow hydrology update for the urban forest hydrology model
Yang Yang; Theodore A. Endreny; David J. Nowak
2011-01-01
This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects-Hydrology model. iTree-Hydro Version 1 was a warm-climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...
Dental caries: an updated medical model of risk assessment.
Kutsch, V Kim
2014-04-01
Dental caries is a transmissible, complex biofilm disease that creates prolonged periods of low pH in the mouth, resulting in a net mineral loss from the teeth. Historically, the disease model for dental caries consisted of mutans streptococci and Lactobacillus species, and the dental profession focused on restoring the lesions/damage from the disease by using a surgical model. The current recommendation is to implement a risk-assessment-based medical model called CAMBRA (caries management by risk assessment) to diagnose and treat dental caries. Unfortunately, many of the suggestions of CAMBRA have been overly complicated and confusing for clinicians. The risk of caries, however, is usually related to just a few common factors, and these factors result in common patterns of disease. This article examines the biofilm model of dental caries, identifies the common disease patterns, and discusses their targeted therapeutic strategies to make CAMBRA more easily adaptable for the privately practicing professional.
Model Updating and Uncertainty Management for Aircraft Prognostic Systems Project
National Aeronautics and Space Administration — This proposal addresses the integration of physics-based damage propagation models with diagnostic measures of current state of health in a mathematically rigorous...
Updated measurements from CREAM & CREDO & implications for environment & shielding models.
Dyer, C S; Truscott, P R; Peerless, C L; Watson, C J; Evans, H E; Knight, P; Cosby, M; Underwood, C; Cousins, T; Noulty, R
1998-06-01
Flight data obtained between 1995 and 1997 from the Cosmic Radiation Environment Monitors CREAM & CREDO carried on UoSat-3, Space Shuttle, STRV-1a (Space Technology Research Vehicle) and APEX (Advanced Photovoltaic and Electronics Experiment Spacecraft) have been added to the dataset affording coverage since 1990. The modulation of cosmic rays and evolution of the South Atlantic Anomaly are observed, the former comprising a factor three increase at high latitudes and the latter a general increase accompanied by a westward drift. Comparison of particle fluxes and linear energy transfer spectra is made with improved environment & radiation transport calculations which account for shield distributions and secondary particles. While there is an encouraging convergence between predictions and observations, significant improvements are still required, particularly in the treatment of locally produced secondary particles.
U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...
Robinson, Orin J.; McGowan, Conor; Devers, Patrick K.
2017-01-01
Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate the effects of density on seasonal survival and recruitment of a migratory species, we used an existing full-annual-cycle model framework for American black ducks Anas rubripes, and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model-weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. The analysis also suggests that regulation of survival and recruitment by density varied over time: different characterizations of density regulation changed every 8-12 years (three times in the 25-year period) for our population. Synthesis and applications: using a full-annual-cycle modelling framework and model-weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population, which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to home in on the proper form and/or strength of
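The model-weight updating routine mentioned above can be sketched as repeated multiply-and-renormalise steps over candidate models. The three candidate models and their per-year likelihoods below are fabricated for illustration.

```python
def update_weights(weights, likelihoods):
    # Bayesian model-weight update: multiply each model's prior weight by its
    # likelihood for the new survey datum, then renormalise so weights sum to 1.
    raw = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical density-dependence models starting with equal weights;
# each tuple holds the models' likelihoods for one year of survey data.
w = [1 / 3, 1 / 3, 1 / 3]
for yearly_likelihoods in [(0.2, 0.5, 0.3), (0.1, 0.6, 0.3), (0.2, 0.7, 0.1)]:
    w = update_weights(w, yearly_likelihoods)
```

After three years of data, the weight mass concentrates on the model that consistently explains the surveys best (the second one here).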
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements on the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of a particular rain cell by the time the rain data is forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that, for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60% and 100%, respectively, independent of the catchment's time of concentration.
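The experimental setup, a simple time-area model driven by displaced versus biased rain, can be reproduced in miniature. The series length, the number of sub-areas and the 60% bias value below are illustrative only; they are not the paper's actual configuration.

```python
def time_area_runoff(rain, n_areas=3):
    # Minimal time-area model: runoff at step t is the sum of rain over the
    # last n_areas steps (each sub-area contributes with one step more delay).
    return [sum(rain[max(0, t - n_areas + 1):t + 1]) for t in range(len(rain))]

rain = [0, 2, 5, 3, 0, 0, 0]              # a short synthetic rain event
shifted = [0] + rain[:-1]                 # the same rain displaced one time step
biased = [r * 1.6 for r in rain]          # the same rain with a +60% bias

q_true = time_area_runoff(rain)
q_shift = time_area_runoff(shifted)
q_bias = time_area_runoff(biased)

# Squared-error cost of each corrupted input relative to the true runoff.
err_shift = sum((a - b) ** 2 for a, b in zip(q_true, q_shift))
err_bias = sum((a - b) ** 2 for a, b in zip(q_true, q_bias))
```

Even for this tiny example, a one-step displacement and a 60% bias produce forecast errors of similar magnitude, which is the kind of equivalence the paper quantifies.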
Clustering of Parameter Sensitivities: Examples from a Helicopter Airframe Model Updating Exercise
Directory of Open Access Journals (Sweden)
H. Shahverdi
2009-01-01
The need for high-fidelity models in the aerospace industry has become ever more important as increasingly stringent requirements on noise and vibration levels, reliability, maintenance costs, etc. come into effect. In this paper, the results of a finite element model updating exercise on a Westland Lynx XZ649 helicopter are presented. For large and complex structures, such as a helicopter airframe, the finite element model represents the main tool for obtaining accurate models which can predict the sensitivities of responses to structural changes and support optimisation of vibration levels. In this study, the eigenvalue sensitivities with respect to Young's modulus and mass density are used in a detailed parameterisation of the structure. A new methodology is developed using an unsupervised learning technique based on similarity clustering of the columns of the sensitivity matrix. An assessment of model updating strategies is given and comparative results for the correction of vibration modes are discussed in detail. The role of the clustering technique in updating large-scale models is emphasised.
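Similarity clustering of the columns of a sensitivity matrix can be sketched with a greedy cosine-similarity grouping. The abstract does not specify the clustering algorithm, so this particular scheme and the example columns are assumptions; the idea is only that parameters whose eigenvalue-sensitivity columns point in nearly the same direction are grouped together.

```python
def cosine(u, v):
    # Cosine similarity between two sensitivity columns.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def cluster_columns(sens_cols, threshold=0.9):
    # Greedy clustering: a column joins the first cluster whose representative
    # it resembles closely (|cos| >= threshold); otherwise it founds a new one.
    clusters = []
    for idx, col in enumerate(sens_cols):
        for rep, members in clusters:
            if abs(cosine(rep, col)) >= threshold:
                members.append(idx)
                break
        else:
            clusters.append((col, [idx]))
    return [members for _, members in clusters]

# Each tuple is a hypothetical column of eigenvalue sensitivities for one
# updating parameter; columns 0/1 and 2/3 are nearly parallel by construction.
cols = [(1.0, 0.1, 0.0), (0.9, 0.12, 0.0), (0.0, 1.0, 1.0), (0.0, 0.9, 1.1)]
groups = cluster_columns(cols)
```

Parameters within one group affect the eigenvalues in nearly the same way, so a single representative per group can be updated, shrinking the parameter space.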
A review of Higgs mass calculations in supersymmetric models
DEFF Research Database (Denmark)
Draper, P.; Rzehak, H.
2016-01-01
…related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass in the Minimal Supersymmetric Standard Model, in particular the large radiative corrections required to lift mh to 125 GeV and their calculation via Feynman-diagrammatic and effective field theory techniques. This review is intended as an entry point for readers new to the field, and as a summary of the current…
Calculation of precise firing statistics in a neural network model
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Fundamentally, the prediction of firing statistics is a delicate many-body problem, because the firing probability of a neuron at a given time is determined by a summation over all effects from past firing states. A neural network model with a Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Calculation of statistical entropic measures in a model of solids
Sanudo, Jaime
2012-01-01
In this work, a one-dimensional model of crystalline solids based on the Dirac comb limit of the Kronig-Penney model is considered. From the wave functions of the valence electrons, we calculate a statistical measure of complexity and the Fisher-Shannon information for the lower-energy electronic bands appearing in the system. All these magnitudes present an extremal value for solids having half-filled bands, a configuration where in general high conductivity is attained in real solids, as happens with the monovalent metals.
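A statistical measure of complexity of the LMC type is the product of the Shannon entropy and the disequilibrium (the squared distance of a distribution from the uniform one). This sketch assumes that definition; the paper's band-resolved calculation over valence-electron wave functions is not reproduced here, and the example distributions are arbitrary.

```python
import math

def shannon_entropy(p):
    # Shannon entropy in nats; zero-probability entries contribute nothing.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def disequilibrium(p):
    # Squared Euclidean distance from the uniform distribution on len(p) bins.
    n = len(p)
    return sum((pi - 1 / n) ** 2 for pi in p)

def lmc_complexity(p):
    # LMC-style statistical measure of complexity: entropy x disequilibrium.
    # It vanishes both for the uniform distribution (zero disequilibrium)
    # and for a delta distribution (zero entropy).
    return shannon_entropy(p) * disequilibrium(p)

c_uniform = lmc_complexity([1 / 3, 1 / 3, 1 / 3])
c_mixed = lmc_complexity([0.5, 0.25, 0.25])
```
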
Bryans, P; Savin, D W
2008-01-01
We have reanalyzed SUMER observations of a parcel of coronal gas using new collisional ionization equilibrium (CIE) calculations. These improved CIE fractional abundances were calculated using state-of-the-art electron-ion recombination data for K-shell, L-shell, Na-like, and Mg-like ions of all elements from H through Zn and, additionally, Al- through Ar-like ions of Fe. Improved CIE calculations based on these data are presented here. We have also developed a new systematic method for determining the average emission measure (EM) and electron temperature (T_e) of an emitting plasma. With our new CIE data and our new approach for determining the average EM and T_e, we have reanalyzed SUMER observations of the solar corona. We have compared our results with those of previous studies and found some significant differences in the derived EM and T_e. We have also calculated the enhancement of coronal elemental abundances compared to their photospheric abundances, using the SUMER observations themselves to determine…
Modeling Considerations for Ingestion Pathway Dose Calculations Using CAP88.
Stuenkel, David
2017-04-01
The CAP88-PC computer model was developed by the U.S. Environmental Protection Agency to demonstrate compliance under the National Emission Standards for Hazardous Air Pollutants (NESHAPS). The program combines atmospheric transport models with the terrestrial food chain models in the U.S. Nuclear Regulatory Commission Regulatory Guide 1.109 to compute the radionuclide concentrations in the air, on ground surfaces and plants, and the concentrations in food to estimate the dose to individuals living in the area around a facility emitting radionuclides into the atmosphere. CAP88 allows the user to select the size of the assessment area and the receptor locations used to calculate the radionuclide concentrations in non-leafy vegetables, leafy vegetables, milk, and meat consumed by the receptors. Depending on the food scenario selected and the type of calculation ("Population" or "Individual") chosen, the annual effective dose from ingestion can depend on both the size of the assessment area and the location of the receptors. Illustrative examples demonstrate the effect of the choice of these input parameters on the annual effective dose from ingestion. An understanding of the model used in CAP88 and the differences between "Population" and "Individual" run types will enable the CAP88 user to better model the ingestion dose.
Status Update: Modeling Energy Balance in NIF Hohlraums
Energy Technology Data Exchange (ETDEWEB)
Jones, O. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-07-22
We have developed a standardized methodology to model hohlraum drive in NIF experiments. We compare simulation results to experiments by 1) comparing hohlraum x-ray fluxes and 2) comparing capsule metrics, such as bang times. Long-pulse, high gas-fill hohlraums require a 20-28% reduction in simulated drive and inclusion of ~15% backscatter to match experiment through (1) and (2). Short-pulse, low-fill or near-vacuum hohlraums require a 10% reduction in simulated drive to match experiment through (2), and no reduction through (1). Ongoing work focuses on physical model modifications to improve these matches.
Energy Technology Data Exchange (ETDEWEB)
McCright, R D
1998-06-30
This Engineered Materials Characterization Report (EMCR), Volume 3, discusses in considerable detail the work of the past 18 months on testing the candidate materials proposed for the waste-package (WP) container and on modeling the performance of those materials in the Yucca Mountain (YM) repository setting. This report was prepared as an update of information and serves as one of the supporting documents to the Viability Assessment (VA) of the Yucca Mountain Project. Previous versions of the EMCR have provided a history and background of container-materials selection and evaluation (Volume 1), a compilation of physical and mechanical properties for the WP design effort (Volume 2), and corrosion-test data and performance-modeling activities (Volume 3). Because the information in Volumes 1 and 2 is still largely current, those volumes are not being revised. As new information becomes available in the testing and modeling efforts, Volume 3 is periodically updated to include that information.
Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin
DEFF Research Database (Denmark)
Finsen, F.; Milzow, Christian; Smith, R.
2014-01-01
Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements are converted to discharge using rating curves of simulated discharge versus observed altimetry. This approach makes it possible to use altimetry data from river cross sections where both in-situ rating curves and accurate river cross section geometry are not available. Model updating based on radar altimetry...
An updated natural history model of cervical cancer: derivation of model parameters.
Campos, Nicole G; Burger, Emily A; Sy, Stephen; Sharma, Monisha; Schiffman, Mark; Rodriguez, Ana Cecilia; Hildesheim, Allan; Herrero, Rolando; Kim, Jane J
2014-09-01
Mathematical models of cervical cancer have been widely used to evaluate the comparative effectiveness and cost-effectiveness of preventive strategies. Major advances in the understanding of cervical carcinogenesis motivate the creation of a new disease paradigm in such models. To keep pace with the most recent evidence, we updated a previously developed microsimulation model of human papillomavirus (HPV) infection and cervical cancer to reflect 1) a shift towards health states based on HPV rather than poorly reproducible histological diagnoses and 2) HPV clearance and progression to precancer as a function of infection duration and genotype, as derived from the control arm of the Costa Rica Vaccine Trial (2004-2010). The model was calibrated leveraging empirical data from the New Mexico Surveillance, Epidemiology, and End Results Registry (1980-1999) and a state-of-the-art cervical cancer screening registry in New Mexico (2007-2009). The calibrated model had good correspondence with data on genotype- and age-specific HPV prevalence, genotype frequency in precancer and cancer, and age-specific cancer incidence. We present this model in response to a call for new natural history models of cervical cancer intended for decision analysis and economic evaluation at a time when global cervical cancer prevention policy continues to evolve and evidence of the long-term health effects of cervical interventions remains critical. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
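The duration-dependent natural history structure described above can be illustrated with a deliberately toy microsimulation in which the annual clearance probability falls, and the progression probability rises, with infection duration. All rates, the cohort size, and the horizon below are placeholders, not the calibrated trial-derived values:

```python
import numpy as np

def simulate_hpv_cohort(n, years, clear_rate, prog_rate, seed=0):
    """Toy duration-dependent microsimulation (placeholder rates only).

    States: 0 = infected, 1 = cleared, 2 = progressed to precancer.
    The annual clearance probability falls with infection duration while
    the progression probability rises with it, echoing the duration-
    dependent structure described in the abstract (genotype is ignored).
    """
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    duration = np.zeros(n)
    for _ in range(years):
        infected = state == 0
        duration[infected] += 1
        p_clear = clear_rate / duration[infected]   # falls with duration
        p_prog = prog_rate * duration[infected]     # rises with duration
        u = rng.random(infected.sum())
        state[infected] = np.where(u < p_clear, 1,
                                   np.where(u < p_clear + p_prog, 2, 0))
    return np.bincount(state, minlength=3) / n
```

With these illustrative rates, most infections clear early and only a small fraction progresses, which is the qualitative pattern such models are calibrated to reproduce.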
An updated summary of MATHEW/ADPIC model evaluation studies
Energy Technology Data Exchange (ETDEWEB)
Foster, K.T.; Dickerson, M.H.
1990-05-01
This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released as both surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.
General equilibrium basic needs policy model, (updating part).
Kouwenaar A
1985-01-01
ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.
Institute of Scientific and Technical Information of China (English)
Ping Wang; Chaohe Yang; Xuemin Tian; Dexian Huang
2014-01-01
The performance of data-driven models relies heavily on the amount and quality of training samples, so it may deteriorate significantly in regions where samples are scarce. The objective of this paper is to develop an on-line SVR model updating strategy that tracks changes in the process characteristics efficiently with affordable computational burden. This is achieved by adding a new sample that violates the Karush-Kuhn-Tucker conditions of the existing SVR model and by deleting the old sample that has the maximum distance to the newly added sample in feature space. The benefits offered by such an updating strategy are exploited to develop an adaptive model-based control scheme, where model updating and the control task are performed alternately. The effectiveness of the adaptive controller is demonstrated by a simulation study on a continuous stirred tank reactor. The results reveal that the adaptive MPC scheme outperforms its non-adaptive counterpart for large-magnitude set-point changes and variations in process parameters.
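The add/delete rule described above can be sketched compactly. In the snippet below, an epsilon-tube violation serves as a practical proxy for a KKT-condition violation, and a dependency-free RBF kernel ridge fit stands in for the SVR retraining step (the paper's method retrains a true SVR); for an RBF kernel, the feature-space distance reduces to d^2 = 2 - 2k(x, z):

```python
import numpy as np

def rbf(A, B, gamma):
    # pairwise RBF kernel matrix between row-vector sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelModel:
    """Sliding-window kernel model with the add/delete rule described above.

    An epsilon-tube violation is the proxy for a KKT-condition violation,
    and RBF kernel ridge regression stands in for the SVR refit (sketch).
    """
    def __init__(self, gamma=1.0, lam=1e-3, eps=0.1):
        self.gamma, self.lam, self.eps = gamma, lam, eps

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        K = rbf(self.X, self.X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(K)), self.y)
        return self

    def predict(self, X):
        return rbf(np.asarray(X, dtype=float), self.X, self.gamma) @ self.alpha

    def update(self, x_new, y_new):
        x_new = np.asarray(x_new, dtype=float).reshape(1, -1)
        if abs(self.predict(x_new)[0] - y_new) <= self.eps:
            return False  # inside the tube: no retraining needed
        # RBF feature-space distance: d^2 = k(x,x) + k(z,z) - 2k(x,z) = 2 - 2k
        k = rbf(self.X, x_new, self.gamma).ravel()
        drop = int(np.argmax(2.0 - 2.0 * k))
        self.X = np.vstack([np.delete(self.X, drop, axis=0), x_new])
        self.y = np.append(np.delete(self.y, drop), y_new)
        self.fit(self.X, self.y)
        return True
```

The training set size stays constant across updates, which is what bounds the computational burden of each refit.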
Employing incomplete complex modes for model updating and damage detection of damped structures
Institute of Scientific and Technical Information of China (English)
LI HuaJun; LIU FuShun; HU Sau-Lon James
2008-01-01
In the study of finite element model updating or damage detection, most papers are devoted to undamped systems. Thus, their objective has been exclusively restricted to the correction of the mass and stiffness matrices. In contrast, this paper performs model updating and damage detection for damped structures. A theoretical contribution of this paper is to extend the cross-model cross-mode (CMCM) method to simultaneously update the mass, damping and stiffness matrices of a finite element model when only a few spatially incomplete, complex-valued modes are available. Numerical studies are conducted for a 30-DOF (degree-of-freedom) cantilever beam with multiple damaged elements, with the measured modes synthesized from finite element models. The numerical results reveal that applying the CMCM method, together with an iterative Guyan reduction scheme, can yield good damage detection in general. When the measured modes utilized in the CMCM method are corrupted with irregular errors, assessing damage at the location that possesses larger modal strain energy is less sensitive to the corrupted modes.
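Guyan reduction, used above to handle spatially incomplete modes, condenses the slave (unmeasured) degrees of freedom out of the system matrices via the static relation x_s = -Kss^{-1} Ksm x_m. A minimal numpy sketch of one condensation step (the paper applies this iteratively; function and variable names are illustrative):

```python
import numpy as np

def guyan_reduce(K, M, masters):
    """Static (Guyan) condensation of stiffness and mass onto master DOFs.

    Returns the reduced matrices and the transformation T, ordered with the
    master DOFs first; 'masters' would be the measured DOFs in practice.
    """
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    order = list(masters) + slaves
    Kp = K[np.ix_(order, order)]
    Mp = M[np.ix_(order, order)]
    nm = len(masters)
    Kss = Kp[nm:, nm:]
    Ksm = Kp[nm:, :nm]
    # slave DOFs follow the masters statically: x_s = -Kss^{-1} Ksm x_m
    T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])
    return T.T @ Kp @ T, T.T @ Mp @ T, T
```

As a check, for a three-spring chain with unit stiffnesses fixed at one end, condensing onto the free tip DOF recovers the exact series stiffness 1/3.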
Revisiting the Carrington Event: Updated modeling of atmospheric effects
Thomas, Brian C; Snyder, Brock R
2011-01-01
The terrestrial effects of major solar events such as the Carrington white-light flare and subsequent geomagnetic storm of August-September 1859 are of considerable interest, especially in light of recent predictions that such extreme events will be more likely over the coming decades. Here we present results of modeling the atmospheric effects, especially production of odd nitrogen compounds and subsequent depletion of ozone, by solar protons associated with the Carrington event. This study combines approaches from two previous studies of the atmospheric effect of this event. We investigate changes in NOy compounds as well as depletion of O3 using a two-dimensional atmospheric chemistry and dynamics model. Atmospheric ionization is computed using a range-energy relation with four different proxy proton spectra associated with more recent well-known solar proton events. We find that changes in atmospheric constituents are in reasonable agreement with previous studies, but effects of the four proxy spectra use...
Energy Technology Data Exchange (ETDEWEB)
Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL
2009-12-01
This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public-sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that
Nuclear model calculations on cyclotron production of {sup 51}Cr
Energy Technology Data Exchange (ETDEWEB)
Kakavand, Tayeb [Imam Khomeini International Univ., Qazvin (Iran, Islamic Republic of). Dept. of Physics; Aboudzadeh, Mohammadreza [Nuclear Science and Technology Research Institute/AEOI, Karaj (Iran, Islamic Republic of). Agricultural, Medical and Industrial Research School; Farahani, Zahra; Eslami, Mohammad [Zanjan Univ. (Iran, Islamic Republic of). Dept. of Physics
2015-12-15
{sup 51}Cr (T{sub 1/2} = 27.7 d), which decays via electron capture (100 %) with 320 keV gamma emission (9.8 %), is a radionuclide that still finds wide application in biological studies. In this work, the ALICE/ASH and TALYS nuclear model codes, along with some adjustments, are used to calculate the excitation functions for proton-, deuteron-, α-particle- and neutron-induced reactions on various targets leading to the production of the {sup 51}Cr radioisotope. The production yields of {sup 51}Cr from the various reactions are determined using the excitation function calculations and stopping power data. The results are compared with corresponding experimental data and discussed from the point of view of feasibility.
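Determining a production yield from an excitation function and stopping-power data follows the standard thick-target integral n = N ∫ σ(E)/S(E) dE over the slowing-down interval. A sketch with hypothetical σ(E) and S(E) callables (the toy functions and all numbers are illustrative shapes, not data for {sup 51}Cr):

```python
import numpy as np

def thick_target_yield(E_in, E_out, sigma, stopping_power, n_atoms, npts=2000):
    """Atoms produced per incident particle: n = N * integral sigma(E)/S(E) dE.

    sigma(E) in cm^2, S(E) = -dE/dx in MeV/cm, N in target atoms per cm^3;
    the integral runs over the slowing-down interval [E_out, E_in] in MeV.
    """
    E = np.linspace(E_out, E_in, npts)
    f = sigma(E) / stopping_power(E)
    return n_atoms * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))  # trapezoid

# hypothetical smooth excitation function and stopping power, for shape only
sigma_toy = lambda E: 8e-25 * np.exp(-((E - 10.0) / 3.0) ** 2)   # cm^2
S_toy = lambda E: 120.0 / np.sqrt(E)                              # MeV/cm
y_toy = thick_target_yield(15.0, 5.0, sigma_toy, S_toy, 7e22)
```

In practice σ(E) would come from the ALICE/ASH or TALYS calculation and S(E) from tabulated stopping-power data for the chosen target.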
A simplified analytical random walk model for proton dose calculation
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
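The distance-squared law mentioned above says that an angular kick received at depth z' contributes to the lateral variance at the final depth z with weight (z - z')^2. A small Monte Carlo check of this relation, under simplifying assumptions (constant scattering power, purely small-angle kicks; not the paper's full algorithm):

```python
import numpy as np

def lateral_variance_mc(T, depth, nsteps, nprot, seed=0):
    """Monte Carlo check of the distance-squared law for lateral spread.

    A small angular kick of variance T*dz taken at depth z' displaces the
    track at the final depth by theta*(depth - z'), so the lateral variance
    is the (depth - z')^2-weighted sum of the angular scattering power T.
    """
    rng = np.random.default_rng(seed)
    dz = depth / nsteps
    zc = (np.arange(nsteps) + 0.5) * dz          # kick locations
    kicks = rng.normal(0.0, np.sqrt(T * dz), size=(nprot, nsteps))
    x_final = kicks @ (depth - zc)               # small-angle displacement
    theory = T * dz * np.sum((depth - zc) ** 2)  # distance-squared law
    return x_final.var(), theory
```

With a few tens of thousands of tracks the simulated variance agrees with the distance-squared prediction to within the Monte Carlo noise.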
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
… is forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60% and 100%, respectively, independent of the catchment's time of concentration.
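A time-area model of the kind used here treats runoff as rainfall convolved with a uniform unit hydrograph spanning the time of concentration, which makes it easy to set a time-displaced input against a biased one. A minimal sketch (the rain series and all parameter values are synthetic, chosen for illustration only):

```python
import numpy as np

def time_area_runoff(rain, toc_steps):
    # linear time-area model: rain convolved with a uniform unit
    # hydrograph spanning the time of concentration (toc_steps bins)
    uh = np.full(toc_steps, 1.0 / toc_steps)
    return np.convolve(rain, uh)[: len(rain)]

# synthetic 20-step rain block (values are illustrative)
rain = np.zeros(60)
rain[10:30] = 2.0
base = time_area_runoff(rain, 6)
shifted = time_area_runoff(np.concatenate([[0.0], rain[:-1]]), 6)  # displaced
biased = time_area_runoff(1.2 * rain, 6)                           # 20% bias
```

Comparing `shifted` and `biased` against `base` at a fixed forecast horizon reproduces the kind of displacement-versus-bias trade-off the paper quantifies.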
Directory of Open Access Journals (Sweden)
Lei Qin
2014-05-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up to date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
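A region covariance descriptor is simply the covariance matrix of per-pixel feature vectors over the tracked region. The sketch below uses one common fixed feature pool (coordinates, intensity, absolute first derivatives); the adaptive selection of a compact feature subset, which is the paper's first contribution, is not reproduced here:

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor over a grayscale patch.

    Per-pixel features are a common fixed pool: x, y, intensity, and the
    absolute first derivatives; the result is a 5x5 covariance matrix that
    is compact and invariant to the region's size.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1)
    return np.cov(F.reshape(-1, 5), rowvar=False)
```

The resulting matrix is symmetric positive semi-definite, so descriptor comparison is usually done with a manifold metric rather than a Euclidean one.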
UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA
Energy Technology Data Exchange (ETDEWEB)
Davis, S.C.
2000-11-16
The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
2012-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core
EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION
Directory of Open Access Journals (Sweden)
André Carlos Silva
2012-12-01
Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of one cylindrical and one conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for the hydrocyclone corrected cut size is that proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper shows a modification of the Plitt model constant, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from a phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
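Plitt's corrected cut size model is commonly quoted in the multiplicative power-law form below; the leading constant F1 is the calibration factor of the kind the paper re-estimates by regression. Exponents and units follow the textbook statement of the model and should be verified against Plitt's original paper before any engineering use:

```python
import numpy as np

def plitt_d50c(Dc, Di, Do, Du, h, Q, phi, rho_s, rho_l, F1=1.0):
    """Plitt corrected cut size d50c in micrometres (textbook form, to verify).

    Dc, Di, Do, Du (cyclone, inlet, vortex finder and apex diameters) and
    the free vortex height h are in cm; Q is the feed flow rate in L/min;
    phi is the volumetric solids fraction of the feed in %; densities are
    in g/cm^3. F1 is the empirical calibration constant.
    """
    num = F1 * 50.5 * Dc**0.46 * Di**0.6 * Do**1.21 * np.exp(0.063 * phi)
    den = Du**0.71 * h**0.38 * Q**0.45 * np.sqrt(rho_s - rho_l)
    return num / den
```

The form makes the qualitative behavior easy to check: a larger apex diameter Du or flow rate Q lowers d50c (a finer cut), while a denser feed raises it through the exponential solids term.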
The Chandra X-Ray Observatory Radiation Environmental Model Update
Blackwell, William C.; Minow, Joseph I.; ODell, Stephen L.; Cameron, Robert A.; Virani, Shanil N.
2003-01-01
CRMFLX (Chandra Radiation Model of ion FLUX) is a radiation environment risk mitigation tool for use as a decision aid in planning the operation times for Chandra's Advanced CCD Imaging Spectrometer (ACIS) detector. The accurate prediction of the proton flux environment with energies of 100 - 200 keV is needed in order to protect the ACIS detector against proton degradation. Unfortunately, protons of this energy are abundant in the region of space where Chandra must operate. In addition, on-board particle detectors do not measure proton flux levels of the required energy range. CRMFLX is an engineering environment model developed to predict the proton flux in the solar wind, magnetosheath, and magnetosphere phenomenological regions of geospace. This paper describes the upgrades to the ion flux databases for the magnetosphere, magnetosheath, and solar wind regions. These data files were created by using Geotail and Polar spacecraft flux measurements only when the Advanced Composition Explorer (ACE) spacecraft's 0.14 MeV particle flux was below a threshold value. This new database allows for CRMFLX output to be correlated with both the geomagnetic activity level, as represented by the Kp index, as well as with solar proton events. Also, reported in this paper are results of analysis leading to a change in Chandra operations that successfully mitigates the false trigger rate for autonomous radiation events caused by relativistic electron flux contamination of proton channels.
Updates on measurements and modeling techniques for expendable countermeasures
Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.
2016-10-01
The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.
Energy Technology Data Exchange (ETDEWEB)
Te Buck, S.; Van Keulen, B.; Bosselaar, L.; Gerlagh, T.; Skelton, T.
2010-07-15
This is the fifth, updated edition of the Dutch Renewable Energy Monitoring Protocol. The protocol, compiled on behalf of the Ministry of Economic Affairs, can be considered a policy document that provides a uniform calculation method for determining the amount of energy produced in the Netherlands in a renewable manner. Because all governments and organisations use the calculation methods described in this protocol, it is possible to monitor developments in this field well and consistently. The introduction of this protocol outlines its history and describes its set-up, validity and relationship with other similar documents and agreements. The Dutch Renewable Energy Monitoring Protocol is compiled by NL Agency, and all relevant parties were given the chance to provide input, which has been incorporated as far as possible. Statistics Netherlands (CBS) uses this protocol to calculate the amount of renewable energy produced in the Netherlands. These data are then used by the Ministry of Economic Affairs to gauge the realisation of policy objectives. In June 2009 the European Directive for energy from renewable sources was published, with renewable energy targets for the Netherlands. This directive uses a different calculation method, the gross energy end-use method, whilst the Dutch definition is based on the so-called substitution method. NL Agency was asked to add the calculation according to the gross end-use method, although this method is not clearly defined on a number of points. In describing the method, the unanswered questions become clear, as do, for example, the points the Netherlands should bring up in international discussions.
Directory of Open Access Journals (Sweden)
Sarah E Murray
2014-03-01
This paper discusses three potential varieties of update: updates to the common ground, structuring updates, and updates that introduce discourse referents. These different types of update are used to model different aspects of natural language phenomena. Not-at-issue information directly updates the common ground. The illocutionary mood of a sentence structures the context. Other updates introduce discourse referents of various types, including propositional discourse referents for at-issue information. Distinguishing these types of update allows a unified treatment of a broad range of phenomena, including the grammatical evidentials found in Cheyenne (Algonquian), as well as English evidential parentheticals, appositives, and mood marking. An update semantics that can formalize all of these varieties of update is given, integrating the different kinds of semantic contributions into a single representation of meaning. http://dx.doi.org/10.3765/sp.7.2
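The contrast between the varieties of update can be mimicked in a toy possible-worlds setting: worlds are sets of atomic facts, the common ground is the set of live worlds, not-at-issue content intersects the common ground directly, and at-issue content is first proposed as a propositional discourse referent and only enters the common ground on acceptance. The class, method names, and representation below are illustrative, not the paper's formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Toy update-semantics context (illustrative, not the paper's system).

    Worlds are frozensets of atomic facts; the common ground is the set of
    worlds still live; drefs maps names to proposed propositions.
    """
    common_ground: set
    drefs: dict = field(default_factory=dict)

    def assert_not_at_issue(self, prop):
        # not-at-issue content updates the common ground directly
        self.common_ground = {w for w in self.common_ground if prop(w)}

    def introduce_dref(self, name, prop):
        # at-issue content is first proposed via a propositional referent
        self.drefs[name] = prop

    def accept(self, name):
        # acceptance promotes the proposal to a common-ground update
        self.assert_not_at_issue(self.drefs[name])

worlds = {frozenset(), frozenset({"rain"}), frozenset({"rain", "wind"})}
ctx = Context(set(worlds))
ctx.introduce_dref("p", lambda w: "rain" in w)  # at-issue: "it's raining"
```

Introducing the referent leaves the common ground untouched; only a subsequent `accept` eliminates the worlds where the proposition fails, which captures the at-issue versus not-at-issue asymmetry the paper formalizes.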
SEXIE 3.0 — an updated computer program for the calculation of coordination shells and geometries
Tabor-Morris, Anne E.; Rupp, Bernhard
1994-08-01
We report a new version of our FORTRAN program SEXIE (ACBV). New features permit interfacing to related programs for EXAFS calculations (FEFF by J.J. Rehr et al.) and structure visualization (SCHAKAL by E. Keller). The code has been refined, and the basis transformation matrix from fractional to Cartesian coordinates has been corrected and made compatible with IUCr (International Union of Crystallography) standards. We discuss how to determine the correct space-group setting and atom-position input. New examples of Unix script files are provided.
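The fractional-to-Cartesian basis transformation mentioned above is, in the common IUCr-style convention (x along a, y in the a-b plane), an upper-triangular matrix built from the cell parameters. A quick numpy sketch of that convention (a check against the program's own matrix would still be needed):

```python
import numpy as np

def frac_to_cart_matrix(a, b, c, alpha, beta, gamma):
    """Fractional-to-Cartesian transformation matrix.

    Uses the common IUCr-style convention with x along a and y in the a-b
    plane; cell lengths in any consistent unit, angles in degrees.
    """
    al, be, ga = np.radians([alpha, beta, gamma])
    # v is the unit-cell volume divided by a*b*c
    v = np.sqrt(1.0 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                + 2.0 * np.cos(al) * np.cos(be) * np.cos(ga))
    return np.array([
        [a, b * np.cos(ga), c * np.cos(be)],
        [0.0, b * np.sin(ga),
         c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
        [0.0, 0.0, c * v / np.sin(ga)],
    ])
```

For an orthorhombic cell the matrix reduces to diag(a, b, c), and for any cell its determinant equals the unit-cell volume, which gives two quick correctness checks.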
Box models for the evolution of atmospheric oxygen: an update
Kasting, J. F.
1991-01-01
A simple 3-box model of the atmosphere/ocean system is used to describe the various stages in the evolution of atmospheric oxygen. In Stage I, which probably lasted until redbeds began to form about 2.0 Ga ago, the Earth's surface environment was generally devoid of free O2, except possibly in localized regions of high productivity in the surface ocean. In Stage II, which may have lasted for less than 150 Ma, the atmosphere and surface ocean were oxidizing, while the deep ocean remained anoxic. In Stage III, which commenced with the disappearance of banded iron formations around 1.85 Ga ago and has lasted until the present, all three surface reservoirs contained appreciable amounts of free O2. Recent and not-so-recent controversies regarding the abundance of oxygen in the Archean atmosphere are identified and discussed. The rate of O2 increase during the Middle and Late Proterozoic is identified as another outstanding question.
An Updated GA Signaling 'Relief of Repression' Regulatory Model
Institute of Scientific and Technical Information of China (English)
Xiu-Hua Gao; Sen-Lin Xiao; Qin-Fang Yao; Yu-Juan Wang; Xiang-Dong Fu
2011-01-01
Gibberellic acid (GA) regulates many aspects of plant growth and development. The DELLA proteins act to restrain plant growth, and GA relieves this repression by promoting their degradation via the 26S proteasome pathway. The elucidation of the crystal structure of the soluble GA receptor GID1 protein represents an important breakthrough for understanding the way in which GA is perceived and how it induces the destabilization of the DELLA proteins. Recent advances have revealed that the DELLA proteins are involved in protein-protein interactions within various environmental and hormone signaling pathways. In this review, we highlight our current understanding of the 'relief of repression' model that aims to explain the role of GA and the function of the DELLA proteins, incorporating the many aspects of cross-talk shown to exist in the control of plant development and the response to stress.
Mixed-Symmetry Shell-Model Calculations in Nuclear Physics
Gueorguiev, V G
2010-01-01
We consider a novel approach to the nuclear shell model. The one-dimensional harmonic oscillator in a box is used to introduce the concept of an oblique-basis shell-model theory. By implementing the Lanczos method for diagonalization of large matrices and the Cholesky algorithm for solving generalized eigenvalue problems, the method is applied to nuclei. The mixed-symmetry basis combines traditional spherical shell-model states with SU(3) collective configurations. We test the validity of this mixed-symmetry scheme on 24Mg and 44Ti. Results for 24Mg, obtained using the Wildenthal USD interaction in a space that spans less than 10% of the full space, reproduce the binding energy within 2% and accurately reproduce the low-energy spectrum and the structure of the states (90% overlap with the exact eigenstates). In contrast, an m-scheme calculation needs about 60% of the full space to obtain comparable results. Calculations for 44Ti support the mixed-mode scheme although the pure SU(3) ca...
Ding, Zhong-Jun; Jiang, Rui; Gao, Zi-You; Wang, Bing-Hong; Long, Jiancheng
2013-08-01
The effect of overpasses in the Biham-Middleton-Levine traffic flow model with random and parallel update rules has been studied. An overpass is a site that can be occupied simultaneously by an eastbound car and a northbound one. Under periodic boundary conditions, both self-organized and random patterns are observed in the free-flowing phase of the parallel update model, while only the random pattern is observed in the random update model. We have developed a mean-field analysis for the moving phase of the random update model, which agrees well with the simulation results. An intermediate phase is observed in which some cars can pass through the jamming cluster due to the existence of free paths in the random update model. Two intermediate states are observed in the parallel update model, which have been ignored in previous studies. The intermediate phases in which the jamming skeleton is oriented only along the diagonal line in both models have been analyzed, with the analyses agreeing well with the simulation results. With the increase of the overpass ratio, the jamming phase and the intermediate phases disappear in succession in both models. Under open boundary conditions, the system exhibits only two phases when the ratio of overpasses is below a threshold in the random update model. When the overpass ratio is close to 1, three phases can be observed, similar to the totally asymmetric simple exclusion process model. The dependence of the average velocity, the density, and the flow rate on the injection probability in the moving phase has also been obtained through mean-field analysis. The results of the parallel model under open boundary conditions are similar to those of the random update model.
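The update rules described above can be sketched in a few lines. The following is a minimal, illustrative simulation of the parallel-update variant with overpass sites; it is our own simplified reconstruction, not the authors' code, and the parameters, the alternating sub-step convention, and the random placement of overpasses are assumptions:

```python
import random

def simulate_bml(n=8, density=0.3, overpass_ratio=0.2, steps=40, seed=3):
    """Minimal BML sketch on an n x n torus with parallel update.
    Eastbound cars move right on even steps, northbound cars move up on
    odd steps; an overpass site may hold one car of EACH type at once."""
    rng = random.Random(seed)
    overpass = {(i, j) for i in range(n) for j in range(n)
                if rng.random() < overpass_ratio}
    east, north = set(), set()
    for i in range(n):
        for j in range(n):
            r = rng.random()
            if r < density / 2:
                east.add((i, j))        # site holds one eastbound car
            elif r < density:
                north.add((i, j))       # or one northbound car
    def step(movers, blockers, target):
        # Parallel update: a car advances only if its target site held no
        # same-direction car and no (non-overpass) opposite-direction car
        # at the start of the sub-step.
        new = set()
        for pos in movers:
            tgt = target(pos)
            blocked = tgt in movers or (tgt in blockers and tgt not in overpass)
            new.add(pos if blocked else tgt)
        return new
    for t in range(steps):
        if t % 2 == 0:
            east = step(east, north, lambda p: (p[0], (p[1] + 1) % n))
        else:
            north = step(north, east, lambda p: ((p[0] + 1) % n, p[1]))
    return east, north, overpass
```

Cars are conserved by construction, and only overpass sites ever hold a car of each type, which are the two invariants that distinguish the overpass-extended model from the classic BML rules.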
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
Energy Technology Data Exchange (ETDEWEB)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
1981-11-01
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
MODEL OF FEES CALCULATION FOR ACCESS TO TRACK INFRASTRUCTURE FACILITIES
Directory of Open Access Journals (Sweden)
M. I. Mishchenko
2014-12-01
Purpose. The purpose of the article is to develop one- and two-element models for calculating the fees for the use of the track infrastructure of Ukrainian railway transport. Methodology. When planning preventive track repair works (PPTRW) and the amount of depreciation charges, the guiding criterion is not the traffic volume but the operating life of the track infrastructure facilities. The cost of the PPTRW is determined on the basis of the following: the classification of track repairs; typical technological processes for track repairs; technology-based time standards for the PPTRW; the labor costs of the personnel performing the PPTRW and their hourly wage rates according to Order 98-Ts; the operating cost of machinery; the regulated list, norms of expenditure and costs of materials and products (these have the largest share of the repair costs); railway rates; average distances for the transportation of materials used during repairs; and the standards for general production and administrative costs. Findings. The models offered in the article allow an objective accounting of track facility expenses for the purpose of calculating a justified amount of compensation and the profit necessary for the effective activity of the track infrastructure enterprises. Originality. The methodological basis for determining the fees (payments) for the use of the track infrastructure on a one- and two-element basis was grounded, taking into account the experience of railways in the EU countries and the current transport legislation. Practical value. The article proposes one- and two-element models for calculating the fees (payments) for the use of the track infrastructure that account for the applicable requirements of European transport legislation and provide the expense compensation and income formation sufficient to give economic incentives for the efficient operation of the track infrastructure of Ukrainian railway transport.
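The one- versus two-element charging principle can be illustrated with a toy calculation. This is a sketch only; the function names, rate structure, and numbers are our assumptions, not the article's actual formulas:

```python
def one_element_fee(train_km, unit_rate):
    """One-element model: a single charge proportional to usage."""
    return unit_rate * train_km

def two_element_fee(train_km, fixed_charge, variable_rate):
    """Two-element model: a fixed (capacity/access) component covering
    time-dependent infrastructure costs, plus a variable component
    proportional to traffic volume (wear-dependent)."""
    return fixed_charge + variable_rate * train_km
```

Under the two-element scheme the infrastructure manager recovers its fixed maintenance burden even at low traffic volumes, which reflects the article's point that repair cost is driven by the operating life of the facilities rather than by traffic volume alone.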
Model test and CFD calculation of a cavitating bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Necker, J; Aschenbrenner, T, E-mail: joerg.necker@voith.co [Voith Hydro Holding GmbH and Co. KG Alexanderstrasse 11, 89522 Heidenheim (Germany)
2010-08-15
The flow in a horizontal shaft bulb turbine is calculated as a two-phase flow with a commercial Computational Fluid Dynamics (CFD) code including a cavitation model. The results are compared with experimental results achieved at a closed-loop test rig for model turbines. On the model test rig, for a certain operating point (i.e. volume flow, net head, blade angle, guide vane opening) the pressure behind the turbine is lowered (i.e. the Thoma-coefficient {sigma} is lowered) and the efficiency of the turbine is recorded. The measured values can be depicted in a so-called {sigma}-break curve or {eta}-{sigma}-diagram. Usually, the efficiency is independent of the Thoma-coefficient up to a certain value. When lowering the Thoma-coefficient below this value, the efficiency drops rapidly. Visual observations of the different cavitation conditions complete the experiment. In analogy, several calculations are done for different Thoma-coefficients {sigma}, and the corresponding hydraulic losses of the runner are evaluated quantitatively. For a low {sigma}-value showing significant efficiency loss in the experiment, the change of volume flow observed in the experiment was also simulated. Besides, the fraction of water vapour, as an indication of the size of the cavitation cavity, is analyzed qualitatively. The experimentally and numerically obtained results are compared and show a good agreement. In particular, the drop in efficiency can be calculated with satisfying accuracy. This drop in efficiency is of high practical importance, since it is one criterion to determine the admissible cavitation in a bulb turbine. The visual impression of the cavitation in the CFD analysis is well in accordance with the observed cavitation bubbles recorded on sketches and/or photographs.
Space resection model calculation based on Random Sample Consensus algorithm
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important topics in photogrammetry. It aims at recovering the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm which, by using the RANSAC method with the DLT model, effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
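The RANSAC logic the paper relies on can be sketched generically. Here it is shown on a simple line-fitting problem rather than the full DLT resection; the function names, sample size, and threshold are illustrative assumptions:

```python
import random

def ransac(data, fit, error, n_sample, threshold, iters=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample,
    keep the model with the largest consensus set, then refit on it."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit(rng.sample(data, n_sample))
        if model is None:                 # degenerate sample, skip it
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:
        best_model = fit(best_inliers)    # final least-squares refit
    return best_model, best_inliers

def fit_line(pts):
    """Least-squares fit of y = a*x + b; None if x-values are degenerate."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None
    a = (n * sxy - sx * sy) / denom
    return a, (sy - a * sx) / n

def line_error(model, p):
    a, b = model
    return abs(p[1] - (a * p[0] + b))
```

In the paper's setting, the minimal sample would instead be the 3D-2D correspondences needed to solve the DLT equations, and the error would be the reprojection residual; the consensus loop is unchanged.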
Shell-model calculations for p-shell hypernuclei
Millener, D. J.
2012-01-01
The interpretation of hypernuclear gamma-ray data for p-shell hypernuclei in terms of shell-model calculations that include the coupling of Lambda- and Sigma-hypernuclear states is briefly reviewed. Next, Lambda 8Li, Lambda 8Be, and Lambda 9Li are considered, both to exhibit features of Lambda-Sigma coupling and as possible sources of observed, but unassigned, hypernuclear gamma rays. Then, the feasibility of measuring the ground-state doublet spacing of Lambda 10Be, which, like Lambda 9Li, co...
Model and Calculation of Container Port Logistics Enterprises Efficiency Indexes
Directory of Open Access Journals (Sweden)
Xiao Hong
2013-04-01
The throughput of China's container ports is growing fast, but the earnings of inland port enterprises are not so good. First, the initial efficiency evaluation indexes of port logistics are reduced and screened by a rough set model, and then the logistics performance index weights are assigned by a rough set weight calculation method. The indexes are then ranked and the important indexes picked out by combining this with the ABC management method, so that port logistics enterprises can monitor the key indexes to reduce cost and improve the efficiency of their logistics operations.
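The ABC step, ranking indexes by weight and keeping the vital few, can be sketched as follows. The cut-off percentages and the example index names are illustrative assumptions, not the paper's values:

```python
def abc_classify(weights, a_cut=0.80, b_cut=0.95):
    """Rank indexes by weight; class A covers roughly the first 80% of
    cumulative weight, class B the next ~15%, class C the remainder."""
    total = sum(weights.values())
    classes, cum = {}, 0.0
    for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        cum += w / total
        classes[name] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes
```

Class-A indexes are the ones an enterprise would monitor continuously; class-C indexes can be reviewed only periodically.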
GMCALC: a calculator for the Georgi-Machacek model
Hartling, Katy; Logan, Heather E
2014-01-01
The Georgi-Machacek model adds scalar triplets to the Standard Model Higgs sector in such a way as to preserve custodial SU(2) symmetry in the scalar potential. This allows the triplets to have a non-negligible vacuum expectation value while satisfying constraints from the rho parameter. Depending on the parameters, the 125 GeV neutral Higgs particle can have couplings to WW and ZZ larger than in the Standard Model due to mixing with the triplets. The model also contains singly- and doubly-charged Higgs particles that couple to vector boson pairs at tree level (WZ and like-sign WW, respectively). GMCALC is a self-contained FORTRAN program that, given a set of input parameters, calculates the particle spectrum and tree-level couplings in the Georgi-Machacek model, checks theoretical and indirect constraints, and computes the branching ratios and total widths of the scalars. It also generates a param_card.dat file for MadGraph5 to be used with the corresponding FeynRules model implementation.
Mathematical Model and Programming in VBA Excel for Package Calculation
Directory of Open Access Journals (Sweden)
João Daniel Reis Lessa
2016-05-01
Industrial logistics is a fundamental pillar for the survival of companies in today's increasingly competitive market. It is not exclusively about controlling the flow of external material between suppliers and the company, but also about developing a detailed study of how to plan, control, handle and package those materials. Logistics activities must ensure maximum efficiency in the use of corporate resources, since they do not add value to the final product. A logistics plan for each part of the company's production has to adapt to the demand parameters, seasonal or not, over time. Thus, the definition of packaging (transportation and consumption) must be adjusted in accordance with the demand, in order to allow the logistics planning to work constantly with economic order batches. The packaging calculation for each part under every demand can become quite complicated due to the large number of parts in the production process. Automating the process of choosing the right package for each part is therefore an effective method in logistics planning. This article presents a simple and practical mathematical model for automating the packaging calculation, together with a program written in Visual Basic for Applications in Excel that generates graphic designs showing how the packages are filled.
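The core of such an automated packaging choice can be sketched in a few lines. This is a hedged illustration under our own assumptions (the candidate packages, tie-breaking rule, and quantities are made up); the article's actual model and VBA implementation are more detailed:

```python
def choose_package(batch_qty, packages):
    """Pick, from candidate packages (name, capacity), the one that ships a
    batch in the fewest containers, breaking ties by least wasted space.
    A real model would also weigh package and handling costs."""
    best = None
    for name, capacity in packages:
        containers = -(-batch_qty // capacity)       # ceiling division
        waste = containers * capacity - batch_qty    # unused slots
        key = (containers, waste)
        if best is None or key < best[0]:
            best = (key, name, containers)
    return best[1], best[2]
```

Running this per part and per demand period is exactly the repetitive calculation the article proposes to automate.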
Update on PHELIX Pulsed-Power Hydrodynamics Experiments and Modeling
Rousculp, Christopher; Reass, William; Oro, David; Griego, Jeffery; Turchi, Peter; Reinovsky, Robert; Devolder, Barbara
2013-10-01
The PHELIX pulsed-power driver is a 300 kJ, portable, transformer-coupled capacitor bank capable of delivering a 3-5 MA, 10 μs pulse into a low-inductance load. Here we describe further testing and hydrodynamics experiments. First, a 4 nH static inductive load has been constructed. This allows for repetitive high-voltage, high-current testing of the system. Results are used in the calibration of simple circuit models and numerical simulations across a range of bank charges (+/-20 < V0 < +/-40 kV). Furthermore, a dynamic liner-on-target load experiment has been conducted to explore the shock-launched transport of particulates (diam. ~ 1 μm) from a surface. The trajectories of the particulates are diagnosed with radiography. Results are compared to 2D hydro-code simulations. Finally, initial studies are underway to assess the feasibility of using the PHELIX driver as an electromagnetic launcher for planar shock-physics experiments. Work supported by United States-DOE under contract DE-AC52-06NA25396.
Calculating fermion masses in superstring derived standard-like models
Energy Technology Data Exchange (ETDEWEB)
Faraggi, A.E.
1996-04-01
One of the intriguing achievements of the superstring derived standard-like models in the free fermionic formulation is the possible explanation of the top quark mass hierarchy and the successful prediction of the top quark mass. An important property of the superstring derived standard-like models, which enhances their predictive power, is the existence of three and only three generations in the massless spectrum. Up to some motivated assumptions with regard to the light Higgs spectrum, it is then possible to calculate the fermion masses in terms of string tree level amplitudes and some VEVs that parameterize the string vacuum. I discuss the calculation of the heavy generation masses in the superstring derived standard-like models. The top quark Yukawa coupling is obtained from a cubic level mass term while the bottom quark and tau lepton mass terms are obtained from nonrenormalizable terms. The calculation of the heavy fermion Yukawa couplings is outlined in detail in a specific toy model. The dependence of the effective bottom quark and tau lepton Yukawa couplings on the flat directions at the string scale is examined. The gauge and Yukawa couplings are extrapolated from the string unification scale to low energies. Agreement with {alpha}{sub strong}, sin{sup 2} {theta}{sub W} and {alpha}{sub em} at M{sub Z} is imposed, which necessitates the existence of intermediate matter thresholds. The needed intermediate matter thresholds exist in the specific toy model. The effect of the intermediate matter thresholds on the extrapolated Yukawa couplings is studied. It is observed that the intermediate matter thresholds help to maintain the correct b/{tau} mass relation. It is found that for a large portion of the parameter space, the LEP precision data for {alpha}{sub strong}, sin{sup 2} {theta}{sub W} and {alpha}{sub em}, as well as the top quark mass and the b/{tau} mass relation can all simultaneously be consistent with the superstring derived standard-like models.
Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process
Directory of Open Access Journals (Sweden)
ZHANG Feng
2016-02-01
The core of modern cadastre management is to renew the cadastre database and keep it current, topologically consistent and complete. This paper analyzes the changes of various cadastral objects, and their linkages, in the update process. Combining object-oriented modeling techniques with the expression of spatio-temporal objects' evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and furthermore, cascade updating and history tracing of cadastral features, land use and buildings are realized. This model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by a direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, and the analysis and forecasting of future changes.
Grip, Niklas; Sabourova, Natalia; Tu, Yongming
2017-02-01
Sensitivity-based Finite Element Model Updating (FEMU) is one of the widely accepted techniques used for damage identification in structures. FEMU can be formulated as a numerical optimization problem and solved iteratively, automatically updating the unknown model parameters by minimizing the difference between measured and analytical structural properties. However, in the presence of noise in the measurements, the updating results are usually prone to errors. Mathematically, this is described as instability of the damage identification as an inverse problem. One way to resolve this problem is to use regularization. In this paper, we compare a well-established interpolation-based regularization method against methods based on the minimization of the total variation of the unknown model parameters. These are new regularization methods for structural damage identification. We investigate how using the Huber and pseudo-Huber functions in the definition of total variation affects important properties of the methods. For instance, for well-localized damage the results show a clear advantage of the total variation based regularization in terms of the identified location and severity of damage compared with the interpolation-based solution. For a practical test of the proposed method we use a reinforced concrete plate. Measurements and analysis were performed first on an undamaged plate, and then repeated after applying four different degrees of damage.
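The two penalty functions compared above have standard closed forms. A minimal sketch of a total-variation term built from them follows, with δ denoting the quadratic-to-linear transition parameter (the interface and the use of simple first differences are our assumptions, not the paper's exact formulation):

```python
import math

def huber(x, delta):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = abs(x)
    return 0.5 * x * x if a <= delta else delta * (a - 0.5 * delta)

def pseudo_huber(x, delta):
    """Smooth (everywhere differentiable) approximation of the Huber loss."""
    return delta * delta * (math.sqrt(1.0 + (x / delta) ** 2) - 1.0)

def total_variation(theta, penalty, delta):
    """Penalize first differences of the parameter vector; with a robust
    penalty this favours piecewise-constant (well-localized) damage."""
    return sum(penalty(theta[i + 1] - theta[i], delta)
               for i in range(len(theta) - 1))
```

Because the linear tails penalize large jumps only mildly, a few sharp parameter changes (localized damage) survive the regularization, whereas a purely quadratic penalty would smear them out.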
Davies, H. C.; Turner, R. E.
1977-01-01
A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
Testing the prognostic accuracy of the updated pediatric sepsis biomarker risk model.
Directory of Open Access Journals (Sweden)
Hector R Wong
BACKGROUND: We previously derived and validated a risk model to estimate mortality probability in children with septic shock (PERSEVERE; PEdiatRic SEpsis biomarkEr Risk modEl). PERSEVERE uses five biomarkers and age to estimate mortality probability. After the initial derivation and validation of PERSEVERE, we combined the derivation and validation cohorts (n = 355) and updated PERSEVERE. An important step in the development of updated risk models is to test their accuracy using an independent test cohort. OBJECTIVE: To test the prognostic accuracy of the updated version of PERSEVERE in an independent test cohort. METHODS: Study subjects were recruited from multiple pediatric intensive care units in the United States. Biomarkers were measured in 182 pediatric subjects with septic shock using serum samples obtained during the first 24 hours of presentation. The accuracy of the PERSEVERE 28-day mortality risk estimate was tested using diagnostic test statistics, and the net reclassification improvement (NRI) was used to test whether PERSEVERE adds information to a physiology-based scoring system. RESULTS: Mortality in the test cohort was 13.2%. Using a risk cut-off of 2.5%, the sensitivity of PERSEVERE for mortality was 83% (95% CI 62-95), specificity was 75% (68-82), positive predictive value was 34% (22-47), and negative predictive value was 97% (91-99). The area under the receiver operating characteristic curve was 0.81 (0.70-0.92). The false positive subjects had a greater degree of organ failure burden and longer intensive care unit length of stay, compared to the true negative subjects. When adding PERSEVERE to a physiology-based scoring system, the net reclassification improvement was 0.91 (0.47-1.35; p<0.001). CONCLUSIONS: The updated version of PERSEVERE estimates mortality probability reliably in a heterogeneous test cohort of children with septic shock and provides information over and above a physiology-based scoring system.
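The diagnostic test statistics quoted above all follow from a 2x2 confusion table. A minimal sketch; note that the cell counts used in the usage example below are our own reconstruction from the reported percentages, not values taken from the paper:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2-table diagnostic test statistics."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

With roughly 24 deaths among 182 subjects (13.2% mortality), hypothetical counts of tp=20, fp=40, fn=4, tn=118 reproduce approximately the reported 83%/75%/34%/97%.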
Recent Developments in No-Core Shell-Model Calculations
Energy Technology Data Exchange (ETDEWEB)
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.
[Diffusion factor calculation for TIP4P model of water].
Zlenko, D V
2012-01-01
A molecular dynamics study has been undertaken for a model of liquid TIP4P water. Thermal dependencies of the water density and radial distribution functions were calculated for model verification. Three methods have been used for the calculation of the thermal dependence of the diffusion factor, and their sensitivity to the size of the molecular system and the length of the trajectory has been analyzed. It has been shown that the approach based on the Green-Kubo formula, which relates the diffusion factor to the integral of the velocity autocorrelation function, is preferable for short MD simulations. The second approach, based on the Einstein equation relating the mean square displacement of a molecule to time, is preferable for long simulations. It has also been demonstrated that the second approach can be modified to make it more stable and reliable: the modification is to use the slope of the graph of mean square displacement versus time as the estimate of the diffusion factor, instead of the ratio of the molecule's mean square displacement to time.
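The recommended modification, taking the slope of MSD versus time rather than the ratio MSD/t, amounts to a least-squares fit of the Einstein relation MSD = 2·dim·D·t. A minimal sketch (the synthetic data in the usage example are illustrative, with units left abstract):

```python
def diffusion_from_msd(times, msd, dim=3):
    """Estimate the diffusion factor D as the least-squares slope of the
    MSD-versus-time graph divided by 2*dim (Einstein relation
    MSD = 2*dim*D*t). More robust than the pointwise ratio MSD/t."""
    n = len(times)
    st, sm = sum(times), sum(msd)
    stt = sum(t * t for t in times)
    stm = sum(t * m for t, m in zip(times, msd))
    slope = (n * stm - st * sm) / (n * stt - st * st)
    return slope / (2 * dim)
```

The slope-based estimate averages out the noisy short-time ballistic regime instead of letting it bias a single MSD/t ratio, which is the stability gain the abstract describes.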
Belcher, Wayne R.; Sweetkind, Donald S.; Faunt, Claudia C.; Pavelko, Michael T.; Hill, Mary C.
2017-01-19
Since the original publication of the Death Valley regional groundwater flow system (DVRFS) numerical model in 2004, more information on the regional groundwater flow system in the form of new data and interpretations has been compiled. Cooperators such as the Bureau of Land Management, National Park Service, U.S. Fish and Wildlife Service, the Department of Energy, and Nye County, Nevada, recognized a need to update the existing regional numerical model to maintain its viability as a groundwater management tool for regional stakeholders. The existing DVRFS numerical flow model was converted to MODFLOW-2005, updated with the latest available data, and recalibrated. Five main data sets were revised: (1) recharge from precipitation varying in time and space, (2) pumping data, (3) water-level observations, (4) an updated regional potentiometric map, and (5) a revision to the digital hydrogeologic framework model. The resulting DVRFS version 2.0 (v. 2.0) numerical flow model simulates groundwater flow conditions for the Death Valley region from 1913 to 2003 to correspond to the time frame for the most recently published (2008) water-use data. The DVRFS v. 2.0 model was calibrated by using the Tikhonov regularization functionality in the parameter estimation and predictive uncertainty software PEST. In order to assess the accuracy of the numerical flow model in simulating regional flow, the fit of simulated to target values (consisting of hydraulic heads and flows, including evapotranspiration and spring discharge, flow across the model boundary, and interbasin flow; the regional water budget; values of parameter estimates; and sensitivities) was evaluated. This evaluation showed that DVRFS v. 2.0 simulates conditions similar to DVRFS v. 1.0. Comparisons of the target values with simulated values also indicate that they match reasonably well and in some cases (boundary flows and discharge) significantly better than in DVRFS v. 1.0.
The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application
Energy Technology Data Exchange (ETDEWEB)
Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation (United States)
2015-01-01
Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.
Energy Technology Data Exchange (ETDEWEB)
Chen, Yun; Geng, Chao-Qiang [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China); Cao, Shuo; Huang, Yu-Mei; Zhu, Zong-Hong, E-mail: chenyun@bao.ac.cn, E-mail: geng@phys.nthu.edu.tw, E-mail: caoshuo@bnu.edu.cn, E-mail: huangymei@gmail.com, E-mail: zhuzh@bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing 100875 (China)
2015-02-01
We constrain the scalar field dark energy model with an inverse power-law potential, i.e., V(φ) ∝ φ{sup −α} (α > 0), from a set of recent cosmological observations by compiling an updated sample of Hubble parameter measurements including 30 independent data points. Our results show that the constraining power of the updated sample of H(z) data with the HST prior on H{sub 0} is stronger than those of the SCP Union2 and Union2.1 compilations. A recent sample of strong gravitational lensing systems is also adopted to constrain the model, even though the results are not significant. A joint analysis of the strong gravitational lensing data with the more restrictive updated Hubble parameter measurements and the Type Ia supernovae data from SCP Union2 indicates that the recent observations still cannot distinguish whether dark energy is a time-independent cosmological constant or a time-varying dynamical component.
Propagation Modeling of Food Safety Crisis Information Update Based on the Multi-agent System
Directory of Open Access Journals (Sweden)
Meihong Wu
2015-08-01
Full Text Available This study proposes a new multi-agent system framework based on epistemic default complex adaptive theory and uses agent-based simulation and modeling of the information-updating process to study food safety crisis information dissemination. We then explore the interaction effects among the agents involved in food safety crisis information dissemination in the current environment, revealing in particular how the government agent, food company agent and network media agent influence users' confidence in food safety. The information-updating process describes how to guide the normal spread of a food safety crisis in public opinion in the current environment and how to enhance average users' confidence in food quality and safety.
Energy Technology Data Exchange (ETDEWEB)
Kneur, J.L
2006-06-15
This document is divided into 2 parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, like the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that can be inferred on these models.
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
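A logistic-regression likelihood model of the kind described can be sketched as below; the predictor set and coefficient values are hypothetical placeholders, not the fitted USGS equations:

```python
import math

def debris_flow_likelihood(intensity, burn_frac, gradient, coeffs):
    """Logistic model: P = 1 / (1 + exp(-(b0 + b1*I + b2*B + b3*G))),
    with I = peak rainfall intensity, B = fraction of the basin burned
    at high severity, G = mean basin gradient. Coefficients are invented."""
    b0, b1, b2, b3 = coeffs
    x = b0 + b1 * intensity + b2 * burn_frac + b3 * gradient
    return 1.0 / (1.0 + math.exp(-x))

coeffs = (-3.6, 0.07, 2.0, 1.5)  # hypothetical fitted values
p_low = debris_flow_likelihood(10.0, 0.6, 0.4, coeffs)
p_high = debris_flow_likelihood(40.0, 0.6, 0.4, coeffs)
```

For a positive rainfall coefficient the likelihood rises monotonically with storm intensity, which is the qualitative behaviour the hazard assessments rely on.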
2016-11-03
This final rule updates the Home Health Prospective Payment System (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2017. This rule also: Implements the last year of the 4-year phase-in of the rebasing adjustments to the HH PPS payment rates; updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the 2nd year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between CY 2012 and CY 2014; finalizes changes to the methodology used to calculate payments made under the HH PPS for high-cost "outlier" episodes of care; implements changes in payment for furnishing Negative Pressure Wound Therapy (NPWT) using a disposable device for patients under a home health plan of care; discusses our efforts to monitor the potential impacts of the rebasing adjustments; includes an update on subsequent research and analysis as a result of the findings from the home health study; finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model, which was implemented on January 1, 2016; and finalizes updates to the Home Health Quality Reporting Program (HH QRP).
Su, Chiu-Wen; Ming-Fang Yen, Amy; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng
2017-07-28
Background: The accuracy of prediction models for periodontal disease based on the community periodontal index (CPI) has been assessed using the area under the receiver operating characteristic (AUROC) curve, but how an uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiological study, affects the performance of a prediction model built on it has not yet been researched. Methods: We conducted a two-stage design by first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing the AUROC curves between the original and the updated model. Results: The estimates regarding the calibration of CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% CI: 61.7%-63.6%) for the non-updated model to 68.9% (95% CI: 68.0%-69.6%) for the updated one, reaching a statistically significant difference (P periodontal disease as measured by the calibrated CPI derived from a large epidemiological survey.
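Turning logistic-regression weights into integer clinical risk points, as done for the CPI-based score above, is commonly handled by scaling each coefficient against a reference beta and rounding (a Framingham-style scheme). A sketch with invented coefficients, not the paper's Bayesian estimates:

```python
def risk_points(coefficients):
    """Scale each log-odds coefficient by the smallest nonzero one and
    round to the nearest integer to obtain clinical risk points."""
    ref = min(abs(b) for b in coefficients.values() if b)
    return {name: round(b / ref) for name, b in coefficients.items()}

# Hypothetical predictor weights (log odds ratios)
betas = {"calibrated CPI": 0.8, "smoking": 0.4, "age >= 65": 1.2}
points = risk_points(betas)
total_score = sum(points.values())
```

Recalibrating the CPI inflates its coefficient, which in turn inflates its points and the maximum attainable score, mirroring the 434-to-630 shift reported above.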
Nonlinear damping calculation in cylindrical gear dynamic modeling
Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc
2012-04-01
The nonlinear dynamic problem posed by cylindrical gear systems has been extensively covered in the literature. Nonetheless, a significant proportion of the mechanisms involved in damping generation remains to be investigated and described. The main objective of this study is to contribute to this task. Overall, damping is assumed to consist of three sources: surrounding element contribution, hysteresis of the teeth, and oil squeeze damping. The first two contributions are considered to be commensurate with the supported load; squeeze damping, however, is formulated using expressions developed from the Reynolds equation. A lubricated impact analysis between the teeth is introduced in this study for the minimum film thickness calculation during contact losses. The dynamic transmission error (DTE) obtained from the final model showed close agreement with experimental measurements available in the literature. The nonlinear damping ratio calculated at different mesh frequencies and torque amplitudes presented average values between 5.3 percent and 8 percent, which is comparable to the constant 8 percent ratio used in published numerical simulations of an equivalent gear pair. A close analysis of the oil squeeze damping evidenced the inverse relationship between this damping effect and the applied load.
Unquenched quark-model calculation of X(3872) electromagnetic decays
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Marco [Universidade de Lisboa, Centro de Fisica Teorica de Particulas, Instituto Superior Tecnico, Lisbon (Portugal); Rupp, George [Universidade de Lisboa, Centro de Fisica das Interaccoes Fundamentais, Instituto Superior Tecnico, Lisbon (Portugal); Beveren, Eef van [Universidade de Coimbra, Departamento de Fisica, Centro de Fisica Computacional, Coimbra (Portugal)
2015-01-01
A recent quark-model description of X(3872) as an unquenched 2³P₁ cc̄ state is generalised by now including all relevant meson-meson configurations, in order to calculate the widths of the experimentally observed electromagnetic decays X(3872) → γJ/ψ and X(3872) → γψ(2S). Interestingly, the inclusion of additional two-meson channels, most importantly D±D*∓, leads to a sizeable increase of the cc̄ probability in the total wave function, although the D⁰D̄*⁰ component remains the dominant one. As for the electromagnetic decays, unquenching strongly reduces the γψ(2S) decay rate; yet it even more sharply enhances the γJ/ψ rate, resulting in a decay ratio compatible with one experimental observation but in slight disagreement with two others. Nevertheless, the results show a dramatic improvement as compared to a quenched calculation with the same confinement force and parameters. Concretely, we obtain Γ(X(3872) → γψ(2S)) = 28.9 keV and Γ(X(3872) → γJ/ψ) = 24.7 keV, with branching ratio R_γψ = 1.17. (orig.)
Update of the Computing Models of the WLCG and the LHC Experiments
Bird, I; Carminati, F; Cattaneo, M; Clarke, P; Fisk, I; Girone, M; Harvey, J; Kersevan, B; Mato, P; Mount, R; Panzer-Steindel, B; CERN. Geneva. The LHC experiments Committee; LHCC
2014-01-01
In preparation for the data collection and analysis in LHC Run 2, the LHCC and Computing Scrutiny Group of the RRB requested a detailed review of the current computing models of the LHC experiments and a consolidated plan for the future computing needs. This document represents the status of the work of the WLCG collaboration and the four LHC experiments in updating the computing models to reflect the advances in understanding of the most effective ways to use the distributed computing and storage resources, based upon the experience gained during LHC Run 1.
Neuroadaptation in nicotine addiction: update on the sensitization-homeostasis model.
DiFranza, Joseph R; Huang, Wei; King, Jean
2012-10-17
The role of neuronal plasticity in supporting the addictive state has generated much research and some conceptual theories. One such theory, the sensitization-homeostasis (SH) model, postulates that nicotine suppresses craving circuits, and this triggers the development of homeostatic adaptations that autonomously support craving. Based on clinical studies, the SH model predicts the existence of three distinct forms of neuroplasticity that are responsible for withdrawal, tolerance and the resolution of withdrawal. Over the past decade, many controversial aspects of the SH model have become well established by the literature, while some details have been disproven. Here we update the model based on new studies showing that nicotine dependence develops through a set sequence of symptoms in all smokers, and that the latency to withdrawal, the time it takes for withdrawal symptoms to appear during abstinence, is initially very long but shortens by several orders of magnitude over time. We conclude by outlining directions for future research based on the updated model, and commenting on how new experimental studies can gain from the framework put forth in the SH model.
Neuroadaptation in Nicotine Addiction: Update on the Sensitization-Homeostasis Model
Directory of Open Access Journals (Sweden)
Jean King
2012-10-01
Full Text Available The role of neuronal plasticity in supporting the addictive state has generated much research and some conceptual theories. One such theory, the sensitization-homeostasis (SH) model, postulates that nicotine suppresses craving circuits, and this triggers the development of homeostatic adaptations that autonomously support craving. Based on clinical studies, the SH model predicts the existence of three distinct forms of neuroplasticity that are responsible for withdrawal, tolerance and the resolution of withdrawal. Over the past decade, many controversial aspects of the SH model have become well established by the literature, while some details have been disproven. Here we update the model based on new studies showing that nicotine dependence develops through a set sequence of symptoms in all smokers, and that the latency to withdrawal, the time it takes for withdrawal symptoms to appear during abstinence, is initially very long but shortens by several orders of magnitude over time. We conclude by outlining directions for future research based on the updated model, and commenting on how new experimental studies can gain from the framework put forth in the SH model.
Relativistic effects in model calculations of double parton distribution function
Rinaldi, Matteo
2016-01-01
In this paper we consider double parton distribution functions (dPDFs), which are the main non-perturbative ingredients appearing in the double parton scattering cross section formula in hadronic collisions. By using recent calculations of dPDFs by means of constituent quark models within the so-called light-front (LF) approach, we investigate the role of relativistic effects on dPDFs. We find, in particular, that the so-called Melosh operators, which allow one to properly convert the LF spin into the canonical one and incorporate a proper treatment of boosts, produce sizeable effects on dPDFs. We discuss specific partonic correlations induced by these operators in the transverse plane, which are relevant to the proton structure, and study under which conditions these results are stable against variations in the choice of the proton wave function.
Observations, Thermochemical Calculations, and Modeling of Exoplanetary Atmospheres
Blecic, Jasmina
2016-01-01
This dissertation as a whole aims to provide means to better understand hot-Jupiter planets through observing, performing thermochemical calculations, and modeling their atmospheres. We used Spitzer multi-wavelength secondary-eclipse observations and targets with high signal-to-noise ratios, as their deep eclipses allow us to detect signatures of spectral features and assess planetary atmospheric structure and composition with greater certainty. Chapter 1 gives a short introduction. Chapter 2 presents the Spitzer secondary-eclipse analysis and atmospheric characterization of WASP-14b. WASP-14b is a highly irradiated, transiting hot Jupiter. By applying a Bayesian approach in the atmospheric analysis, we found an absence of thermal inversion contrary to theoretical predictions. Chapter 3 describes the infrared observations of WASP-43b Spitzer secondary eclipses, data analysis, and atmospheric characterization. WASP-43b is one of the closest-orbiting hot Jupiters, orbiting one of the coolest stars with a hot Ju...
Quantum plasmonics: from jellium models to ab initio calculations
Directory of Open Access Journals (Sweden)
Varas Alejandro
2016-08-01
Full Text Available Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond the classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is a nanoparticle dimer, extensively studied using both classical and quantum prescriptions. However, only very recently, fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers have been carried out. Here, we review the recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool to simulate the time evolution of plasmonic modes, providing fundamental insight into the underlying microscopic mechanisms.
The EOSTA model for opacities and EOS calculations
Barshalom, Avraham; Oreg, Joseph
2007-11-01
The EOSTA model developed recently combines the STA and INFERNO models to calculate opacities and EOS on the same footing. The quantum treatment of the plasma continuum and the inclusion of the resulting shape resonances yield a smooth behavior of the global EOS and opacity quantities vs. density and temperature. We will describe the combined model and focus on its latest improvements. In particular, we have extended the use of the special representation of the relativistic virial theorem to obtain an exact differential equation for the free energy. This equation, combined with a boundary condition at the zero-pressure point, serves to advance the LDA EOS results significantly. The method focuses on applicability to high-temperature and high-density plasmas, warm dense matter, etc., but applies at low temperatures as well, treating fluids and even solids. Excellent agreement is obtained with experiments covering a wide range of density and temperature. The code is now used to create EOS and opacity databases for use in hydrodynamic simulations.
Selection of models to calculate the LLW source term
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.
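As a toy illustration of the simplified release mechanisms discussed above, the sketch below leaches a fixed annual fraction of a radioactively decaying inventory; real source-term models couple fluid flow, container degradation and transport, and all numbers here are arbitrary:

```python
import math

def annual_release(inventory, leach_fraction, decay_const, years):
    """First-order sketch: each year the inventory decays radioactively,
    then a fixed fraction of what remains leaches out of the wasteform."""
    remaining = inventory
    releases = []
    for _ in range(years):
        remaining *= math.exp(-decay_const)   # radioactive decay over one year
        out = remaining * leach_fraction      # fractional leach release
        remaining -= out
        releases.append(out)
    return releases

# Hypothetical: 1000 Ci inventory, 5%/yr leach fraction, lambda = 0.01/yr
rel = annual_release(inventory=1000.0, leach_fraction=0.05,
                     decay_const=0.01, years=10)
```

The release rate declines over time because both decay and prior leaching deplete the remaining inventory, the qualitative behaviour a performance assessment tracks.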
Current Developments in Dementia Risk Prediction Modelling: An Updated Systematic Review.
Directory of Open Access Journals (Sweden)
Eugene Y H Tang
Full Text Available Accurate identification of individuals at high risk of dementia influences clinical care, inclusion criteria for clinical trials and development of preventative strategies. Numerous models have been developed for predicting dementia. To evaluate these models we undertook a systematic review in 2010 and updated this in 2014 due to the increase in research published in this area. Here we include a critique of the variables selected for inclusion and an assessment of model prognostic performance. Our previous systematic review was updated with a search from January 2009 to March 2014 in electronic databases (MEDLINE, Embase, Scopus, Web of Science). Articles examining risk of dementia in non-demented individuals and including measures of sensitivity, specificity or the area under the curve (AUC) or c-statistic were included. In total, 1,234 articles were identified from the search; 21 articles met inclusion criteria. New developments in dementia risk prediction include the testing of non-APOE genes, use of non-traditional dementia risk factors, incorporation of diet, physical function and ethnicity, and model development in specific subgroups of the population, including individuals with diabetes and those with different educational levels. Four models have been externally validated. Three studies considered time or cost implications of computing the model. There is no one model that is recommended for dementia risk prediction in population-based settings. Further, it is unlikely that one model will fit all. Consideration of the optimal features of new models should focus on methodology (setting/sample, model development and testing in a replication cohort) and the acceptability and cost of attaining the risk variables included in the prediction score. Further work is required to validate existing models or develop new ones in different populations as well as determine the ethical implications of dementia risk prediction, before applying the particular
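The AUC (c-statistic) used to compare the reviewed models equals the probability that a randomly chosen case scores higher than a randomly chosen non-case (the Mann-Whitney formulation). A self-contained sketch with made-up risk scores:

```python
def auroc(scores_pos, scores_neg):
    """Mann-Whitney form of the AUC: fraction of (case, non-case) score
    pairs ranked correctly, counting ties as one half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores for incident-dementia and dementia-free groups
auc = auroc([0.9, 0.7, 0.6], [0.4, 0.6, 0.2])
```

An AUC of 0.5 means the score is no better than chance; 1.0 means perfect discrimination, which frames values such as the 62.6%-68.9% range quoted in the periodontal study above.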
Directory of Open Access Journals (Sweden)
Marcin Luczak
2014-01-01
Full Text Available This paper presents selected results and aspects of the multidisciplinary and interdisciplinary research oriented toward the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the conducted research is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency difference and modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design-of-experiment and response surface methods were implemented to find values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
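The modal assurance criterion (MAC) used above to assess discrepancies between measured and simulated mode shapes has a standard closed form; a minimal real-valued sketch:

```python
def mac(phi_a, phi_x):
    """MAC between an analytical and an experimental mode shape vector:
    |phi_a . phi_x|^2 / ((phi_a . phi_a) * (phi_x . phi_x)).
    Equals 1 for identical shapes (up to scaling), 0 for orthogonal ones."""
    dot = sum(a * x for a, x in zip(phi_a, phi_x))
    return dot ** 2 / (sum(a * a for a in phi_a) * sum(x * x for x in phi_x))
```

For example, `mac([1, 2, 3], [2, 4, 6])` returns 1.0 because the two shapes differ only by a scale factor, which is why MAC complements the natural-frequency difference as an updating metric.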
A Review of the Updated Pharmacophore for the Alpha 5 GABA(A) Benzodiazepine Receptor Model
Directory of Open Access Journals (Sweden)
Terry Clayton
2015-01-01
Full Text Available An updated model of the GABA(A) benzodiazepine receptor pharmacophore of the α5-BzR/GABA(A) subtype has been constructed, prompted by the synthesis of subtype-selective ligands in light of recent developments in ligand synthesis, behavioral studies, and molecular modeling studies of the binding site itself. A number of BzR/GABA(A) α5 subtype-selective compounds were synthesized, notably the α5-subtype-selective inverse agonist PWZ-029 (1), which is active in enhancing cognition in both rodents and primates. In addition, a chiral positive allosteric modulator (PAM), SH-053-2′F-R-CH3 (2), has been shown to reverse the deleterious effects in the MAM model of schizophrenia as well as alleviate constriction in airway smooth muscle. Presented here is an updated model of the pharmacophore for α5β2γ2 Bz/GABA(A) receptors, including a rendering of PWZ-029 docked within the α5-binding pocket showing specific interactions of the molecule with the receptor. Differences in the included volume as compared to α1β2γ2, α2β2γ2, and α3β2γ2 will be illustrated for clarity. These new models enhance the ability to understand structural characteristics of ligands which act as agonists, antagonists, or inverse agonists at the Bz BS of GABA(A) receptors.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
Energy Technology Data Exchange (ETDEWEB)
Parmeggiani, Stefano [Wave Dragon Ltd., London (United Kingdom); Kofoed, Jens Peter [Aalborg Univ. (Denmark). Department of Civil Engineering; Friis-Madsen, Erik [Wave Dragon Ltd., London (United Kingdom)
2013-04-15
An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible enough to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for Wave Dragon increased, as the updated model provides improved accuracy and precision with respect to the former version.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
Directory of Open Access Journals (Sweden)
Erik Friis-Madsen
2013-04-01
Full Text Available An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible enough to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for Wave Dragon increased, as the updated model provides improved accuracy and precision with respect to the former version.
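A generic low-crested-structure overtopping formula of the kind being updated has the dimensionless form q/√(gHs³) = A·exp(B·Rc/Hs), scaled by correction factors. The coefficients below are generic literature-style values, not the Wave Dragon fit, and the multiplicative factor structure is an assumption for illustration:

```python
import math

def overtopping_discharge(rc, hs, a=0.2, b=-2.6, factors=(), g=9.81):
    """Mean overtopping discharge per metre of crest [m^3/s/m]:
    exponential fit in relative freeboard Rc/Hs, multiplied by custom
    correction factors (steepness, 3D geometry, reflectors, motions)."""
    q_star = a * math.exp(b * rc / hs)     # dimensionless discharge
    for f in factors:
        q_star *= f
    return q_star * math.sqrt(g * hs ** 3)

q_low_crest = overtopping_discharge(rc=0.5, hs=2.0)
q_high_crest = overtopping_discharge(rc=1.5, hs=2.0)
```

Lowering the crest freeboard Rc relative to the wave height Hs increases the captured discharge, which is the trade-off Wave Dragon exploits as an overtopping converter.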
DEFF Research Database (Denmark)
Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars
Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models with a focus on time-changes of the core field. As for previous CHAOS field models quiet-time, night-side, data selection criteria … th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model …
[Model calculation to explain the BSE-incidence in Germany].
Oberthür, Radulf C
2004-01-01
The future development of the BSE incidence in Germany is investigated using a simple epidemiological model calculation. The starting points are the development of the incidence of confirmed suspect BSE cases in Great Britain since 1988, the hitherto known mechanisms of transmission and the measures taken to decrease the risk of transmission, as well as the development of the BSE incidence in Germany obtained from active post-mortem laboratory testing of all cattle older than 24 months. The risk of transmission is characterized by the reproduction ratio of the disease. There is a shift in time between the risk of BSE transmission and the BSE incidence caused by the incubation time of more than 4 years. The observed decrease of the incidence in Germany from 2001 to 2003 is not a consequence of the measures taken at the end of 2000 to contain the disease. It can rather be explained by an import of BSE-contaminated products from countries with a high BSE incidence in the years 1995/96 being used in calf feeding in Germany. From the future course of the BSE incidence in Germany after 2003, a quantification of the recycling rate of BSE-infected material within Germany before the end of 2000 will be possible by use of the proposed model, if the active surveillance is continued.
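The lagged relationship between the reproduction ratio and observed incidence can be mimicked with a toy recursion in which cases surface one incubation period after infection; all numbers are illustrative, not the paper's calibration:

```python
def incidence(r_by_year, infections0, incubation=4):
    """Toy BSE dynamics: infections in year t+1 equal infections in
    year t times that year's reproduction ratio R; observed incidence
    lags the infections by the incubation time (in years)."""
    infections = [infections0]
    for r in r_by_year:
        infections.append(infections[-1] * r)
    return [0.0] * incubation + infections

# R < 1 in every year, e.g. after control measures take effect
obs = incidence([0.5, 0.5, 0.5], infections0=100.0)
```

With R below one the observed incidence declines, but only after the incubation delay, which is why measures taken at the end of 2000 cannot explain a decrease already seen in 2001-2003.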
Comparative analysis of calculation models of railway subgrade
Directory of Open Access Journals (Sweden)
I.O. Sviatko
2013-08-01
Full Text Available Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behavior under loads. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The aim is to find generalized numerical modeling methods for embankment foundation soil behavior that include not only the analysis of the foundation stress state but also of its deformed state. Methodology. An analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings. According to traditional methods of analysis of ground masses, limitation and qualitative estimation of subgrade deformations is possible only indirectly, through the estimation of stresses and comparison of the obtained values with the boundary ones. Originality. A new computational model was proposed in which not only the classical analysis of the soil subgrade stress state is applied, but the deformed state is also taken into account. Practical value. The analysis showed that for accurate analysis of ground masses it is necessary to develop a generalized methodology for analyzing the rolling stock - railway subgrade interaction, which will use not only the classical approach of analyzing the soil subgrade stress state, but also take its deformed state into account.
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic
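The core of the estimation loop above, Gauss-Newton updates driven by response sensitivities with the Cramer-Rao lower bound read off from the Fisher information, can be sketched for a single parameter. The finite-difference sensitivity here stands in for the exact DDM sensitivities of the paper, and the "FE model" is a trivial linear stand-in:

```python
def fit_parameter(predict, measured, theta0, sigma, steps=20, h=1e-6):
    """One-parameter extended-ML sketch: Gauss-Newton iterations using a
    finite-difference response sensitivity; returns the estimate and its
    Cramer-Rao lower bound variance 1 / Fisher."""
    theta = theta0
    for _ in range(steps):
        y = predict(theta)
        s = [(a - b) / h for a, b in zip(predict(theta + h), y)]  # dy/dtheta
        num = sum(si * (m - yi) for si, m, yi in zip(s, measured, y))
        den = sum(si * si for si in s)
        theta += num / den                     # Gauss-Newton update
    fisher = sum(si * si for si in s) / sigma ** 2
    return theta, 1.0 / fisher                 # CRLB on var(theta)

# Toy "FE model" with true stiffness-like parameter 2.0 and noiseless data
theta_hat, crlb_var = fit_parameter(lambda k: [k, 2 * k, 3 * k],
                                    [2.0, 4.0, 6.0], theta0=1.0, sigma=0.1)
```

For this linear toy problem the iteration converges in a single step; nonlinear FE response functions are what make the repeated linearization, and accurate sensitivities, necessary.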
Updated Hungarian Gravity Field Solution Based on Fifth Generation GOCE Gravity Field Models
Toth, Gyula; Foldvary, Lorant
2015-03-01
With the completion of ESA's GOCE satellite mission, fifth-generation gravity field models are available from ESA's GOCE High-level Processing Facility. Our contribution is an updated gravity field solution for Hungary using the latest DIR R05 GOCE gravity field model. The solution methodology is least squares gravity field parameter estimation using Spherical Radial Base Functions (SRBF). Regional datasets include deflections of the vertical (DOV), gravity anomalies and quasigeoid heights from GPS/levelling. The GOCE DIR R05 model has been combined with the EGM2008 model and evaluated in comparison with the EGM2008 and EIGEN-6C3stat models to assess the performance of our regional gravity field solution.
Relative Binding Free Energy Calculations Applied to Protein Homology Models.
Cappel, Daniel; Hall, Michelle Lynn; Lenselink, Eelke B; Beuming, Thijs; Qi, Jun; Bradner, James; Sherman, Woody
2016-12-27
A significant challenge and potential high-value application of computer-aided drug design is the accurate prediction of protein-ligand binding affinities. Free energy perturbation (FEP) using molecular dynamics (MD) sampling is among the most suitable approaches to achieve accurate binding free energy predictions, due to the rigorous statistical framework of the methodology, correct representation of the energetics, and thorough treatment of the important degrees of freedom in the system (including explicit waters). Recent advances in sampling methods and force fields coupled with vast increases in computational resources have made FEP a viable technology to drive hit-to-lead and lead optimization, allowing for more efficient cycles of medicinal chemistry and the possibility to explore much larger chemical spaces. However, previous FEP applications have focused on systems with high-resolution crystal structures of the target as starting points, something that is not always available in drug discovery projects. As such, the ability to apply FEP on homology models would greatly expand the domain of applicability of FEP in drug discovery. In this work we apply a particular implementation of FEP, called FEP+, on congeneric ligand series binding to four diverse targets: a kinase (Tyk2), an epigenetic bromodomain (BRD4), a transmembrane GPCR (A2A), and a protein-protein interaction interface (BCL-2 family protein MCL-1). We apply FEP+ using both crystal structures and homology models as starting points and find that the performance using homology models is generally on a par with the results when using crystal structures. The robustness of the calculations to structural variations in the input models can likely be attributed to the conformational sampling in the molecular dynamics simulations, which allows the modeled receptor to adapt to the "real" conformation for each ligand in the series. This work exemplifies the advantages of using all-atom simulation methods with
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
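The Bayesian rating of candidate source configurations can be illustrated with a deliberately simplified sketch. The linear forward "transport model", the candidate grid, and the measurement values below are illustrative assumptions, not the project's actual transport codes or data.

```python
import numpy as np

# Hedged sketch of the probabilistic holdup inversion: rate candidate
# source configurations by their posterior probability given a measured
# detector response (Bayes' theorem over a discretized hypothesis space).
def forward(mass):                        # toy forward transport model
    return 2.0 * mass                     # predicted detector response

candidates = np.linspace(0.0, 5.0, 51)    # possible holdup masses (kg)
prior = np.ones_like(candidates) / candidates.size   # uniform prior
measured, sigma = 4.2, 0.5                # measured response and its noise

likelihood = np.exp(-0.5 * ((forward(candidates) - measured) / sigma) ** 2)
posterior = likelihood * prior
posterior /= posterior.sum()              # normalize (Bayes' theorem)

best = candidates[np.argmax(posterior)]   # most plausible configuration
```

Because the inverse problem is under-determined, the posterior, not just its mode, is the answer: several configurations may carry comparable probability, and the spread quantifies confidence in the inferred holdup.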
Groundwater flow modelling under ice sheet conditions. Scoping calculations
Energy Technology Data Exchange (ETDEWEB)
Jaquet, O.; Namar, R. (In2Earth Modelling Ltd (Switzerland)); Jansson, P. (Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden))
2010-10-15
The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheet and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, the understanding of groundwater flow patterns in connection with glaciations is an important issue for long-term geological disposal. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the
Institute of Scientific and Technical Information of China (English)
LIN Xiankun; LI Yanjun; LI Haolin
2014-01-01
Linear motors generate high heat and cause significant deformation in high-speed direct feed drive mechanisms. It is relevant to estimate their deformation behavior to improve their application in precision machine tools. This paper describes a method to estimate this thermal deformation based on updated finite element (FE) model methods. Firstly, an FE model is established for a linear motor drive test rig that includes the correlation between temperature rise and its resulting deformation. The relationship between the input and output variables of the FE model is identified with a modified multivariate input/output least squares support vector regression machine. Additionally, the temperature rise and displacements at some critical points on the mechanism are obtained experimentally by a system of thermocouples and an interferometer. The FE model is updated through intelligent comparison between the experimentally measured values and the results from the regression machine. The experiments for testing thermal behavior, along with the updated FE model simulations, are conducted on the test rig under reciprocating cycle drive conditions. The results show that the intelligently updated FE model can be implemented to analyze the temperature variation distribution of the mechanism and to estimate its thermal behavior. The accuracy of the thermal behavior estimation with the optimally updated method can be more than double that of the initial theoretical FE model. This paper provides a simulation method that is effective for estimating the thermal behavior of the direct feed drive mechanism with high accuracy.
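The surrogate-based calibration loop described above can be sketched in miniature, with an ordinary polynomial fit standing in for the least squares support vector regression machine. The quadratic "FE model" and the measured value are illustrative assumptions, not the paper's rig.

```python
import numpy as np

# Hedged sketch of surrogate-based FE model updating: replace the costly
# FE thermal run with a cheap regression surrogate, then calibrate the
# model parameter k so the surrogate matches a measured displacement.
def fe_model(k):                 # stand-in for an expensive FE thermal run
    return 0.5 * k**2 + 1.0      # displacement as a function of parameter k

k_train = np.linspace(0.0, 4.0, 9)
y_train = fe_model(k_train)
coeffs = np.polyfit(k_train, y_train, deg=2)   # fit the response surface
surrogate = np.poly1d(coeffs)                  # cheap surrogate model

measured = 3.0                                 # measured displacement
k_grid = np.linspace(0.0, 4.0, 401)
k_updated = k_grid[np.argmin((surrogate(k_grid) - measured) ** 2)]
```

Once fitted, the surrogate makes the search over parameter values essentially free, which is the efficiency argument behind regression-machine updating.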
Finite element model updating of concrete structures based on imprecise probability
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise probability based methods are developed in this study for parameter estimation in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model within the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
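A minimal Metropolis-Hastings sampler for a single updating parameter can be sketched as below, using ordinary (precise) Gaussian prior and likelihood rather than the paper's imprecise-probability generalization; the data value and widths are illustrative.

```python
import numpy as np

# Hedged sketch of Metropolis-Hastings for Bayesian model updating of one
# stiffness-like parameter theta, given a single "measured" response y.
rng = np.random.default_rng(0)

def log_post(theta, y=2.0, sigma=0.2):
    log_like = -0.5 * ((theta - y) / sigma) ** 2   # Gaussian likelihood
    log_prior = -0.5 * (theta / 10.0) ** 2         # broad Gaussian prior
    return log_like + log_prior

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.5)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                               # accept the proposal
    samples.append(theta)                          # else keep current state

posterior_mean = np.mean(samples[1000:])           # discard burn-in
```

The imprecise generalization in the paper effectively replaces the single prior and likelihood here with sets of distributions, yielding bounds on posterior quantities instead of point summaries.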
The updated geodetic mean dynamic topography model – DTU15MDT
DEFF Research Database (Denmark)
Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai
An update to the global mean dynamic topography model DTU13MDT is presented. For DTU15MDT the newer gravity model EIGEN-6C4 has been combined with the DTU15MSS mean sea surface model to construct this global mean dynamic topography model. The EIGEN-6C4 is derived using the full series of GOCE data...... re-tracked CRYOSAT-2 altimetry also, hence, increasing its resolution. Also, some issues in the Polar regions have been solved. Finally, the filtering was re-evaluated by adjusting the quasi-gaussian filter width to optimize the fit to drifter velocities. Subsequently, geostrophic surface currents...... were derived from the DTU15MDT. The results show that geostrophic surface currents associated with the mean circulation have been further improved and that currents having speeds down to below 4 cm/s have been recovered....
From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means
Directory of Open Access Journals (Sweden)
Pierre Pénet
2014-06-01
This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: ‘simulation-hypotheses’, which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and ‘simulation-fictions’, which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using ratings reports published by Moody’s on Greece and European Central Bank (ECB) regulations, we show that Moody’s stuck to a simulation-fiction and displayed rating inertia on Greece’s trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody’s in June 2010 followed the ECB’s decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from ‘regulation through model’ to ‘regulation through contract’, ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the CRAs’ models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.
Full waveform modelling and misfit calculation using the VERCE platform
Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas
2016-04-01
simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.
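A least-squares waveform misfit of the kind such platforms compute between simulated and recorded traces can be sketched with toy signals; the traces below are synthetic examples, not VERCE data or Pyflex output.

```python
import numpy as np

# Hedged sketch of an L2 waveform misfit between an observed seismogram
# and a synthetic one carrying a small phase error.
t = np.linspace(0.0, 10.0, 1001)                    # time axis (s)
observed = np.sin(2 * np.pi * 0.5 * t)              # "recorded" trace
synthetic = np.sin(2 * np.pi * 0.5 * t + 0.1)       # simulated, phase-shifted

misfit = 0.5 * np.sum((synthetic - observed) ** 2)  # L2 waveform misfit
```

In full-waveform inversion this scalar (summed over stations and components, often after windowing) is the objective whose gradient drives updates to the velocity model or source parameters.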
Coarse grained model for calculating the ion mobility of hydrocarbons
Kuroboshi, Y.; Takemura, K.
2016-12-01
Hydrocarbons are widely used as insulating compounds. However, their fundamental characteristics in conduction phenomena are not completely understood. A great deal of effort is required to determine reasonable ionic behavior from experiments because of their complicated procedures and tight controls of the temperature and the purity of the liquids. In order to understand the conduction phenomena, we have theoretically calculated the ion mobilities of hydrocarbons and investigated their characteristics using a coarse-grained model in molecular dynamics simulations. We assumed a molecule of hydrocarbons to be a bead and simulated its dependence on the viscosity, electric field, and temperature. Furthermore, we verified the suitability of the conformation, scale size, and long-range interactions for the ion mobility. The results of the simulations show that the ion mobility values agree reasonably well with the values from Walden's rule and depend on the viscosity but not on the electric field. The ion mobility and self-diffusion coefficient exponentially increase with increasing temperature, while the activation energy decreases with increasing molecular size. These values and characteristics of the ion mobility are in reasonable agreement with experimental results. In the future, molecular dynamics simulations can be used not only to understand the ion mobilities of hydrocarbons in conduction, but also to predict general phenomena in electrochemistry.
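Walden's rule, against which the simulated mobilities are checked, states that the product of ion mobility and viscosity is roughly constant for a given ion; a small sketch with illustrative numbers (not measured hydrocarbon data):

```python
# Hedged sketch of Walden's rule: mu * eta ~ const, so the mobility in a
# liquid of viscosity eta follows from a reference (mobility, viscosity) pair.
def walden_mobility(mu_ref, eta_ref, eta):
    """Estimate ion mobility at viscosity eta from a reference pair."""
    return mu_ref * eta_ref / eta

mu_ref = 1.0e-8    # m^2/(V s), mobility in a reference hydrocarbon (assumed)
eta_ref = 1.0e-3   # Pa s, reference viscosity (assumed)
mu = walden_mobility(mu_ref, eta_ref, eta=2.0e-3)   # doubled viscosity
```

Doubling the viscosity halves the predicted mobility, which is exactly the viscosity dependence (and electric-field independence) the simulations report.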
MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model
James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert
2008-01-01
The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approx. 84.5%) and alpha particles (approx. 14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport of over 2200 ion types of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.
Rotating shaft model updating from modal data by a direct energy approach : a feasibility study
Energy Technology Data Exchange (ETDEWEB)
Audebert, S. [Electricite de France (EDF), 75 - Paris (France). Direction des Etudes et Recherches; Girard, A.; Chatelain, J. [Intespace - Division Etudes et Recherche, 31 - Toulouse (France)
1996-12-31
Investigations to improve rotating machinery monitoring tend more and more to use numerical models. The aim is to obtain fluid-bearing rotor models which are able to correctly represent their dynamic behaviour, of either modal or forced-response type. The possibility of extending the direct energy method, initially developed for undamped structures, to rotating machinery is studied. It is based on the minimization of the kinetic and strain energy gap between experimental and analytic modal data. The preliminary determination of the eigenmodes of a multi-linear bearing rotor system shows the complexity of the problem in comparison with undamped non-rotating structures: taking into account gyroscopic effects and bearing damping, as functions of rotor velocity, leads to complex-component eigenmodes; moreover, non-symmetric matrices, related to stiffness and damping bearing contributions, induce distinct left- and right-hand side eigenmodes (left-hand side eigenmodes correspond to the adjoint structure). Theoretically, the extension of the energy method is studied, considering first the intermediate case of an undamped non-gyroscopic structure, and second the general case of a rotating shaft; the data used for the updating procedure are eigenfrequencies and left- and right-hand side mode shapes. Since left-hand side mode shapes cannot be directly measured, they are replaced by analytic ones. The method is tested on a two-bearing rotor system with an added mass; simulated data are used, relative to a non-compatible structure, i.e. one which is not part of the set of modified analytic possible structures. Parameters to be corrected are the mass density, the Young's modulus, and the linearized stiffness and damping characteristics of the bearings. If parameters are influential with regard to the modes to be updated, the updating method permits a significant improvement of the gap between analytic and experimental modes, even for modes not involved in the procedure. Modal damping appears to be more
An All-Time-Domain Moving Object Data Model, Location Updating Strategy, and Position Estimation
National Research Council Canada - National Science Library
Wu, Qunyong; Huang, Junyi; Luo, Jianping; Yang, Jianjun
2015-01-01
.... Secondly, we proposed a new dynamic threshold location updating strategy. The location updating threshold was given dynamically in accordance with the velocity, accuracy, and azimuth positioning information from the GPS...
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
Energy Technology Data Exchange (ETDEWEB)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
THE SCHEME FOR THE DATABASE BUILDING AND UPDATING OF 1:10 000 DIGITAL ELEVATION MODELS
Institute of Scientific and Technical Information of China (English)
None
2000-01-01
The National Bureau of Surveying and Mapping of China has planned to speed up the development of spatial data infrastructure (SDI) in the coming few years. This SDI consists of four types of digital products, i.e., digital orthophotos, digital elevation models, digital line graphs and digital raster graphs. For the DEM, a scheme for the database building and updating of 1:10 000 digital elevation models has been proposed and some experimental tests have also been accomplished. This paper describes the theoretical (and/or technical) background and reports some of the experimental results to support the scheme. Various aspects of the scheme such as accuracy, data sources, data sampling, spatial resolution, terrain modeling, data organization, etc. are discussed.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
DEFF Research Database (Denmark)
Parmeggiani, Stefano; Kofoed, Jens Peter; Friis-Madsen, Erik
2013-01-01
An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible to adapt to any local conditions and device configuration....... An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection...... of which can be measured in real-time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. Predictions reliability of overtopping over Wave Dragon...
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids loading into the Great Salt Lake (GSL) for water year 2013 with estimates of previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
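A LOADEST-style workflow, reduced to its core idea of a log-log rating-curve regression integrated over daily flows, can be sketched as follows; all values are synthetic illustrations, not GSL data, and the real LOADEST models also include time and seasonal terms.

```python
import numpy as np

# Hedged sketch of a rating-curve load estimate: regress log(load) on
# log(discharge) from sparse samples, then apply the fitted curve to a
# year of daily discharges and sum to an annual loading.
rng = np.random.default_rng(1)
q_samples = rng.uniform(10.0, 100.0, 30)             # sampled discharge
true_loads = 0.8 * q_samples ** 1.2                  # assumed "true" relation
loads = true_loads * np.exp(rng.normal(0, 0.1, 30))  # noisy sampled loads

b, a = np.polyfit(np.log(q_samples), np.log(loads), 1)  # log-log regression

q_daily = rng.uniform(10.0, 100.0, 365)              # a year of daily flows
annual_load = np.sum(np.exp(a) * q_daily ** b)       # estimated annual load
```

The confidence-interval behavior noted in the abstract falls out of this structure: more samples tighten the regression coefficients, which tightens the integrated annual estimate.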
Updated Life-Cycle Assessment of Aluminum Production and Semi-fabrication for the GREET Model
Energy Technology Data Exchange (ETDEWEB)
Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod C. [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-01
This report serves as an update for the life-cycle analysis (LCA) of aluminum production based on the most recent data representing the state-of-the-art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semifinished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production, secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion and shape casting. This report focuses on energy consumption, material inputs, and criteria air pollutant emissions for each process from the cradle-to-gate of aluminum, which starts from bauxite extraction and ends with manufacturing of semi-fabricated aluminum products. The life-cycle inventory (LCI) tables compiled are to be incorporated into the vehicle cycle model of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.
Update of structural models at SFR nuclear waste repository, Forsmark, Sweden
Energy Technology Data Exchange (ETDEWEB)
Axelsson, C.L.; Maersk Hansen, L. [Golder Associates AB (Sweden)
1997-12-01
The final repository for low and medium-level waste, SFR, is located below the Baltic, off Forsmark. A number of geo-scientific investigations have been performed and used to design a conceptual model of the fracture system, to be used in hydraulic modeling for a performance assessment study of the SFR facility in 1987. An updated study was reported in 1993. No formal basic revision of the original conceptual model of the fracture system around SFR has so far been made. During review, uncertainties in the model of the fracture system were found. The previous local structure model is reviewed and an alternative model is presented together with evidence for the new interpretation. The model is based on review of geophysical data, geological mapping, corelogs, hydraulic testing, water inflow etc. The fact that two different models can result from the same data represents an interpretation uncertainty which cannot be resolved without more data and basic interpretations of such data. Further refinement of the structure model could only be motivated in case the two different models discussed here would lead to significantly different consequences. 20 refs, 24 figs, 16 tabs
Entropy in spin foam models: the statistical calculation
Energy Technology Data Exchange (ETDEWEB)
Garcia-Islas, J Manuel, E-mail: jmgislas@leibniz.iimas.unam.m [Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, UNAM, A. Postal 20-726, 01000, Mexico DF (Mexico)
2010-07-21
Recently an idea for computing the entropy of black holes in the spin foam formalism has been introduced. Particularly complete calculations for the three-dimensional Euclidean BTZ black hole were performed. The whole calculation is based on observables living at the horizon of the black hole universe. Departing from this idea of observables living at the horizon, we now go further and compute the entropy of the BTZ black hole in the spirit of statistical mechanics. We compare both calculations and show that they are very interrelated and equally valid. This latter behaviour is certainly due to the importance of the observables.
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
Directory of Open Access Journals (Sweden)
I. Wohltmann
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect
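The flavor of such a vortex-averaged box model can be sketched with two coupled rate equations stepped forward in time; the species pair, rate constants, and forcings below are illustrative placeholders, not Polar SWIFT's actual parameterizations.

```python
# Hedged sketch of a Polar SWIFT-style box model: vortex-averaged O3 and
# active chlorine (ClOx) evolved by coupled ODEs with a forward Euler step.
def step(o3, clox, sunlit_frac, psc_frac, dt=1.0):
    activation = 0.05 * psc_frac * (1.0 - clox)   # Cl activation on PSCs
    deactivation = 0.02 * clox                    # return to reservoirs
    loss = 0.01 * sunlit_frac * clox * o3         # catalytic ozone loss
    return o3 - loss * dt, clox + (activation - deactivation) * dt

o3, clox = 3.0, 0.0                               # initial state (O3 in ppm)
for day in range(120):                            # one polar winter
    o3, clox = step(o3, clox, sunlit_frac=0.3, psc_frac=0.5)
```

The two driving inputs mirror the model's external parameters: the sunlit fraction gates the catalytic loss, and the fraction of the vortex cold enough for polar stratospheric clouds gates chlorine activation.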
Design parameter oriented model updating method using additional masses
Institute of Scientific and Technical Information of China (English)
李斌; 杨智春; 王乐; 付永辉
2009-01-01
As there are too many unknown variables which need to be updated in the model updating method using additional masses, the updating equation constructed by this method is underdetermined, and the updating accuracy cannot meet the requirements of engineering applications. A design parameter oriented model updating method was presented to improve the accuracy of this method. Taylor series expansions of the system matrices with respect to the design parameters were adopted, and the sensitivity influence matrices of stiffness and mass were derived; the updating objects were thus converted to design parameters. A straight wing model and a stepped-section beam were used as numerical examples to verify the improved method. For the two examples, local stiffness errors in the range of 30% to 50% were introduced into the initial finite element models. The calculation errors obtained by the updated finite element models were less than 1% for both examples. The results showed that the presented method can largely reduce the number of unknown variables, that only a limited number of measured modes are needed, and that the updating accuracy is better than that of the original method; furthermore, the physical interpretation of the updating results is clearer than in the original method.
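The first-order sensitivity formulation described above can be sketched on a two-degree-of-freedom example; the stiffness matrix, its parameter sensitivities, and the "measured" target matrix below are illustrative, not the paper's wing or beam models.

```python
import numpy as np

# Hedged sketch of design-parameter updating: a first-order Taylor expansion
# K(p) ~ K0 + sum_j (dK/dp_j) * dp_j turns matrix updating into a small
# least-squares problem for the parameter corrections dp.
K0 = np.array([[2.0, -1.0],
               [-1.0, 1.0]])                 # initial stiffness matrix
dK1 = np.array([[1.0, 0.0], [0.0, 0.0]])     # sensitivity to parameter 1
dK2 = np.array([[0.0, 0.0], [0.0, 1.0]])     # sensitivity to parameter 2
K_test = np.array([[2.4, -1.0],
                   [-1.0, 0.8]])             # "measured" stiffness target

# Stack the sensitivities as columns and solve for dp in least squares.
A = np.column_stack([dK1.ravel(), dK2.ravel()])
dp, *_ = np.linalg.lstsq(A, (K_test - K0).ravel(), rcond=None)
K_updated = K0 + dp[0] * dK1 + dp[1] * dK2
```

Because the unknowns are a few physical design parameters rather than every matrix entry, the system stays well-posed with only a limited set of measured modes, which is the method's central advantage over entry-wise updating.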
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Several replacement models have been presented in the literature. In other application areas, such as dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.
A Traffic Information Estimation Model Using Periodic Location Update Events from Cellular Network
Lin, Bon-Yeh; Chen, Chi-Hua; Lo, Chi-Chun
In recent years, considerable concern has arisen over building Intelligent Transportation Systems (ITS), which focus on efficiently managing the road network. One of the important purposes of ITS is to improve the usability of transportation resources so as to extend vehicle durability and reduce fuel consumption and transportation times. Before this goal can be achieved, it is vital to obtain correct and real-time traffic information, so that traffic information services can be provided in a timely and effective manner. Using Mobile Stations (MS) as probes to track vehicle movement is a low-cost and immediate solution for obtaining real-time traffic information. In this paper, we propose a model to analyze the relation between the number of Periodic Location Update (PLU) events and traffic density. Finally, numerical analysis shows that this model is feasible for estimating traffic density.
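The underlying idea is that PLU event counts scale with the number of phones, and hence vehicles, on a road segment. A minimal sketch of such an estimator, where the formula and all parameter names (update period, observation window, phones per vehicle) are our illustrative assumptions, not the paper's exact model:

```python
# If each mobile station issues one periodic location update every period_s
# seconds, the expected vehicle count on a segment follows from the observed
# event count over a window; dividing by segment length gives density.

def estimate_density(plu_events, window_s, period_s, segment_km,
                     phones_per_vehicle=1.0):
    """Estimate vehicles per km from periodic location update counts."""
    vehicles = plu_events * period_s / (window_s * phones_per_vehicle)
    return vehicles / segment_km

# 1800 events in one hour with a 60 s update period on a 2 km segment
density = estimate_density(1800, window_s=3600, period_s=60, segment_km=2.0)
```

This ignores handovers and non-vehicle phones; a real model would have to correct for both.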
Institute of Scientific and Technical Information of China (English)
Nan Liang; Pu-Xun Wu; Zong-Hong Zhu
2011-01-01
We constrain the Cardassian expansion models from the latest observations, including the updated Gamma-ray bursts (GRBs), which are calibrated using a cosmology-independent method from the Union2 compilation of type Ia supernovae (SNe Ia). By combining the GRB data with the joint observations from the Union2 SNe Ia set, along with the results from the Cosmic Microwave Background radiation observation from the seven-year Wilkinson Microwave Anisotropy Probe and the baryonic acoustic oscillation observation galaxy sample from the spectroscopic Sloan Digital Sky Survey Data Release, we find significant constraints on the model parameters of the original Cardassian model, ΩM0 = 0.282+0.015−0.014, n = 0.03+0.05−0.05; and n = −0.16+0.25−3.26, β = −0.76+0.34−0.58 for the modified polytropic Cardassian model, which are consistent with the ΛCDM model in the 1-σ confidence region. From the reconstruction of the deceleration parameter q(z) in Cardassian models, we obtain the transition redshift zT = 0.73 ± 0.04 for the original Cardassian model and zT = 0.68 ± 0.04 for the modified polytropic Cardassian model.
Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data
Lung, Shun-fat; Pak, Chan-gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing a multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
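Matching analytical natural frequencies to measured targets is, at its smallest, a one-dimensional root-finding/optimization problem. A sketch under strong simplifying assumptions (a single-DOF oscillator, bisection in place of the MDAO optimizer, toy values):

```python
import math

# Tune stiffness k of a single-DOF spring-mass system so its analytical
# natural frequency matches a "measured" target. Illustrative stand-in for
# the full MDAO-based FE model tuning described in the abstract.

def natural_freq_hz(k, m):
    return math.sqrt(k / m) / (2.0 * math.pi)

def tune_stiffness(f_target, m, k_lo, k_hi, tol=1e-9):
    """Bisection on k; natural_freq_hz is monotone increasing in k."""
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if natural_freq_hz(k_mid, m) < f_target:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

# "measured" frequency 5 Hz, known mass 2 kg, bracket chosen to contain k
k_updated = tune_stiffness(f_target=5.0, m=2.0, k_lo=1.0, k_hi=1e5)
```

The real problem adds mode-shape matching and mass/stiffness retention as constraints, which is why a general MDAO tool is used instead of a scalar search.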
Institute of Scientific and Technical Information of China (English)
LEE Hyeon-deok; SON Myeong-jo; OH Min-jae; LEE Hyung-woo; KIM Tae-wan
2012-01-01
In early 2000, large domestic shipyards introduced shipbuilding 3D computer-aided design (CAD) to the hull production design process to define manufacturing and assembly information. The production design process accounts for most of the man-hours (M/H) of the entire design process and is closely connected to yard production because designs must take into account the production schedule of the shipyard, the current state of the dock needed to mount the ship's block, and supply information. Therefore, many shipyards are investigating the complete automation of the production design process to reduce the M/H for designers. However, these problems are still currently unresolved, and a clear direction is needed for research on the automatic design base of manufacturing rules, batches reflecting changed building specifications, batch updates of boundary information for hull members, and management of the hull model change history to automate the production design process. In this study, a process was developed to aid production design engineers in designing a new ship's hull block model from that of a similar ship previously built, based on AVEVA Marine. An automation system that uses the similar ship's hull block model is proposed to reduce M/H and human errors by the production design engineer. First, scheme files holding important information were constructed in a database to automatically update hull block model modifications. Second, for batch updates, the database's tables, including building specifications and the referential integrity of a relational database, were compared. In particular, this study focused on reflecting the frequent modification of building specifications and regeneration of boundary information of the adjacent panel due to changes in a specific panel. Third, the rollback function is proposed in which the database (DB) is used to return to the previously designed panels.
Baart, A.M.; Atsma, F.; McSweeney, E.N.; Moons, K.G.; Vergouwe, Y.; Kort, W.L. de
2014-01-01
BACKGROUND: Recently, sex-specific prediction models for low hemoglobin (Hb) deferral have been developed in Dutch whole blood donors. In the present study, we validated and updated the models in a cohort of Irish whole blood donors. STUDY DESIGN AND METHODS: Prospectively collected data from 45,031
Directory of Open Access Journals (Sweden)
J. M. van Wessem
2013-07-01
The physics package of the polar version of the regional atmospheric climate model RACMO2 has been updated from RACMO2.1 to RACMO2.3. The update includes, among other changes, a parameterization for cloud ice supersaturation, an improved turbulent and radiative flux scheme and a changed cloud scheme. In this study the effects of these changes on the modelled near-surface climate of Antarctica are presented. Significant biases remain, but overall RACMO2.3 better represents the near-surface climate in terms of the modelled surface energy balance, based on a comparison with > 750 months of data from nine automatic weather stations located in East Antarctica. Especially the representation of the sensible heat flux and net longwave radiative flux has improved, with a decrease in biases of up to 40%. These improvements are mainly caused by the inclusion of ice supersaturation, which has led to more moisture being transported onto the continent, resulting in more and optically thicker clouds and more downward longwave radiation. As a result, modelled surface temperatures have increased and the bias, when compared to 10 m snow temperatures from 64 ice core observations, has decreased from −2.3 K to −1.3 K. The weaker surface temperature inversion consequently improves the representation of the sensible heat flux, whereas wind speed remains unchanged.
Bayesian updating in a fault tree model for shipwreck risk assessment.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M
2017-03-14
Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made.
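The paper combines two standard ingredients: Bayesian updating of generic hazard information with site-specific data, and Monte Carlo propagation through a fault tree. A toy sketch with made-up numbers (a conjugate Beta-Binomial update and a two-event OR gate), not the paper's actual tree or priors:

```python
import random

# (1) Bayesian updating: start from a generic Beta(1, 1) prior over the
#     probability of hull opening, observe 2 openings in 10 inspections,
#     giving a Beta(3, 9) posterior (mean 0.25).
# (2) Monte Carlo: propagate that uncertain probability through a tiny
#     fault tree whose top event is "opening OR corrosion failure".

random.seed(2)
a, b = 1.0 + 2, 1.0 + 8  # posterior Beta(3, 9)

def top_event_prob(n_samples=100_000):
    hits = 0
    for _ in range(n_samples):
        p_open = random.betavariate(a, b)   # sample the uncertain probability
        opened = random.random() < p_open   # basic event A: hull opening
        corroded = random.random() < 0.10   # basic event B: fixed, illustrative
        hits += opened or corroded
    return hits / n_samples

p_top = top_event_prob()
```

The analytical value here is 0.25 + 0.10 − 0.025 = 0.325; the simulation estimate carries the intrinsic uncertainty explicitly, which is the feature the method exploits for real wrecks.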
Molecular modeling study of chiral drug crystals: lattice energy calculations.
Li, Z J; Ojala, W H; Grant, D J
2001-10-01
The lattice energies of a number of chiral drugs with known crystal structures were calculated using the Dreiding II force field. The lattice energies, including van der Waals, Coulombic, and hydrogen-bonding energies, of homochiral and racemic crystals of some ephedrine derivatives and of several other chiral drugs, are compared. The calculated energies are correlated with experimental data to probe the underlying intermolecular forces responsible for the formation of racemic species, racemic conglomerates, or racemic compounds, termed chiral discrimination. Comparison of the calculated energies among ephedrine derivatives reveals that a greater Coulombic energy corresponds to a higher melting temperature, while a greater van der Waals energy corresponds to a larger enthalpy of fusion. For seven pairs of homochiral and racemic compounds, correlation of the differences between the two forms in the calculated energies and experimental enthalpy of fusion suggests that the van der Waals interactions play a key role in the chiral discrimination in the crystalline state. For salts of the chiral drugs, the counter ions diminish chiral discrimination by increasing the Coulombic interactions. This result may explain why salt forms favor the formation of racemic conglomerates, thereby facilitating the resolution of racemates.
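A force-field lattice energy of this kind is, in essence, a pairwise sum of van der Waals (12-6) and Coulombic terms over atoms. A minimal sketch with toy coordinates and a single epsilon/sigma pair, not Dreiding II parameters:

```python
import math

# Pairwise lattice-energy sum: Lennard-Jones 12-6 van der Waals plus Coulomb.
# Parameters and geometry are illustrative; a real force field assigns
# per-atom-type parameters and adds explicit hydrogen-bond terms.

def pair_energy(r, eps, sigma, q1, q2, coulomb_k=332.06):  # kcal·Å/(mol·e²)
    vdw = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coul = coulomb_k * q1 * q2 / r
    return vdw + coul

def lattice_energy(atoms, eps=0.2, sigma=3.4):
    """atoms: list of (x, y, z, charge) tuples; single atom type for brevity."""
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            xi, yi, zi, qi = atoms[i]
            xj, yj, zj, qj = atoms[j]
            r = math.dist((xi, yi, zi), (xj, yj, zj))
            e += pair_energy(r, eps, sigma, qi, qj)
    return e

# two neutral atoms at the 12-6 minimum separation r_min = 2^(1/6) * sigma,
# where the pair energy equals -eps
e2 = lattice_energy([(0.0, 0.0, 0.0, 0.0), (2 ** (1 / 6) * 3.4, 0.0, 0.0, 0.0)])
```

Comparing such sums between homochiral and racemic packings is what yields the vdW-versus-Coulombic decomposition discussed in the abstract.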
Fleming, E. L.; Burkholder, J. B.; Kurylo, M. J., III; Jackman, C. H.
2015-12-01
The atmospheric loss processes of CH4 and N2O, their estimated uncertainties, lifetimes, and impacts on ozone abundance and long-term trends are examined using atmospheric model calculations and updated kinetic and photochemical parameters and uncertainty factors from SPARC (2013). Uncertainties in CH4 loss due to reaction with OH and O(1D) have relatively small impacts on present day calculated global total ozone (±0.2-0.3%), with the OH+CH4 uncertainty impacting tropospheric ozone by ±3-5%. Uncertainty in the Cl+CH4 reaction affects the amount of chlorine in radical vs. reservoir forms and has a modest impact on present day SH polar ozone (~±6%), and on the rate of past SH polar ozone decline and future recovery. The O(1D)+N2O reaction has uncertainty in both the total rate coefficient and branching ratio for the O2+N2 and 2*NO product channels. This uncertainty results in a substantial range in present day stratospheric odd nitrogen (±10-25%) and global total ozone (±1-2.5%). This uncertainty also impacts the rate of past global total ozone decline and future recovery, with a range in future ozone projections of ±1-1.5% by 2100, relative to present day. The uncertainty ranges in calculated CH4 and N2O global lifetimes are also examined: these ranges are significantly reduced when using the updated SPARC estimated uncertainties compared with those from JPL-2010.
Towards a neural basis of music perception -- A review and updated model
Directory of Open Access Journals (Sweden)
Stefan Koelsch
2011-06-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch & Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.
[Social determinants of health and disability: updating the model for determination].
Tamayo, Mauro; Besoaín, Álvaro; Rebolledo, Jame
2017-03-05
Social determinants of health (SDH) are conditions in which people live. These conditions impact their lives, health status and social inclusion level. In line with the conceptual and comprehensive progression of disability, it is important to update the SDH due to their broad implications for implementing health interventions in society. This proposal supports incorporating disability in the model as a structural determinant, as it would lead to the same social inclusion/exclusion of people described in other structural SDH. This proposal encourages giving importance to designing and implementing public policies to improve societal conditions and contribute to social equity. This will be an act of reparation, justice and fulfilment of the Convention on the Rights of Persons with Disabilities.
Toward a Neural Basis of Music Perception – A Review and Updated Model
Koelsch, Stefan
2011-01-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located. PMID:21713060
Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Hartov, Alex; Hiemenz Holton, Leslie; Roberts, David W.; Paulsen, Keith D.; Simon, David A.
2013-03-01
Dartmouth and Medtronic have established an academic-industrial partnership to develop, validate, and evaluate a multimodality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. Previous studies have shown that brain shift compensation through a modeling framework using intraoperative ultrasound and/or visible light stereovision to update preoperative MRI appears to result in improved accuracy in navigation. However, image updates have thus far only been produced retrospective to surgery in large part because of gaps in the software integration and information flow between the co-registration and tracking, image acquisition and processing, and image warping tasks which are required during a case. This paper reports the first demonstration of integration of a deformation-based image updating process for brain shift modeling with an industry-standard image guided surgery platform. Specifically, we have completed the first and most critical data transfer operation to transmit volumetric image data generated by the Dartmouth brain shift modeling process to the Medtronic StealthStation® system. StealthStation® comparison views, which allow the surgeon to verify the correspondence of the received updated image volume relative to the preoperative MRI, are presented, along with other displays of image data such as the intraoperative 3D ultrasound used to update the model. These views and data represent the first time that externally acquired and manipulated image data has been imported into the StealthStation® system through the StealthLink® portal and visualized on the StealthStation® display.
Experimental test of spatial updating models for monkey eye-head gaze shifts.
Directory of Open Access Journals (Sweden)
Tom J Van Grootel
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
Perturbation theory calculations of model pair potential systems
Energy Technology Data Exchange (ETDEWEB)
Gong, Jianwu [Iowa State Univ., Ames, IA (United States)
2016-01-01
Helmholtz free energy is one of the most important thermodynamic properties of condensed matter systems. It is closely related to other thermodynamic properties such as chemical potential and compressibility. It is also the starting point for studies of interfacial properties and phase coexistence if the free energies of different phases can be obtained. In this thesis, we will use an approach based on the Weeks-Chandler-Andersen (WCA) perturbation theory to calculate the free energy of both solid and liquid phases of Lennard-Jones pair potential systems and the free energy of liquid states of Yukawa pair potentials. Our results indicate that perturbation theory provides an accurate approach to the free energy calculations of liquid and solid phases, based upon comparisons with results from molecular dynamics (MD) and Monte Carlo (MC) simulations.
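The WCA construction underlying this approach splits the Lennard-Jones potential at its minimum r_min = 2^(1/6)·σ into a purely repulsive reference u0 and an attractive perturbation u1, with u_LJ = u0 + u1 everywhere. A sketch in reduced LJ units:

```python
# WCA split of the Lennard-Jones potential (reduced units, eps = sigma = 1).
# u0 is the shifted repulsive reference used for the hard-sphere-like
# reference system; u1 is the perturbation treated to first order.

def u_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def wca_split(r, eps=1.0, sigma=1.0):
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    if r < r_min:
        u0 = u_lj(r, eps, sigma) + eps   # repulsive part, shifted to 0 at r_min
        u1 = -eps                        # constant attractive contribution
    else:
        u0 = 0.0
        u1 = u_lj(r, eps, sigma)         # pure attraction beyond r_min
    return u0, u1

# sanity check: the split reconstructs the full potential at any separation
for r in (0.95, 1.0, 1.2, 2.0):
    u0, u1 = wca_split(r)
    assert abs(u0 + u1 - u_lj(r)) < 1e-12
```

The first-order free energy then adds the ensemble average of u1 over the reference system to the reference free energy, which is the quantity compared against MD/MC results in the thesis.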
On-line updating of a distributed flow routing model - River Vistula case study
Karamuz, Emilia; Romanowicz, Renata; Napiorkowski, Jaroslaw
2015-04-01
This paper presents an application of methods of on-line updating in the River Vistula flow forecasting system. All flow-routing codes make simplifying assumptions and consider only a reduced set of the processes known to occur during a flood. Hence, all models are subject to a degree of structural error that is typically compensated for by calibration of the friction parameters. Calibrated parameter values are not, therefore, physically realistic, as in estimating them we also make allowance for a number of distinctly non-physical effects, such as model structural error and any energy losses or flow processes which occur at sub-grid scales. Calibrated model parameters are therefore area-effective, scale-dependent values which are not drawn from the same underlying statistical distribution as the equivalent at-a-point parameter of the same name. The aim of this paper is the derivation of real-time updated, on-line flow forecasts at certain strategic locations along the river, over a specified time horizon into the future, based on information on the behaviour of the flood wave upstream and available on-line measurements at a site. Depending on the length of the river reach and the slope of the river bed, a realistic forecast lead time, obtained in this manner, may range from hours to days. The information upstream can include observations of river levels and/or rainfall measurements. The proposed forecasting system will integrate distributed modelling, acting as a spatial interpolator with lumped parameter Stochastic Transfer Function models. Daily stage data from gauging stations are typically available at sites 10-60 km apart and test only the average routing performance of hydraulic models and not their ability to produce spatial predictions. Application of a distributed flow routing model makes it possible to interpolate forecasts both in time and space. This work was partly supported by the project "Stochastic flood forecasting system (The River Vistula reach
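The simplest form of the on-line updating idea is to correct each new model forecast with a fraction of the most recent observed residual, a scalar stand-in for the Stochastic Transfer Function component. The gain and flow values below are illustrative assumptions, not the Vistula system's calibrated quantities:

```python
# On-line error correction: the deterministic routing model's next forecast
# is adjusted by an AR(1)-style persistence of its latest residual.

def updated_forecast(model_forecast, last_observation, last_model_value,
                     phi=0.7):
    """Correct the next model forecast using the latest observed residual."""
    residual = last_observation - last_model_value
    return model_forecast + phi * residual

# model said 410 m3/s last step but 430 was observed; next raw forecast is 420
corrected = updated_forecast(420.0, last_observation=430.0,
                             last_model_value=410.0)
```

In the full system this correction is applied at each gauging station, with the distributed model interpolating between stations in space.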
Thermochemical data for CVD modeling from ab initio calculations
Energy Technology Data Exchange (ETDEWEB)
Ho, P. [Sandia National Labs., Albuquerque, NM (United States); Melius, C.F. [Sandia National Labs., Livermore, CA (United States)
1993-12-31
Ab initio electronic-structure calculations are combined with empirical bond-additivity corrections to yield thermochemical properties of gas-phase molecules. A self-consistent set of heats of formation for molecules in the Si-H, Si-H-Cl, Si-H-F, Si-N-H and Si-N-H-F systems is presented, along with preliminary values for some Si-O-C-H species.
Slab2 - Providing updated subduction zone geometries and modeling tools to the community
Hayes, G. P.; Hearne, M. G.; Portner, D. E.; Borjas, C.; Moore, G.; Flamme, H.
2015-12-01
The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) combines a variety of geophysical data sets (earthquake hypocenters, moment tensors, active source seismic survey images of the shallow subduction zone, bathymetry, trench locations, and sediment thickness information) to image the shape of subducting slabs in three dimensions, at approximately 85% of the world's convergent margins. The database is used extensively for a variety of purposes, from earthquake source imaging, to magnetotelluric modeling. Gaps in Slab1.0 exist where input data are sparse and/or where slabs are geometrically complex (and difficult to image with an automated approach). Slab1.0 also does not include information on the uncertainty in the modeled geometrical parameters, or the input data used to image them, and provides no means to reproduce the models it described. Currently underway, Slab2 will update and replace Slab1.0 by: (1) extending modeled slab geometries to all global subduction zones; (2) incorporating regional data sets that may describe slab geometry in finer detail than do previously used teleseismic data; (3) providing information on the uncertainties in each modeled slab surface; (4) modifying our modeling approach to a fully-three dimensional data interpolation, rather than following the 2-D to 3-D steps of Slab1.0; (5) migrating the slab modeling code base to a more universally distributable language, Python; and (6) providing the code base and input data we use to create our models, such that the community can both reproduce the slab geometries, and add their own data sets to ours to further improve upon those models in the future. In this presentation we describe our vision for Slab2, and the first results of this modeling process.
Energy Technology Data Exchange (ETDEWEB)
Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
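A central element of such simulations is computing segment travel speed as a function of vehicle density. A hedged sketch using the classical linear Greenshields relation as a stand-in (CLEAR's actual speed-density function may differ), with illustrative parameter values:

```python
# Density-dependent segment speed and traversal time, Greenshields-style:
# speed falls linearly from free-flow speed to zero at jam density.

def segment_speed(density, free_flow_speed=55.0, jam_density=200.0):
    """Speed (mph) on a segment given density (vehicles per mile)."""
    if density >= jam_density:
        return 0.0
    return free_flow_speed * (1.0 - density / jam_density)

def traversal_time_hours(length_miles, density):
    v = segment_speed(density)
    return float("inf") if v == 0.0 else length_miles / v

t_light = traversal_time_hours(2.0, density=20.0)    # light traffic
t_heavy = traversal_time_hours(2.0, density=150.0)   # heavy traffic
```

An evacuation simulator iterates this per segment and time step, adding queueing delay at intersections and sampling preparation times per household.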
The hourly updated US High-Resolution Rapid Refresh (HRRR) storm-scale forecast model
Alexander, Curtis; Dowell, David; Benjamin, Stan; Weygandt, Stephen; Olson, Joseph; Kenyon, Jaymes; Grell, Georg; Smirnova, Tanya; Ladwig, Terra; Brown, John; James, Eric; Hu, Ming
2016-04-01
The 3-km convective-allowing High-Resolution Rapid Refresh (HRRR) is a US NOAA hourly updating weather forecast model that uses a specially configured version of the Advanced Research WRF (ARW) model and assimilates many novel and most conventional observation types on an hourly basis using Gridpoint Statistical Interpolation (GSI). Included in this assimilation are a procedure for initializing ongoing precipitation systems from observed radar reflectivity data (and proxy reflectivity from lightning and satellite data), a cloud analysis to initialize stable-layer clouds from METAR and satellite observations, and special techniques to enhance retention of surface observation information. The HRRR is run hourly out to 15 forecast hours over a domain covering the entire conterminous United States using initial and boundary conditions from the hourly-cycled 13-km Rapid Refresh (RAP, using similar physics and data assimilation) covering North America and a significant part of the Northern Hemisphere. The HRRR is continually developed and refined at NOAA's Earth System Research Laboratory, and an initial version was implemented into the operational NOAA/NCEP production suite in September 2014. Ongoing experimental RAP and HRRR model development throughout 2014 and 2015 has culminated in a set of data assimilation and model enhancements that will be incorporated into the first simultaneous upgrade of both the operational RAP and HRRR, scheduled for spring 2016 at NCEP. This presentation will discuss the operational RAP and HRRR changes contained in this upgrade. The RAP domain is being expanded to encompass the NAM domain, and the forecast lengths of both the RAP and HRRR are being extended. RAP and HRRR assimilation enhancements have focused on (1) extending surface data assimilation to include mesonet observations and improved use of all surface observations through better background estimates of 2-m temperature and dewpoint including projection of 2-m temperature
Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we presented an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called phase-space ring (PSR) was proposed. It contained a group of particles that are of the same type and close in energy and radial distance to the center of the phase-space plane. The model parameterized probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distribution...
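The finite-beam-spot part of such a source model amounts to drawing particle origins from a 2D Gaussian on the phase-space plane. A minimal sketch with an arbitrary illustrative spot size (not the paper's fitted parameters):

```python
import random

# Sample primary-photon origins from an isotropic 2D Gaussian beam spot.
# sigma_mm is a made-up spot size; a fitted model would estimate it from
# phase-space data per energy/machine.

random.seed(7)

def sample_spot(sigma_mm=1.0, n=20_000):
    """Draw (x, y) origins on the phase-space plane from a 2D Gaussian."""
    return [(random.gauss(0.0, sigma_mm), random.gauss(0.0, sigma_mm))
            for _ in range(n)]

pts = sample_spot()
mean_x = sum(p[0] for p in pts) / len(pts)
```

Each sampled origin then defines the photon's direction toward its phase-space-plane location; scattered-photon and electron rings would use mixtures of such Gaussians.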
Directory of Open Access Journals (Sweden)
B. Gantt
2015-05-01
Full Text Available Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Despite their importance, the emission magnitude of SSA remains highly uncertain, with global estimates varying by nearly two orders of magnitude. In this study, the Community Multiscale Air Quality (CMAQ) model was updated to enhance fine-mode SSA emissions, include a sea surface temperature (SST) dependency, and reduce coastally enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several regional and national observational datasets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for an inland site of the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Adding the SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeast US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in modest improvement in the predicted surface concentrations of sodium and nitrate at several central and southern California coastal sites. This SSA emission update enabled a more realistic simulation of the atmospheric chemistry in environments where marine air mixes with urban pollution.
An effective model for dynamic finite difference calculations
Energy Technology Data Exchange (ETDEWEB)
Dey, T.N.
1996-01-01
An effective stress model, which simulates the mechanical effects of pore fluids on deformation and strength of porous materials, is described. The model can directly use SESAME table equations-of-state (EOSs) for the solid and fluid components. The model assumes that undrained (no fluid flow) conditions occur. Elastic and crushing behavior of the pore space can be specified from the results of simple laboratory tests. The model fully couples deviatoric and volumetric behavior in the sense that deviatoric and tensile failure depend on the effective pressure, while volumetric changes caused by deviatoric failure are coupled back to the volumetric behavior of the material. Strain hardening and softening of the yield surface, together with a number of flow rules, can be modeled. This model has been implemented into the SMC123 and CTH codes.
Two-phase relative permeability models in reservoir engineering calculations
Energy Technology Data Exchange (ETDEWEB)
Siddiqui, S.; Hicks, P.J.; Ertekin, T.
1999-01-15
A comparison of ten two-phase relative permeability models is conducted using experimental, semianalytical and numerical approaches. Model-predicted relative permeabilities are compared with data from 12 steady-state experiments on Berea and Brown sandstones using combinations of three white mineral oils and 2% CaCl2 brine. The model results are compared against the experimental data using three different criteria. The models are found to predict the relative permeability to oil, relative permeability to water and fractional flow of water with varying degrees of success. Relative permeability data from four of the experimental runs are used to predict the displacement performance under Buckley-Leverett conditions and the results are compared against those predicted by the models. Finally, waterflooding performances predicted by the models are analyzed at three different viscosity ratios using a two-dimensional, two-phase numerical reservoir simulator. (author)
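Although the paper compares ten published models, the flavour of such two-phase relative permeability correlations can be illustrated with a Corey-type power-law model. Endpoint saturations, endpoint permeabilities, exponents and viscosities below are illustrative assumptions, not values from the study:

```python
def corey_relperm(sw, swc=0.2, sor=0.2, krw_max=0.3, kro_max=0.8, nw=2.0, no=2.0):
    """Corey-type two-phase relative permeability curves.

    One of many empirical forms; all endpoint values and exponents here
    are illustrative. `swc` is connate water saturation, `sor` residual
    oil saturation.
    """
    # Normalised (effective) water saturation, clipped to [0, 1]
    swn = (sw - swc) / (1.0 - swc - sor)
    swn = min(max(swn, 0.0), 1.0)
    krw = krw_max * swn ** nw           # relative permeability to water
    kro = kro_max * (1.0 - swn) ** no   # relative permeability to oil
    return krw, kro

def fractional_flow_water(sw, mu_w=1.0, mu_o=5.0):
    """Fractional flow of water, ignoring gravity and capillary pressure."""
    krw, kro = corey_relperm(sw)
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)
```

Curves like these feed directly into the Buckley-Leverett displacement calculation mentioned in the abstract.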
Chemically reacting supersonic flow calculation using an assumed PDF model
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
Extraproximal approach to calculating equilibriums in pure exchange models
Antipin, A. S.
2006-10-01
Models of economic equilibrium are a powerful tool of mathematical modeling of various markets. However, according to many publications, there are as yet no universal techniques for finding equilibrium prices that are solutions to such models. A technique of this kind that is a natural implementation of the Walras idea of tatonnements (i.e., groping for equilibrium prices) is proposed, and its convergence is proved.
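The Walrasian tatonnement idea referred to above (raise the price of a good in excess demand, lower it in excess supply, until the market clears) can be sketched for a single relative price. This is a plain price-adjustment iteration for illustration, not the paper's extraproximal method, which handles the general case with a convergence proof:

```python
def tatonnement(excess_demand, p0, step=0.1, tol=1e-8, max_iter=10000):
    """Naive tatonnement: move the price in the direction of excess demand.

    `excess_demand(p)` returns demand minus supply at price p. Stops when
    the market approximately clears. Illustrative sketch only.
    """
    p = p0
    for _ in range(max_iter):
        z = excess_demand(p)
        if abs(z) < tol:
            break
        p = max(p + step * z, 1e-12)  # keep the price positive
    return p

# Toy market: demand 10/p, supply p  ->  equilibrium price sqrt(10)
p_star = tatonnement(lambda p: 10.0 / p - p, p0=1.0)
```

For this toy market the iteration converges to p* = sqrt(10), where demand equals supply; the paper's point is that such naive groping need not converge in general, which motivates the extraproximal construction.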
HEMCO v1.0: A versatile, ESMF-compliant component for calculating emissions in atmospheric models
Directory of Open Access Journals (Sweden)
C. A. Keller
2014-01-01
Full Text Available We describe the Harvard-NASA Emission Component version 1.0 (HEMCO), a stand-alone software component for computing emissions in global atmospheric models. HEMCO determines emissions from different sources, regions and species on a user-specified grid and can combine, overlay, and update a set of data inventories and scale factors, selected by the user from a data library through the HEMCO configuration file. New emission inventories at any spatial and temporal resolution are readily added to HEMCO and can be accessed by the user without any pre-processing of the data files or modification of the source code. Emissions that depend on dynamic source types and local environmental variables such as wind speed or surface temperature are calculated in separate HEMCO extensions. HEMCO is fully compliant with the Earth System Modeling Framework (ESMF) environment. It is highly portable and can be deployed in a new model environment with only a few adjustments at the top-level interface. So far, we have implemented HEMCO in the NASA GEOS-5 Earth System Model (ESM) and in the GEOS-Chem chemical transport model (CTM). By providing a widely applicable framework for specifying constituent emissions, HEMCO is designed to ease sensitivity studies and model comparisons, as well as inverse modeling in which emissions are adjusted iteratively. The HEMCO code, extensions, and data libraries are available at http://wiki.geos-chem.org/HEMCO.
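The combine/overlay/scale behaviour described above can be sketched conceptually on a toy 1-D "grid": base inventories are summed, multiplicative scale factors applied, and regional overlays take precedence wherever they are defined. Function and argument names are illustrative, not the HEMCO API:

```python
def assemble_emissions(base_inventories, scale_factors, overlays):
    """Conceptual sketch of combining gridded emission data.

    `base_inventories`: list of per-cell emission lists, summed together.
    `scale_factors`: list of per-cell multiplicative factor lists.
    `overlays`: list of per-cell lists where a number replaces the base
    value and None means "no overlay here". Toy 1-D grid for illustration.
    """
    n = len(base_inventories[0])
    # Sum the base inventories cell by cell
    total = [sum(inv[i] for inv in base_inventories) for i in range(n)]
    # Apply multiplicative scale factors
    for factors in scale_factors:
        for i in range(n):
            total[i] *= factors[i]
    # Regional overlays replace the combined base values where defined
    for overlay in overlays:
        for i in range(n):
            if overlay[i] is not None:
                total[i] = overlay[i]
    return total

grid = assemble_emissions(
    base_inventories=[[1.0, 2.0, 3.0]],
    scale_factors=[[1.0, 0.5, 1.0]],
    overlays=[[None, None, 9.0]],
)
# grid == [1.0, 1.0, 9.0]
```

In HEMCO itself, which inventories, factors and overlays apply, and in what hierarchy, is driven by the configuration file rather than hard-coded arguments.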
Long-Term Calculations with Large Air Pollution Models
DEFF Research Database (Denmark)
1999-01-01
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
PACIAE 2.1: An Updated Issue of Parton and Hadron Cascade Model PACIAE 2.0
Institute of Scientific and Technical Information of China (English)
SA Ben-hao; ZHOU Dai-mei; YAN Yu-liang; DONG Bao-guo; CAI Xu
2013-01-01
We have updated the parton and hadron cascade model PACIAE 2.0 to the new issue of PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the generated particle or parton transverse momentum pT is randomly sampled, the px and py components are originally put on the circle with radius pT randomly. Now, it is put
Carbon fiber dispersion models used for risk analysis calculations
1979-01-01
For evaluating the downwind, ground level exposure contours from carbon fiber dispersion, two fiber release scenarios were chosen. The first is the fire and explosion release in which all of the fibers are released instantaneously. This model applies to accident scenarios where an explosion follows a short-duration fire in the aftermath of the accident. The second is the plume release scenario in which the total mass of fibers is released into the fire plume. This model applies to aircraft accidents where only a fire results. These models are described in detail.
Mooring Model Experiment and Mooring Line Force Calculation
Institute of Scientific and Technical Information of China (English)
向溢; 谭家华; 杨建民; 张承懿
2001-01-01
Mooring model experiment and mooring line tension determination are of significance to the design of mooring systems and berthing structures. This paper mainly involves: (a) description and analysis of a mooring model experiment; (b) derivation of static equilibrium equations for a moored ship subjected to wind, current and waves; (c) solution of the mooring equations with the Monte Carlo method; (d) qualitative analysis of the effects of pier piles on mooring line forces. Special emphasis is placed on the derivation of the static equilibrium equations, the solution method and the mooring model experiment.
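The Monte Carlo solution of the static equilibrium equations mentioned in (c) can, in its simplest form, be viewed as sampling candidate ship states and keeping the one that best balances the environmental and mooring forces. A toy single-degree-of-freedom sketch; all names and numbers are illustrative, not from the paper:

```python
import random

def solve_equilibrium(residual, bounds, n_samples=20000, seed=1):
    """Monte Carlo search for a static-equilibrium state.

    Draws candidate states uniformly within `bounds` and keeps the one
    with the smallest force/moment residual. A crude illustration; a
    practical solver would refine the best sample further.
    """
    rng = random.Random(seed)
    best_x, best_r = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        r = residual(x)
        if r < best_r:
            best_x, best_r = x, r
    return best_x, best_r

# Toy 1-DOF "mooring": linear line stiffness k resisting a steady wind force
k, f_wind = 200.0, 50.0                   # N/m and N, illustrative values
res = lambda x: abs(k * x[0] - f_wind)    # |restoring - environmental| force
x_eq, r_eq = solve_equilibrium(res, bounds=[(0.0, 1.0)])
# equilibrium offset near f_wind / k = 0.25 m
```

The real mooring equations couple surge, sway and yaw with nonlinear line characteristics, but the sampling idea is the same.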
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
, and then evaluates and documents the performance of this particular updating procedure for flow forecasting. A hypothetical case study and synthetic observations are used to illustrate how the Update method works and affects downstream nodes. A real case study in a 544 ha urban catchment furthermore shows...
Dekker, C. M.; Sliggers, C. J.
To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria the developers of these models and producers of the software packages in this field can assure and account for the quality of their products. In this way critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming—including user-friendliness, and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.
Mathematical model partitioning and packing for parallel computer calculation
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
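The packing step described above can be illustrated with the standard first-fit-decreasing heuristic for assigning equation evaluation costs to processors of fixed capacity; the paper's actual algorithm may differ:

```python
def pack_equations(costs, capacity):
    """Greedy first-fit-decreasing packing of equation costs into processors.

    `costs[i]` is the evaluation cost of equation i; each processor can
    hold at most `capacity` units of cost. Returns a list of processors,
    each a list of equation indices. Illustrative heuristic only.
    """
    bins = []  # each bin: [remaining_capacity, [equation indices]]
    # Place the most expensive equations first
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    for i in order:
        for b in bins:
            if costs[i] <= b[0]:       # fits in an existing processor
                b[0] -= costs[i]
                b[1].append(i)
                break
        else:                          # open a new processor
            bins.append([capacity - costs[i], [i]])
    return [b[1] for b in bins]

procs = pack_equations([7, 5, 4, 3, 1], capacity=10)
# -> [[0, 3], [1, 2, 4]]: two processors, loads 10 and 10
```

First-fit-decreasing is a well-known bin-packing heuristic; minimising the number of processors exactly is NP-hard, which is why heuristics are used.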
The Martian Plasma Environment: Model Calculations and Observations
Lichtenegger, H. I. M.; Dubinin, E.; Schwingenschuh, K.; Riedler, W.
Based on a modified version of the model of an induced Martian magnetosphere developed by Luhmann (1990), the dynamics and spatial distribution of different planetary ion species are examined. Three main regions are identified: a cloud of ions travelling along cycloidal trajectories, a plasma mantle and a plasma sheet. The latter predominantly consists of oxygen ions of ionospheric origin with minor portions of light particles. Comparison of model results with Phobos-2 observations shows reasonable agreement.
2013-01-01
Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743
An updated analytic model for the attenuation by the intergalactic medium
Inoue, Akio K; Iwata, Ikuru
2014-01-01
We present an updated version of the so-called Madau model for the attenuation by intergalactic neutral hydrogen of the radiation from distant objects. First, we derive a distribution function of the intergalactic absorbers from the latest observational statistics of the Ly$\\alpha$ forest, Lyman limit systems, and damped Ly$\\alpha$ systems. The distribution function excellently reproduces the observed redshift evolutions of the Ly$\\alpha$ depression and of the mean free path of the Lyman continuum simultaneously. Then, we derive a set of analytic functions which describe the mean intergalactic attenuation curve for objects at $z>0.5$. Our new model predicts, for some redshifts, attenuation magnitudes through usual broad-band filters that differ by more than 0.5--1 mag from the original Madau model. Such a difference would cause a photometric redshift uncertainty of 0.2, in particular at $z\\simeq3$--4. Finally, we find a more than 0.5 mag overestimation of the Lyman continuum attenuation i...
Lattice location of dopant atoms: An n-body model calculation
Indian Academy of Sciences (India)
N K Deepak
2010-03-01
The channelling and scattering yields of 1 MeV $\\alpha$-particles in the $\\langle 1 0 0 \\rangle$, $\\langle 1 1 0 \\rangle$ and $\\langle 1 1 1 \\rangle$ directions of silicon implanted with bismuth and ytterbium have been simulated using an $n$-body model. The close encounter yield from dopant atoms in silicon is determined from the flux density, using the Bontemps and Fontenille method. All previous works reported in the literature so far have been done with computer programmes using a statistical analytical expression, a binary collision model or a continuum model. These results at best gave only the transverse displacement of the lattice site from the concerned channelling direction. Here we applied the superior $n$-body model to study the yield from bismuth in silicon. The finding that the bismuth atom occupies a position close to the silicon substitutional site is new. The transverse displacement of the suggested lattice site from the channelling direction is consistent with the experimental results. The above model is also applied to determine the location of ytterbium in silicon. The present values show good agreement with the experimental results.
An hydrodynamic model for the calculation of oil spills trajectories
Energy Technology Data Exchange (ETDEWEB)
Paladino, Emilio Ernesto; Maliska, Clovis Raimundo [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Dinamica dos Fluidos Computacionais]. E-mails: emilio@sinmec.ufsc.br; maliska@sinmec.ufsc.br
2000-07-01
The aim of this paper is to present a mathematical model and its numerical treatment to forecast oil spill trajectories in the sea. The knowledge of the trajectory followed by an oil slick spilled on the sea is of fundamental importance in the estimation of potential risks for pipeline and tanker route selection, and in combating the pollution using floating barriers, detergents, etc. In order to estimate these slick trajectories a new model, based on the mass and momentum conservation equations, is presented. The model considers the spreading in the regimes where the inertial and viscous forces counterbalance gravity and takes into account the effects of winds and water currents. The inertial forces are considered for the spreading and the displacement of the oil slick, i.e., their effects on the movement of the mass center of the slick are considered. The mass loss caused by oil evaporation is also taken into account. The numerical model is developed in generalized coordinates, making the model easily applicable to complex coastal geographies. (author)
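The wind and current forcing of the slick's mass center described above can be caricatured by a single explicit advection step. The 3% wind-drift factor below is a commonly used empirical value, not taken from this paper, whose model solves the full mass and momentum conservation equations:

```python
def advect_slick(x, y, current, wind, dt, wind_factor=0.03):
    """One explicit time step for the slick mass center (metres, m/s, s).

    Drift velocity = surface current plus a small empirical fraction of
    the wind speed. Illustrative sketch only; the paper's model also
    treats spreading, inertia and evaporation.
    """
    u = current[0] + wind_factor * wind[0]   # east component
    v = current[1] + wind_factor * wind[1]   # north component
    return x + u * dt, y + v * dt

# One-hour step with a 0.2 m/s eastward current and a 10 m/s northward wind
x1, y1 = advect_slick(0.0, 0.0, current=(0.2, 0.0), wind=(0.0, 10.0), dt=3600.0)
# x1 = 720 m east, y1 = 1080 m north
```

Repeating such steps with time-varying wind and current fields traces out the forecast trajectory.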
Gonzálvez, Alicia G; González Ureña, Ángel
2012-10-01
A laser spectroscopic technique is described that combines transmission and resonance-enhanced Raman inelastic scattering together with low laser power. From a theoretical point of view, a model for the Raman signal dependence on the sample thickness is also presented. Essentially, the model considers the sample to be homogeneous and describes the underlying physics using only three parameters: the Raman cross-section, the laser-radiation attenuation cross-section, and the Raman signal attenuation cross-section. The model was applied successfully to describe the sample-size dependence of the Raman signal in both β-carotene standards and carrot roots. The present technique could be useful for direct, fast, and nondestructive investigations in food quality control and analytical or physiological studies of animal and human tissues.
FEM Updating of the Heritage Court Building Structure
DEFF Research Database (Denmark)
Ventura, C. E.; Brincker, Rune; Dascotte, E.
2001-01-01
This paper describes results of a model updating study conducted on a 15-storey reinforced concrete shear core building. The output-only modal identification results obtained from ambient vibration measurements of the building were used to update a finite element model of the structure. The starting model of the structure was developed from the information provided in the design documentation of the building. Different parameters of the model were then modified using an automated procedure to improve the correlation between measured and calculated modal parameters. Careful attention was paid to the selection of the parameters to be modified by the updating software in order to ensure that the necessary changes to the model were realistic and physically realisable and meaningful. The paper highlights the model updating process and provides an assessment of the usefulness of using...
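The automated parameter-modification loop described above can be reduced to a one-parameter caricature: tune a stiffness until the analytical natural frequency of a 1-DOF model matches the measured one. All names and values are illustrative, not from the study, which updates many parameters of a full FE model:

```python
import math

def update_stiffness(f_measured, m, k0, iters=50):
    """One-parameter model updating sketch.

    Newton iteration on the residual f(k) - f_measured, where the 1-DOF
    analytical natural frequency is f = sqrt(k/m) / (2*pi). Illustrative
    stand-in for automated FE model updating.
    """
    k = k0
    for _ in range(iters):
        f = math.sqrt(k / m) / (2.0 * math.pi)
        df_dk = f / (2.0 * k)             # d f / d k for f = sqrt(k/m)/(2*pi)
        k -= (f - f_measured) / df_dk     # Newton step
    return k

# Measured 2.0 Hz mode, 1000 kg mass, initial stiffness guess 1.0e5 N/m
k_upd = update_stiffness(f_measured=2.0, m=1000.0, k0=1.0e5)
# converges to k = m * (2*pi*f)^2, roughly 1.58e5 N/m
```

Real updating software minimises the misfit over many frequencies and mode shapes simultaneously, with constraints keeping the parameter changes physically meaningful.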
A calculation model for the noise from steel railway bridges
Janssens, M.H.A.; Thompson, D.J.
1996-01-01
The sound level of a train crossing a steel railway bridge is usually about 10 dB higher than on plain track. In the Netherlands there are many such bridges which, for practical reasons, cannot be replaced by more intrinsically quiet concrete bridges. A computational model is described for the
Calculation of benchmarks with a shear beam model
Ferreira, D.
2015-01-01
Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
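A complete viscosity curve of the kind such a global model predicts is commonly represented by the Vogel-Fulcher-Tammann (VFT) form; in a composition-based statistical model, the VFT coefficients are themselves fitted functions of composition. The coefficients below are illustrative, not the paper's fitted values:

```python
import math

def vft_log_viscosity(T, A, B, T0):
    """Vogel-Fulcher-Tammann form: log10(viscosity) = A + B / (T - T0).

    T in kelvin (or degrees Celsius, as long as T0 matches); A, B, T0 are
    illustrative composition-dependent coefficients, valid for T > T0.
    """
    return A + B / (T - T0)

# Illustrative soda-lime-like coefficients at a melting-range temperature
log_eta = vft_log_viscosity(1400.0, A=-2.5, B=4500.0, T0=250.0)
```

A global model like the one described fits thousands of composition-viscosity data points at once, which is what allows it to flag and correct systematic measurement errors such as alkali evaporation.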
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010
Energy Technology Data Exchange (ETDEWEB)
Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro
2010-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
Badhwar-O'Neill 2011 Galactic Cosmic Ray Model Update and Future Improvements
O'Neill, Pat M.; Kim, Myung-Hee Y.
2014-01-01
The Badhwar-O'Neill Galactic Cosmic Ray (GCR) Model, based on actual GCR measurements, is used by deep space mission planners for the certification of micro-electronic systems and the analysis of radiation health risks to astronauts in space missions. The BO GCR Model provides GCR flux in deep space (outside the earth's magnetosphere) for any given time from 1645 to present. The energy spectrum from 50 MeV/n-20 GeV/n is provided for ions from hydrogen to uranium. This work describes the most recent version of the BO GCR model (BO'11). BO'11 determines the GCR flux at a given time by applying an empirical time delay function to past sunspot activity. We describe the GCR measurement data used in the BO'11 update - modern data from BESS, PAMELA, CAPRICE, and ACE, emphasized over the older balloon data used for the previous BO model (BO'10). We look at the GCR flux for the last 24 solar minima and show how much greater the flux was for the cycle 24 minimum in 2010. The BO'11 Model uses the traditional, steady-state Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration. It assumes a radially symmetrical diffusion coefficient derived from magnetic disturbances caused by sunspots carried onward by a constant solar wind. A more complex differential equation is now being tested to account for particle transport in the heliosphere in the next-generation BO model. This new model is time-dependent (no longer a steady-state model). In the new model, the dynamics and anti-symmetrical features of the actual heliosphere are accounted for, so empirical time delay functions will no longer be required. The new model will be capable of simulating the more subtle features of modulation - such as the Sun's polarity and modulation dependence on the gradient and curvature drift. This improvement is expected to significantly improve the fidelity of the BO GCR model. Preliminary results of its
Energy Technology Data Exchange (ETDEWEB)
Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)
2013-02-15
The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Institute of Scientific and Technical Information of China (English)
陈建彬; 吕小强
2011-01-01
Aiming at the fact that energy and mass exchange phenomena exist between the barrel and the gas-operated device of an automatic weapon, and in order to describe its interior ballistics and the dynamic characteristics of the gas-operated device accurately, a new variable-mass thermodynamics model is built. It is used to calculate the automatic mechanism velocity of a certain automatic weapon; the calculation results agree well with the experimental results, and thus the model is validated. The influences of structure parameters on the gas-operated device's dynamic characteristics are discussed. It shows that the model is valuable for the design and accurate performance prediction of gas-operated automatic weapons.
Update of an Object Oriented Track Reconstruction Model for LHC Experiments
Institute of Scientific and Technical Information of China (English)
David Candilin; Sijin Qian; et al.
2001-01-01
In this update report about an Object Oriented (OO) track reconstruction model, which was presented at CHEP'97, CHEP'98, and CHEP'2000, we shall describe subsequent new developments since the beginning of year 2000. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It has been coded in the C++ programming language originally for the CMS experiment at the future Large Hadron Collider (LHC) at CERN, and later has been successfully implemented into three different OO computing environments (including the level-2 trigger and offline software systems) of ATLAS (another major experiment at the LHC). For the level-2 trigger software environment, we shall selectively present some latest performance results (e.g. the B-physics event selection for the ATLAS level-2 trigger, the robustness study result, etc.). For the offline environment, we shall present a new 3-D space point package which provides the essential offline input. A major development after CHEP'2000 is the implementation of the OO model into the new OO software framework "Athena" of the ATLAS experiment. The new modularization of this OO package enables the model to be more flexible and to be more easily implemented into different software environments. Also it provides the potential to handle the more complicated realistic situation (e.g. to include the calibration correction and the alignment correction, etc.). Some general interface issues (e.g. design of the common track class) of the algorithms to different framework environments have been investigated by using this OO package.
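The Kalman filtering method named above processes a track hit by hit, alternating prediction and measurement-update steps. A minimal scalar measurement update, with illustrative numbers (the real filter works on multi-dimensional track-state vectors with full covariance matrices), looks like this:

```python
def kalman_update(x, P, z, H=1.0, R=1.0):
    """Single Kalman-filter measurement update, scalar form.

    State estimate x with variance P is combined with a measurement z of
    variance R through the measurement model H. Illustrative 1-D sketch
    of the core operation a track-reconstruction filter repeats per hit.
    """
    y = z - H * x              # innovation (measurement residual)
    S = H * P * H + R          # innovation variance
    K = P * H / S              # Kalman gain
    x_new = x + K * y          # updated state estimate
    P_new = (1.0 - K * H) * P  # updated (reduced) variance
    return x_new, P_new

# Prior x=0 with variance 4, measurement z=2 with unit variance
x, P = kalman_update(x=0.0, P=4.0, z=2.0)
# gain 0.8 -> x = 1.6, P = 0.8
```

Because each hit shrinks the state variance, the filter both smooths the trajectory and quantifies the remaining uncertainty, which is what makes it attractive for trigger-level track selection.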
Nuclear model calculations and their role in space radiation research
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. ©2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model
Energy Technology Data Exchange (ETDEWEB)
Taylor, G. A.; Hiergesell, R. A.
2013-11-12
The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produced nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
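GoldSim's advantage of simulating all radionuclides simultaneously comes down to solving coupled decay chains in one pass. A minimal sketch of the underlying decay-chain arithmetic, using the analytic Bateman solution for a hypothetical two-member chain (the decay constants are illustrative, not from the report):

```python
import math

def bateman_two_member(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for a two-member decay chain:
    parent (decay constant lam1) -> daughter (lam2) -> ...
    Returns (parent, daughter) inventories at time t, assuming
    the daughter is initially absent."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2

# Hypothetical decay constants (1/yr), purely for illustration.
parent, daughter = bateman_two_member(1.0, 0.1, 0.02, t=10.0)
```

A transport code extends the same idea by adding advection, partitioning and solubility terms to each nuclide's balance equation.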
An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans
Directory of Open Access Journals (Sweden)
Elizaveta Zabolotskikh
2014-03-01
In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the atmosphere-ocean system under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments: the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) onboard the Aqua satellite and the Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with the satellite measurements, was used for the comparison. We found the best model combination, providing the least root mean square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained adjustments to the simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data to remove model/calibration inconsistencies. We demonstrated the application of the model through the development of a new algorithm for sea surface wind speed retrieval from AMSR-E data.
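The model-combination search described above reduces to computing an error metric per combination and taking the minimum. A minimal sketch with hypothetical brightness temperatures (the model names and values are invented for illustration):

```python
import math

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Hypothetical measured BTs (K) and simulations per model combination.
measured = [150.2, 201.5, 178.9]
simulations = {
    ("vapor_A", "emis_X"): [151.0, 203.0, 180.1],
    ("vapor_A", "emis_Y"): [149.9, 201.1, 179.2],
    ("vapor_B", "emis_X"): [155.3, 206.8, 183.0],
}
best = min(simulations, key=lambda c: rmse(simulations[c], measured))
```

The averaged simulation-minus-measurement differences that remain for the winning combination play the role of the calibration adjustments mentioned in the abstract.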
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
Energy Technology Data Exchange (ETDEWEB)
Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-10-31
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg
2013-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Sander, P Martin
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity as well as the absence of mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
Recent updates in the aerosol component of the C-IFS model run by ECMWF
Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes
2017-04-01
The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts five species: dust, sea-salt, black carbon, organic matter and sulfate. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming developments. It will also present the impact of these changes as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach makes it possible to represent the anthropogenic contribution to SOA production; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with two bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern
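A multiplicative global mass fixer of the kind mentioned above can be sketched in a few lines: after advection, the tracer field is rescaled so that its global integral matches the pre-advection mass. The field values and cell masses below are invented for illustration:

```python
def apply_mass_fixer(mixing_ratios, cell_masses, reference_mass):
    """Rescale a tracer field so its global integral matches the
    pre-advection reference mass (a simple multiplicative fixer)."""
    total = sum(q * m for q, m in zip(mixing_ratios, cell_masses))
    return [q * reference_mass / total for q in mixing_ratios]

# Toy field that spuriously gained ~2% mass during advection;
# air masses (kg) and mixing ratios (kg/kg) are invented.
q = [1.02e-9, 2.04e-9, 0.51e-9]
masses = [1.0e15, 1.2e15, 0.8e15]
fixed = apply_mass_fixer(q, masses, reference_mass=3.8e6)
```

Operational fixers are usually more sophisticated (e.g. weighting the correction by local tendencies), but the budget-closing effect is the same.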
An updated fracture-flow model for total-system performance assessment of Yucca Mountain
Energy Technology Data Exchange (ETDEWEB)
Gauthier, J.H. [SPECTRA Research Institute, Albuquerque, NM (United States)
1994-12-31
Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.
Institute of Scientific and Technical Information of China (English)
Chunying Zhang; Sun Chen; Fang Wu; Kai Song
2015-01-01
To overcome the large time-delay in measuring the hardness of mixed rubber, rheological parameters were used to predict the hardness. A novel Q-based model updating strategy was proposed as a universal platform to track time-varying properties. Using a few selected support samples to update the model, the strategy can dramatically reduce storage cost and overcome the adverse influence of low signal-to-noise-ratio samples. Moreover, it can be applied to any statistical process monitoring system without drastic changes, which is practical for industrial applications. As examples, the Q-based strategy was integrated with three popular algorithms (partial least squares (PLS), recursive PLS (RPLS), and kernel PLS (KPLS)) to form novel regression methods: QPLS, QRPLS and QKPLS, respectively. Applications to predicting mixed-rubber hardness at a large-scale tire plant in east China confirm the theoretical considerations.
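The Q statistic underlying the strategy above is the squared prediction error of a sample against a latent-variable model; samples with large Q are poorly explained by the current model and are natural candidates for support samples. A minimal one-component sketch (the loading vector and samples are illustrative, not from the paper):

```python
def q_statistic(x, p):
    """Squared prediction error (Q) of sample x with respect to a
    one-component latent model with unit loading vector p:
    Q = ||x - (x . p) p||^2."""
    score = sum(xi * pi for xi, pi in zip(x, p))
    return sum((xi - score * pi) ** 2 for xi, pi in zip(x, p))

# Unit loading along the first axis (illustrative only): the second
# coordinate is entirely unexplained residual.
p = [1.0, 0.0]
q_out = q_statistic([3.0, 4.0], p)
```

In a PLS setting, p would come from the fitted loadings and a control limit on Q would decide which incoming samples trigger a model update.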
Indian Academy of Sciences (India)
J C Fu; M H Hsu; Y Duann
2016-02-01
Flood is the worst weather-related hazard in Taiwan because of steep terrain and storms. Tropical storms often result in disastrous flash floods. Providing reliable forecasts of water stages in rivers is indispensable for proper action in emergency response during floods. A river hydraulic model based on dynamic wave theory, using an implicit finite-difference method, is developed with river-roughness updating for flash flood forecasting. An artificial neural network (ANN) is employed to update the roughness of rivers in accordance with the observed river stages at each time-step of the flood-routing process. Several typhoon events at the Tamsui River are utilized to evaluate the accuracy of flood forecasting. The results show that the adaptive roughness n-values for the river hydraulic model can provide a better flow state for subsequent forecasting at significant locations and for longitudinal profiles along rivers.
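The roughness-updating step can be sketched as a corrector that nudges Manning's n whenever simulated and observed stages disagree. The proportional rule below merely stands in for the paper's ANN; the gain and bounds are invented for illustration:

```python
def update_roughness(n_value, observed_stage, simulated_stage,
                     gain=0.05, n_min=0.01, n_max=0.2):
    """Nudge Manning's n toward reducing the stage error: higher
    roughness raises simulated stages, so under-prediction of the
    observed stage increases n (and vice versa), within bounds.
    This simple proportional rule stands in for the paper's ANN."""
    n_new = n_value + gain * (observed_stage - simulated_stage)
    return min(max(n_new, n_min), n_max)

n = update_roughness(0.035, observed_stage=4.2, simulated_stage=3.9)
```

At each routing time-step the updated n feeds back into the dynamic-wave solver before the next forecast is issued.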
Comparison of Hugoniots calculated for aluminum in the framework of three quantum-statistical models
Kadatskiy, Maxim A
2015-01-01
The results of calculations of thermodynamic properties of aluminum under shock compression in the framework of the Thomas-Fermi model, the Thomas-Fermi model with quantum and exchange corrections and the Hartree-Fock-Slater model are presented. The influences of the thermal motion and the interaction of ions are taken into account in the framework of three models: the ideal gas, the one-component plasma and the charged hard spheres. Calculations are performed in the pressure range from 1 to 10^7 GPa. Calculated Hugoniots are compared with available experimental data.
Comparison of Hugoniots calculated for aluminum in the framework of three quantum-statistical models
Kadatskiy, M. A.; Khishchenko, K. V.
2015-11-01
The results of calculations of thermodynamic properties of aluminum under shock compression in the framework of the Thomas-Fermi model, the Thomas-Fermi model with quantum and exchange corrections and the Hartree-Fock-Slater model are presented. The influences of the thermal motion and the interaction of ions are taken into account in the framework of three models: the ideal gas, the one-component plasma and the charged hard spheres. Calculations are performed in the pressure range from 1 to 10^7 GPa. Calculated Hugoniots are compared with available experimental data.
Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)
Energy Technology Data Exchange (ETDEWEB)
Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-11-01
This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs: the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m^2 ± $6/m^2. The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m^2 if the aluminum alloy cost was $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m^2, which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.
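The quoted sensitivity to aluminum price is consistent with a simple linear cost model anchored on the two price points in the report ($170/m^2 at $1.03/lb, rising to nearly $210/m^2 at $1.70/lb). A sketch of that arithmetic:

```python
def installed_cost(al_price, base_cost=170.0, base_price=1.03,
                   sensitivity=(210.0 - 170.0) / (1.70 - 1.03)):
    """Linear installed-cost model ($/m^2) versus aluminum alloy
    price ($/lb), anchored on the two points quoted in the report."""
    return base_cost + sensitivity * (al_price - base_price)

cost_high = installed_cost(1.70)
```

The implied slope is roughly $60/m^2 per $/lb of aluminum, which is why tracking the commodities market matters for cost updates over time.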
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamics response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong, including the overall approach and modeling capabilities, ranging from force generation with contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
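Interference-based contact of the sort Pong implements pairs collision detection (a penetration depth between primitives) with a force law. A minimal penalty-force sketch for two spheres; the stiffness value is illustrative, and Pong's actual force model is not specified here:

```python
import math

def sphere_contact_force(c1, r1, c2, r2, k=1.0e5):
    """Penalty contact force magnitude between two spheres:
    proportional to penetration depth when they interfere,
    zero otherwise. k (N/m) is an illustrative stiffness."""
    penetration = (r1 + r2) - math.dist(c1, c2)
    return k * penetration if penetration > 0 else 0.0

f = sphere_contact_force((0.0, 0.0, 0.0), 0.5, (0.8, 0.0, 0.0), 0.5)
```

A multibody engine linked to such a contact engine would apply this force (plus damping and friction terms) along the line of centers at each integration step.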
Update on single-screw expander geometry model integrated into an open-source simulation tool
Ziviani, D.; Bell, I. H.; De Paepe, M.; van den Broek, M.
2015-08-01
In this paper, a mechanistic steady-state model of a single-screw expander is described with emphasis on the geometric description. Insights into the calculation of the main parameters and the definition of the groove profile are provided. Additionally, the adopted chamber model is discussed. The model has been implemented by means of the open-source software PDSim (Positive Displacement SIMulation), written in the Python language, and the solution algorithm is described. The single-screw expander model is validated with a set of steady-state measurement points collected from an 11 kWe organic Rankine cycle test-rig with SES36 and R245fa as working fluids. The overall performance and behavior of the expander are also further analyzed.
Cholewa, Jason; Guimarães-Ferreira, Lucas; da Silva Teixeira, Tamiris; Naimo, Marshall Alan; Zhi, Xia; de Sá, Rafaele Bis Dal Ponte; Lodetti, Alice; Cardozo, Mayara Quadros; Zanchi, Nelo Eidy
2014-09-01
Human muscle hypertrophy brought about by voluntary exercise in laboratory conditions is the most common way to study resistance exercise training, especially because of its reliability, stimulus control and easy application to resistance training exercise sessions at fitness centers. However, because of the complexity of blood factors and organs involved, invasive data are difficult to obtain in human exercise training studies due to the integration of several organs, including adipose tissue, liver, brain and skeletal muscle. In contrast, studying skeletal muscle remodeling in animal models is easier to perform as the organs can be easily obtained after euthanasia; however, not all models of resistance training in animals display a robust capacity to hypertrophy the desired muscle. Moreover, some models of resistance training rely on voluntary effort, which complicates the interpretation of results since voluntary capacity is theoretically impossible to measure in rodents. With this information in mind, we review the modalities used to simulate resistance training in animals in order to present to investigators the benefits and risks of different animal models capable of provoking skeletal muscle hypertrophy. Our second objective is to help investigators analyze and select the experimental resistance training model that best fits the research question and desired endpoints.
A very simple dynamic soil acidification model for scenario analyses and target load calculations
Posch, M.; Reinds, G.J.
2009-01-01
A very simple dynamic soil acidification model, VSD, is described, which has been developed as the simplest extension of steady-state models for critical load calculations and with an eye on regional applications. The model requires only a minimum set of inputs (compared to more detailed models) and
Germanas, D.; Stepšys, A.; Mickevičius, S.; Kalinauskas, R. K.
2017-06-01
This is a new version of the HOTB code designed to calculate three and four particle harmonic oscillator (HO) transformation brackets and their matrices. The new version uses the OpenMP parallel communication standard for calculations of harmonic oscillator transformation brackets. A package of Fortran code is presented. Calculation time of large matrices, orthogonality conditions and array of coefficients can be significantly reduced using effective parallel code. Other functionalities of the original code (for example calculation of single harmonic oscillator brackets) have not been modified.
Recent updates in the aerosol model of C-IFS and their impact on skill scores
Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier
2016-04-01
The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts five species: dust, sea-salt, black carbon, organic matter and sulfate. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce upcoming upgrades. It will also present evaluations of these changes as scores against AERONET observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. This modification has a negligible impact for most species except black carbon and organic matter; it makes it possible to close the budgets between sources and sinks in the diagnostics. Dust emissions have been tuned to favor the emission of large particles, which were under-represented. This brought an overall decrease of the burden of dust aerosol and improved scores, especially close to source regions. Biomass-burning aerosols are now emitted at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). This brought a small increase in biomass-burning aerosols and a better representation of some large fire events. Lastly, SO2 emissions are now provided by the MACCity dataset instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset, the use of which brought significant improvements of the forecasts against observations. Upcoming upgrades of the aerosol model of C-IFS consist mainly of an overhaul of the representation of secondary aerosols. Secondary Organic Aerosol (SOA) production will be dynamically estimated by scaling it on CO fluxes. This approach has been
Directory of Open Access Journals (Sweden)
Alhassid Y.
2014-04-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.
Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H
2014-01-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.
Energy Technology Data Exchange (ETDEWEB)
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Joris Mulder; Herbert Hoijtink; Christiaan de Leeuw
2012-01-01
This paper discusses a Fortran 90 program referred to as BIEMS (Bayesian inequality and equality constrained model selection) that can be used for calculating Bayes factors of multivariate normal linear models with equality and/or inequality constraints between the model parameters versus a model containing no constraints, which is referred to as the unconstrained model. The prior that is used under the unconstrained model is the conjugate expected-constrained posterior prior and the prior un...
Energy Technology Data Exchange (ETDEWEB)
Considine, D.B.; Douglass, A.R.; Jackman, C.H. [Applied Research Corp., Landover, MD (United States)]|[NASA, Goddard Space Flight Center, Greenbelt, MD (United States)
1995-02-01
The Goddard Space Flight Center (GSFC) two-dimensional model of stratospheric photochemistry and dynamics has been used to calculate the O3 response to stratospheric aircraft (high-speed civil transport, HSCT) emissions. The sensitivity of the model O3 response was examined for systematic variations of five parameters and two reaction rates over a wide range, expanding on calculations by various modeling groups for the NASA High Speed Research Program and the World Meteorological Organization. In all, 448 model runs were required to test the effects of variations in the latitude, altitude, and magnitude of the aircraft emissions perturbation, the background chlorine levels, the background sulfate aerosol surface area densities, and the rates of two key reactions. No deviation from previous conclusions concerning the response of O3 to HSCTs was found in this more exhaustive exploration of parameter space. Maximum O3 depletions occur for high-altitude, low-latitude HSCT perturbations. Small increases in global total O3 can occur for low-altitude, high-latitude injections. Decreasing aerosol surface area densities and background chlorine levels increases the sensitivity of model O3 to the HSCT perturbations. The location of the aircraft emissions is the most important determinant of the model response. The response to the location of the HSCT emissions is not changed qualitatively by changes in background chlorine and aerosol loading. The response is also not very sensitive to changes in the rates of the reactions NO + HO2 yields NO2 + OH and HO2 + O3 yields OH + 2O2 over the limits of their respective uncertainties. Finally, levels of lower stratospheric HOx generally decrease when the HSCT perturbation is included, even though there are large increases in H2O due to the perturbation.
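A systematic multi-parameter sweep like the 448-run study above is conveniently enumerated as a Cartesian product of parameter levels. A sketch with hypothetical levels (the actual parameters' levels and counts in the study differ):

```python
import itertools

# Hypothetical parameter levels for a factorial sensitivity sweep
# (the actual study varied five parameters and two reaction rates
# over different levels; these are for illustration only).
levels = {
    "latitude": ["low", "mid", "high"],
    "altitude": ["17km", "20km"],
    "chlorine": ["2ppbv", "3ppbv"],
    "aerosol_area": ["background", "volcanic"],
}
runs = [dict(zip(levels, combo))
        for combo in itertools.product(*levels.values())]
```

Each entry of `runs` is one model configuration; the run count is simply the product of the level counts, which is how a handful of parameters quickly yields hundreds of runs.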
Directory of Open Access Journals (Sweden)
Michal Fusek
2016-11-01
Precipitation records from six stations of the Czech Hydrometeorological Institute were subjected to statistical analysis with the objectives of updating the intensity–duration–frequency (IDF) curves, by applying extreme value distributions, and comparing the updated curves against those produced by an empirical procedure in 1958. Another objective was to investigate differences between the two sets of curves, which could be explained by such factors as different measuring instruments, measuring station altitudes and data analysis methods. It has been shown that the differences between the two sets of IDF curves are significantly influenced by the chosen method of data analysis.
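Updating IDF curves with an extreme value distribution amounts to fitting annual maxima and reading off return levels. A sketch using the Gumbel (extreme-value type I) quantile function, with hypothetical parameters (the paper's fitted distributions and values may differ):

```python
import math

def gumbel_return_level(mu, beta, return_period):
    """Quantile of the Gumbel (extreme-value type I) distribution:
    the intensity exceeded on average once per return_period years."""
    p_non_exceed = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p_non_exceed))

# Hypothetical Gumbel parameters for 60-min annual maxima (mm/h).
i100 = gumbel_return_level(mu=25.0, beta=8.0, return_period=100)
```

Repeating this over several durations (5 min, 60 min, 24 h, ...) and return periods traces out the IDF curves; the choice of distribution and fitting method is exactly the kind of analysis decision the paper found influential.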
Energy Technology Data Exchange (ETDEWEB)
Augustine, C.
2011-10-01
The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.
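The LCOE estimates referred to above follow the usual pattern of annualized capital plus operating cost divided by generation. A simplified sketch with hypothetical plant numbers (not NREL's figures):

```python
def lcoe(capital_cost, fixed_charge_rate, annual_om, annual_mwh):
    """Simplified levelized cost of electricity ($/MWh):
    annualized capital plus O&M, divided by annual generation."""
    return (capital_cost * fixed_charge_rate + annual_om) / annual_mwh

# Hypothetical 50 MW geothermal plant at 90% capacity factor.
annual_mwh = 50 * 8760 * 0.9
cost = lcoe(capital_cost=200e6, fixed_charge_rate=0.1,
            annual_om=5e6, annual_mwh=annual_mwh)
```

Evaluating this per resource site and sorting by cost against cumulative capacity is what produces a supply curve.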
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
Pasyanos, M. E.; Masters, G.; Laske, G.; Ma, Z.
2012-12-01
Models such as CRUST2.0 (Bassin et al., 2000) have proven very useful to many seismic studies on regional, continental, and global scales. We have developed an updated, higher resolution model called LITHO1.0 that extends deeper to include the lithospheric lid, and includes mantle anisotropy, potentially making it more useful for a wider variety of applications. The model moves away from the crustal types relied on heavily in CRUST5.1 (Mooney et al., 1998) toward a more data-driven parameterization. This is accomplished by performing a targeted grid search with multiple data inputs. We seek the most plausible model that fits multiple constraints, including updated sediment and crustal thickness models, upper mantle velocities derived from travel times, and surface wave dispersion. The latter comes from a new, very large, global surface wave dataset built using a new, efficient measurement technique that employs cluster analysis (Ma et al., 2012), and includes the group and phase velocities of both Love and Rayleigh waves. We will discuss datasets and methodology, highlight significant features of the model, and provide detailed information on the availability of the model in various formats.
Cooney, Gregory; Jamieson, Matthew; Marriott, Joe; Bergerson, Joule; Brandt, Adam; Skone, Timothy J
2017-01-17
The National Energy Technology Laboratory produced a well-to-wheels (WTW) life cycle greenhouse gas analysis of petroleum-based fuels consumed in the U.S. in 2005, known as the NETL 2005 Petroleum Baseline. This study uses a set of engineering-based, open-source models combined with publicly available data to calculate baseline results for 2014. The increase between the 2005 baseline and the 2014 results presented here (e.g., 92.4 vs 96.2 g CO2e/MJ gasoline, +4.1%) is due to changes both in the modeling platform and in the U.S. petroleum sector. An updated result for 2005 was calculated to minimize the effect of the change in modeling platform, and emissions for gasoline in 2014 were about 2% lower than in 2005 (98.1 vs 96.2 g CO2e/MJ gasoline). The same methods were utilized to forecast emissions from fuels out to 2040, indicating maximum changes from the 2014 gasoline result between +2.1% and -1.4%. The changing baseline values lead to potential compliance challenges with frameworks such as the Energy Independence and Security Act (EISA) Section 526, which states that Federal agencies should not purchase alternative fuels unless their life cycle GHG emissions are less than those of conventionally produced, petroleum-derived fuels.
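The percentage changes quoted above can be reproduced directly from the carbon intensities given in the abstract:

```python
def pct_change(old, new):
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100.0

# 2005 baseline (original platform) vs. 2014 result: about +4.1 %
print(round(pct_change(92.4, 96.2), 1))   # prints 4.1
# Updated 2005 result vs. 2014, same platform: about 2 % lower in 2014
print(round(pct_change(98.1, 96.2), 1))   # prints -1.9
```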
Qiu, Lei; Yuan, Shenfang; Chang, Fu-Kuo; Bao, Qiao; Mei, Hanfei
2014-12-01
Structural health monitoring technology for aerospace structures has gradually turned from fundamental research to practical implementations. However, real aerospace structures work under time-varying conditions that introduce uncertainties to signal features that are extracted from sensor signals, giving rise to difficulty in reliably evaluating the damage. This paper proposes an online updating Gaussian Mixture Model (GMM)-based damage evaluation method to improve damage evaluation reliability under time-varying conditions. In this method, Lamb-wave-signal variation indexes and principal component analysis (PCA) are adopted to obtain the signal features. A baseline GMM is constructed on the signal features acquired under time-varying conditions when the structure is in a healthy state. By adopting the online updating mechanism based on a moving feature sample set and inner probability structural reconstruction, the probability structures of the GMM can be updated over time with new monitoring signal features to track the damage progress online continuously under time-varying conditions. This method can be implemented without any physical model of damage or structure. A real aircraft wing spar, which is an important load-bearing structure of an aircraft, is adopted to validate the proposed method. The validation results show that the method is effective for edge crack growth monitoring of the wing spar bolt holes under time-varying bolt tightness.
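A much-simplified sketch of the moving-feature-sample-set idea: a single Gaussian stands in for the full GMM, and a Mahalanobis distance to the windowed baseline serves as the damage index. The 2-D features, window size and test points are invented.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Baseline features from the healthy structure under time-varying conditions
baseline = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
window = deque(baseline, maxlen=200)        # moving feature sample set

def damage_index(x, samples):
    """Mahalanobis distance of a new feature to the windowed baseline model."""
    S = np.array(list(samples))
    mu = S.mean(axis=0)
    cov = np.cov(S.T)
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Stream new healthy features: the window, and hence the model, updates online
for x in rng.normal([0.0, 0.0], 0.5, size=(50, 2)):
    window.append(x)

d_healthy = damage_index(np.array([0.2, -0.1]), window)   # near the baseline cloud
d_damage = damage_index(np.array([3.0, 3.0]), window)     # far outlier, damage-like
print(round(d_healthy, 2), round(d_damage, 2))
```

The paper's method additionally updates the mixture's component structure; the window-refit pattern above is only the simplest version of that mechanism.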
Improvements on Calculation Model of Theoretical Combustion Temperature in a Blast Furnace
Institute of Scientific and Technical Information of China (English)
WU Sheng-li; LIU Cheng-song; FU Chang-liang; XU Jian; KOU Ming-yin
2011-01-01
On the basis of the existing originally modified calculation models of theoretical combustion temperature (TCT), some factors, such as the combustion ratio of pulverized coal injection (PCI), the decomposition heat of PCI, and the heat consumed by the reduction of SiO2 in ash at high temperature, were amended and improved to put forward a more comprehensive model for calculating TCT. The influence of each improvement on TCT was studied and the results were compared with those of the traditional model and the originally modified model, which showed that the present model can reflect the thermal state of a hearth more effectively.
Reliability analysis and updating of deteriorating systems with subset simulation
DEFF Research Database (Denmark)
Schneider, Ronald; Thöns, Sebastian; Straub, Daniel
2017-01-01
Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...
Model operator approach to the Lamb shift calculations in relativistic many-electron atoms
Shabaev, V M; Yerokhin, V A
2013-01-01
A model operator approach to calculations of the QED corrections to energy levels in relativistic many-electron atomic systems is developed. The model Lamb shift operator is represented by a sum of local and nonlocal potentials which are defined using the results of ab initio calculations of the diagonal and nondiagonal matrix elements of the one-loop QED operator with H-like wave functions. The model operator can be easily included in any calculations based on the Dirac-Coulomb-Breit Hamiltonian. Efficiency of the method is demonstrated by comparison of the model QED operator results for the Lamb shifts in many-electron atoms and ions with exact QED calculations.
Fast and accurate calculation of dilute quantum gas using Uehling-Uhlenbeck model equation
Yano, Ryosuke
2017-02-01
The Uehling-Uhlenbeck (U-U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U-U model equation. DSMC analysis based on the U-U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U-U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green-Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
Abu Husain, Nurulakmar; Haddad Khodaparast, Hamed; Ouyang, Huajiang
2012-10-01
Parameterisation in stochastic problems is a major issue in real applications. In addition, complexity of test structures (for example, those assembled through laser spot welds) is another challenge. The objective of this paper is two-fold: (1) stochastic uncertainty in two sets of different structures (i.e., simple flat plates, and more complicated formed structures) is investigated to observe how updating can be adequately performed using the perturbation method, and (2) stochastic uncertainty in a set of welded structures is studied by using two parameter weighting matrix approaches. Different combinations of parameters are explored in the first part; it is found that geometrical features alone cannot converge the predicted outputs to the measured counterparts, hence material properties must be included in the updating process. In the second part, statistical properties of experimental data are considered and updating parameters are treated as random variables. Two weighting approaches are compared; results from one of the approaches are in very good agreement with the experimental data and excellent correlation between the predicted and measured covariances of the outputs is achieved. It is concluded that proper selection of parameters in solving stochastic updating problems is crucial. Furthermore, appropriate weighting must be used in order to obtain excellent convergence between the predicted mean natural frequencies and their measured data.
Hot DA white dwarf model atmosphere calculations: Including improved Ni PI cross sections
Preval, S P; Badnell, N R; Hubeny, I; Holberg, J B
2016-01-01
To calculate realistic models of objects with Ni in their atmospheres, accurate atomic data for the relevant ionization stages need to be included in model atmosphere calculations. In the context of white dwarf stars, we investigate the effect that changing the Ni IV-VI bound-bound and bound-free atomic data has on model atmosphere calculations. Models including photoionization cross-sections (PICS) calculated with AUTOSTRUCTURE show significant flux attenuation of up to ~80% shortward of 180 Å in the EUV region compared to a model using hydrogenic PICS. Comparatively, models including a larger set of Ni transitions left the EUV, UV, and optical continua unaffected. We use models calculated with permutations of this atomic data to test for potential changes to measured metal abundances of the hot DA white dwarf G191-B2B. Models including AUTOSTRUCTURE PICS were found to change the abundances of N and O by as much as ~22% compared to models using hydrogenic PICS, but heavier species were relatively unaffected.
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
DEFF Research Database (Denmark)
Yuan, Hao; You, Zhenjiang; Shapiro, Alexander
2013-01-01
Colloidal-suspension flow in porous media is modelled simultaneously by the large-scale population balance equations and by the microscale network model. The phenomenological parameter of the correlation length in the population balance model is determined from the network modelling. It is found that the correlation length in the population balance model depends on the particle size. This dependency, calculated by the two-dimensional network, has the same tendency as that obtained from the laboratory tests in engineered porous media.
Model-Based Radar Power Calculations for Ultra-Wideband (UWB) Synthetic Aperture Radar (SAR)
2013-06-01
Model-based radar power calculations for ultra-wideband (UWB) synthetic aperture radar (SAR) are presented for performance assessment in complex scenarios, among them ground-penetrating radar and forward-looking radar for landmine and improvised explosive device detection (Traian Dogaru, ARL-TN-0548, June 2013).
Study on comparison of different methods to calculating sensitivity index of Jensen model
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
The Real-coded Accelerating Genetic Algorithm (RAGA) and the Chaos Algorithm (CA) were used to solve for the sensitivity index of the Jensen model, one of the models of crop water production functions. Comparison with the outcome of Least Squares Regression (LSR) showed that RAGA not only achieved higher accuracy and effectiveness but also saved calculation time. The authors provide new, effective methods for calculating the sensitivity index of crop water production functions.
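For context, the least-squares baseline the metaheuristics are compared against follows from taking logarithms of the Jensen model, Ya/Ym = prod_i (ETa_i/ETm_i)^lambda_i, which is linear in the sensitivity indices lambda_i. A synthetic sketch with invented stage counts and data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Jensen model: Ya/Ym = prod_i (ETa_i / ETm_i) ** lambda_i, one index per stage
true_lam = np.array([0.15, 0.35, 0.50, 0.20])        # assumed sensitivity indices
et_ratio = rng.uniform(0.6, 1.0, size=(60, 4))       # ETa/ETm per stage, 60 seasons

log_x = np.log(et_ratio)
log_y = log_x @ true_lam + rng.normal(0, 0.005, 60)  # log(Ya/Ym) with small noise

# Least-squares estimate of the sensitivity indices (the LSR baseline)
lam_hat, *_ = np.linalg.lstsq(log_x, log_y, rcond=None)
print(np.round(lam_hat, 2))
```

RAGA and CA attack the same estimation problem by direct search, which matters when the objective is less well-behaved than this idealized log-linear case.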
Energy Technology Data Exchange (ETDEWEB)
Carlen, Ida; Nikolopoulos, Anna; Isaeus, Martin (AquaBiota Water Research, Stockholm (SE))
2007-06-15
GIS grids (maps) of marine parameters were created using point data from previous site investigations in the Forsmark and Oskarshamn areas. The proportion of global radiation reaching the sea bottom in Forsmark and Oskarshamn was calculated in ArcView, using Secchi depth measurements and the digital elevation models for the respective area. The number of days per year when the incoming light exceeds 5 MJ/m2 at the bottom was then calculated using the result of the previous calculations together with measured global radiation. Existing modelled grid-point data on bottom and pelagic temperature for Forsmark were interpolated to create surface covering grids. Bottom and pelagic temperature grids for Oskarshamn were calculated using point measurements to achieve yearly averages for a few points and then using regressions with existing grids to create new maps. Phytoplankton primary production in Forsmark was calculated using point measurements of chlorophyll and irradiance, and a regression with a modelled grid of Secchi depth. Distribution of biomass of macrophyte communities in Forsmark and Oskarshamn was calculated using spatial modelling in GRASP, based on field data from previous surveys. Physical parameters such as those described above were used as predictor variables. Distribution of biomass of different functional groups of fish in Forsmark was calculated using spatial modelling based on previous surveys and with predictor variables such as physical parameters and results from macrophyte modelling. All results are presented as maps in the report. The quality of the modelled predictions varies as a consequence of the quality and amount of the input data, the ecology and knowledge of the predicted phenomena, and by the modelling technique used. A substantial part of the variation is not described by the models, which should be expected for biological modelling. Therefore, the resulting grids should be used with caution and with this uncertainty kept in mind. All
Analytical approach to calculation of response spectra from seismological models of ground motion
Safak, Erdal
1988-01-01
An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.
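The random-vibration step, obtaining an expected peak response from spectral moments of the response power spectrum, can be illustrated generically. The band-limited PSD, the duration and the use of Davenport's peak-factor approximation below are illustrative assumptions, not the paper's seismological spectrum.

```python
import numpy as np

# Assumed one-sided response PSD: band-limited white noise between 1 and 10 Hz
f = np.linspace(0.01, 20.0, 2000)                # frequency (Hz)
G = np.where((f >= 1.0) & (f <= 10.0), 1.0, 0.0)

w = 2.0 * np.pi * f                              # angular frequency (rad/s)
dw = w[1] - w[0]
m0 = np.sum(G) * dw                              # spectral moment of order 0
m2 = np.sum(w**2 * G) * dw                       # spectral moment of order 2

nu0 = np.sqrt(m2 / m0) / (2.0 * np.pi)           # mean zero-crossing rate (Hz)
T = 10.0                                         # assumed strong-motion duration (s)
n = nu0 * T                                      # expected number of cycles

# Davenport's approximation for the expected peak factor
g = np.sqrt(2.0 * np.log(n)) + 0.5772 / np.sqrt(2.0 * np.log(n))
peak = g * np.sqrt(m0)                           # expected peak = peak factor * rms
print(round(float(g), 2))
```

In the paper's setting the PSD comes from the seismological source model and the duration from magnitude, with corrections for nonstationarity and peak correlation; the moment-based peak estimate above is the core of the spectral-response calculation.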
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase with the associated finite element models, so that the cambers of the bridge during different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. The predicted cambers may therefore not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
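A minimal sketch of the Kriging-plus-Latin-hypercube idea: a toy one-output "FE model" and a simple Gaussian-kernel surrogate stand in for the bridge model. All functions, bounds and kernel settings are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for the expensive FE model: one deflection from two stiffness factors
def fe_deflection(theta):
    k1, k2 = theta
    return 1.0 / k1 + 0.5 / k2                 # invented toy relation

# 1) Latin hypercube design over the parameter box [0.5, 1.5]^2
sampler = qmc.LatinHypercube(d=2, seed=3)
X = qmc.scale(sampler.random(n=40), [0.5, 0.5], [1.5, 1.5])
y = np.array([fe_deflection(x) for x in X])

# 2) Kriging-style surrogate: Gaussian-kernel GP regression on the design
def kernel(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

y_mean = y.mean()
K = kernel(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y - y_mean)

def predict(x):
    return y_mean + float(kernel(np.atleast_2d(x), X) @ alpha)

# 3) Cheap surrogate search replaces repeated FE runs during updating
measured = fe_deflection(np.array([1.2, 0.8]))           # "measured" deflection
cand = qmc.scale(sampler.random(n=4000), [0.5, 0.5], [1.5, 1.5])
best = cand[np.argmin([(predict(c) - measured) ** 2 for c in cand])]
print(np.round(best, 2))
```

With a single response the updated parameters are not unique (any point on the matching level set fits equally well); in practice, deflections measured at many construction stages constrain the update.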
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
Calculation Model for Current-voltage Relation of Silicon Quantum-dots-based Nano-memory
Institute of Scientific and Technical Information of China (English)
YANG Hong-guan; DAI Da-kang; YU Biao; SHANG Lin-lin; GUO You-hong
2007-01-01
Based on the capacitive coupling formalism, an analytic model for calculating the drain currents of the quantum-dots floating-gate memory cell is proposed. Using this model, one can numerically calculate the drain currents in the linear, saturation and subthreshold regions of the device with or without charge stored on the floating dots. The read operation of an n-channel Si quantum-dots floating-gate nano-memory cell is discussed after calculating the drain currents versus the drain-to-source and control-gate voltages in the high and low threshold states respectively.
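The capacitive-coupling idea, that stored dot charge raises the threshold seen from the control gate by roughly Q/C, can be illustrated with a textbook square-law drain-current model. All parameter values below are assumed for illustration and are not those of the paper.

```python
# Square-law MOSFET drain current with a floating-gate threshold shift
Q_DOT = 1.6e-19 * 500          # charge stored on the dots: 500 electrons (assumed)
C_CG = 2.0e-16                 # control-gate to floating-gate capacitance (assumed, F)
VT0 = 0.6                      # threshold voltage with empty dots (V, assumed)
K = 2.0e-4                     # transconductance parameter (A/V^2, assumed)

def drain_current(vgs, vds, charged):
    vt = VT0 + (Q_DOT / C_CG if charged else 0.0)   # capacitive-coupling shift
    vov = vgs - vt                                  # overdrive voltage
    if vov <= 0:
        return 0.0                                  # (subthreshold leakage ignored)
    if vds < vov:
        return K * (vov - vds / 2.0) * vds          # linear (triode) region
    return 0.5 * K * vov ** 2                       # saturation region

# Read operation: the same bias yields two distinct currents encoding the bit
i_low_vt = drain_current(vgs=2.0, vds=1.0, charged=False)
i_high_vt = drain_current(vgs=2.0, vds=1.0, charged=True)
print(i_low_vt, i_high_vt)
```

The paper's model additionally covers the subthreshold region and the full capacitive network; this sketch only shows why the two threshold states separate the read currents.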
A Model for the Calculation of Velocity Reduction Behind A Plane Fishing Net
Institute of Scientific and Technical Information of China (English)
GUI Fu-kun; LI Yu-cheng; ZHAO Yun-peng; DONG Guo-hai
2006-01-01
A model for the calculation of velocity reduction behind a fishing net is proposed in this paper. Comparisons are made between the calculated results and experimental data. It is shown that by the application of the effective adjacent area coefficient of fluid flowing around a solid structure to the fishing net, the calculated results agree well with the experimental data. The model proposed in this paper can also be applied to the analysis of the velocity reduction within a fishing cage and can be introduced into the numerical simulation of the hydrodynamic behavior of fishing cages for the improvement of computational accuracy.
Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Laboratory]; Moller, Peter [Los Alamos National Laboratory]; Wilson, William B [Los Alamos National Laboratory]
2008-01-01
Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emission from an excited daughter nucleus after β decay to the granddaughter residual is calculated more accurately than in previous evaluations, including all the microscopic nuclear structure information, such as the Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with the evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Energy Technology Data Exchange (ETDEWEB)
Scheel, Joerg; Dib, Ramzi [Fachhochschule Giessen-Friedberg, Friedberg (Germany)]; Sassmannshausen, Achim [DB Energie GmbH, Frankfurt (Main) (Germany). Arbeitsgebiet Bahnstromleitungen Energieerzeugungs- und Uebertragungssysteme]; Riedl, Markus [Eon Netz GmbH, Bayreuth (Germany). Systemtechnik Leitungen]
2010-12-13
Increasingly, high-temperature conductors are used in high-voltage grids. Beyond a given temperature level, their sag cannot be calculated accurately by conventional simple linear methods. The contribution investigates the behaviour of composite conductors at high operating temperatures and its influence on the sag, and presents a more accurate, bilinear calculation method. (orig.)
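A minimal sketch of why a bilinear treatment matters: below the knee point a composite conductor expands with the combined coefficient, above it (aluminium tension-free) with the steel core's smaller one, and the sag follows the length surplus. All values are assumed for illustration; a real method must also account for tension and conductor elasticity.

```python
import math

SPAN = 300.0             # span length (m, assumed)
ALPHA_COMPOSITE = 19e-6  # expansion coefficient below the knee point (1/K, assumed)
ALPHA_CORE = 11.5e-6     # steel-core-only coefficient above the knee point (assumed)
T_KNEE = 80.0            # knee-point temperature (deg C, assumed)
L0 = 300.5               # conductor length at 20 deg C (m, assumed)

def conductor_length(temp):
    """Bilinear thermal elongation: composite below the knee, core-only above."""
    if temp <= T_KNEE:
        return L0 * (1 + ALPHA_COMPOSITE * (temp - 20.0))
    l_knee = L0 * (1 + ALPHA_COMPOSITE * (T_KNEE - 20.0))
    return l_knee * (1 + ALPHA_CORE * (temp - T_KNEE))

def sag(temp):
    """Parabolic small-sag approximation from the length surplus over the span."""
    lc = conductor_length(temp)
    return math.sqrt(3.0 * SPAN * (lc - SPAN) / 8.0)

print(round(sag(40.0), 2), round(sag(120.0), 2))
```

A purely linear extrapolation of the low-temperature branch would overstate the sag growth above the knee point, which is the error the bilinear method removes.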
Comparison of results of experimental research with numerical calculations of a model one-sided seal
Directory of Open Access Journals (Sweden)
Joachimiak Damian
2015-06-01
The paper presents the results of experimental and numerical research on a model segment of a labyrinth seal for different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during measurement and calculated numerically for a model seal segment at different levels of wear.
Discovering Plate Boundaries Update: Builds Content Knowledge and Models Inquiry-based Learning
Sawyer, D. S.; Pringle, M. S.; Henning, A. T.
2009-12-01
Discovering Plate Boundaries (DPB) is a jigsaw-structured classroom exercise in which students explore the fundamental datasets from which plate boundary processes were discovered. The exercise has been widely used in the past ten years as a classroom activity for students in fifth grade through high school, and for Earth Science major and general education courses in college. Perhaps more importantly, the exercise has been used extensively for professional development of in-service and pre-service K-12 science teachers, where it simultaneously builds content knowledge in plate boundary processes (including natural hazards), models an effective data-rich, inquiry-based pedagogy, and provides a set of lesson plans and materials which teachers can port directly into their own classroom (see Pringle, et al, this session for a specific example). DPB is based on 4 “specialty” data maps, 1) earthquake locations, 2) modern volcanic activity, 3) seafloor age, and 4) topography and bathymetry, plus a fifth map of (undifferentiated) plate boundary locations. The jigsaw is structured so that students are first split into one of the four “specialties,” then re-arranged into groups with each of the four specialties to describe the boundaries of a particular plate. We have taken the original DPB materials, used the latest digital data sets to update all the basic maps, and expanded the opportunities for further student and teacher learning. The earthquake maps now cover the recent period including the deadly Banda Aceh event. The topography/bathymetry map now has global coverage and uses ice-free elevations, which can, for example, extend to further inquiry about mantle viscosity and loading processes (why are significant portions of the bedrock surface of Greenland and Antarctica below sea level?). The volcanic activity map now differentiates volcano type and primary volcanic lithology, allowing a more elaborate understanding of volcanism at different plate boundaries
Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.
2015-01-01
The Earth's land-surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly-refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.
Calculation of energy spectrum of $^{12}$C isotope with modified Yukawa potential by cluster models
Indian Academy of Sciences (India)
MOHAMMAD REZA SHOJAE; NAFISEH ROSHAN BAKHT
2016-10-01
In this paper, we have calculated the energy spectrum of the $^{12}$C isotope in two cluster models, the $3\alpha$ cluster model and the $^8$Be + $\alpha$ cluster model. We use the modified Yukawa potential for the interaction between the clusters and solve the Schrödinger equation using the Nikiforov–Uvarov method to calculate the energy spectrum. Then, we increase the accuracy by adding spin-orbit coupling and the tensor force and treat them by perturbation theory in both models. Finally, the calculated results for both models are compared with each other and with the experimental data. The results show that the isotope $^{12}$C should be considered as a three-$\alpha$ cluster and that the modified Yukawa potential is suitable for cluster interactions.
Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto
2017-09-08
Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% (n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
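The updating scheme described above, a baseline AGB map advanced each year by the simulated annual biomass increment, reduces to a simple accumulation. A sketch with invented stand identifiers and values:

```python
# Baseline AGB (kg/m^2) updated yearly by adding the NPP-derived increment (ABI)
baseline_agb = {"stand_a": 6.0, "stand_b": 3.2}          # 2008 baseline (invented)
annual_abi = {                                           # ABI per year (invented)
    2009: {"stand_a": 0.21, "stand_b": 0.15},
    2010: {"stand_a": 0.19, "stand_b": 0.16},
    2011: {"stand_a": 0.22, "stand_b": 0.14},
    2012: {"stand_a": 0.20, "stand_b": 0.17},
}

agb = dict(baseline_agb)
history = {2008: dict(agb)}
for year in sorted(annual_abi):
    for stand, abi in annual_abi[year].items():
        agb[stand] += abi                                # AGB_t = AGB_(t-1) + ABI_t
    history[year] = dict(agb)

print(round(history[2012]["stand_a"], 2))                # prints 6.82
```

In the paper the increments come per pixel from BEPS-simulated NPP rather than from fixed numbers, but the per-year bookkeeping is the same.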
Multi-Scale Thermohydrologic Model Sensitivity-Study Calculations in Support of the SSPA
Energy Technology Data Exchange (ETDEWEB)
Glascoe, L G; Buscheck, T A; Loosmore, G A; Sun, Y
2001-12-20
The purpose of this calculation report is to document the thermohydrologic (TH) model calculations performed for the Supplemental Science and Performance Analysis (SSPA), Volume 1, Section 5 and Volume 2 (BSC 2001d [DIRS 155950], BSC 2001e [DIRS 154659]). The calculations are documented here in accordance with AP-3.12Q REV0 ICN4 [DIRS 154418]. The Technical Working Plan (TWP) for this document is TWP-NGRM-MD-000015 Real. These TH calculations were primarily conducted using three model types: (1) the Multiscale Thermohydrologic (MSTH) model, (2) the line-averaged-heat-source, drift-scale thermohydrologic (LDTH) model, and (3) the discrete-heat-source, drift-scale thermal (DDT) model. These TH-model calculations were conducted to improve the implementation of the scientific conceptual model, quantify previously unquantified uncertainties, and evaluate how a lower-temperature operating mode (LTOM) would affect the in-drift TH environment. Simulations for the higher-temperature operating mode (HTOM), which is similar to the base case analyzed for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) (CRWMS M&O 2000j [DIRS 153246]), were also conducted for comparison with the LTOM. This Calculation Report describes (1) the improvements to the MSTH model that were implemented to reduce model uncertainty and to facilitate model validation, and (2) the sensitivity analyses conducted to better understand the influence of parameter and process uncertainty. The METHOD Section (Section 2) describes the improvements to the MSTH-model methodology and submodels. The ASSUMPTIONS Section (Section 3) lists the assumptions made (e.g., boundaries, material properties) for this methodology. The USE OF SOFTWARE Section (Section 4) lists the software, routines and macros used for the MSTH model and submodels supporting the SSPA. The CALCULATION Section (Section 5) lists the data used in the model and the manner in which the MSTH model is prepared and executed. And
Advanced model for the calculation of meshing forces in spur gear planetary transmissions
Iglesias Santamaría, Miguel; Fernández del Rincón, Alfonso; Juan de Luna, Ana Magdalena de; Díez Ibarbia, Alberto; García Fernández, Pablo; Viadero Rueda, Fernando
2015-01-01
This paper presents a planar spur gear planetary transmission model, describing in great detail aspects such as the geometric definition of overlaps and the calculation of contact forces, thus facilitating the reproducibility of results by fellow researchers. The planetary model is based on a mesh model already used by the authors in the study of external gear ordinary transmissions. The model has been improved and extended to allow for internal meshing simulation, taking into cons...
Institute of Scientific and Technical Information of China (English)
曹国辉; 胡佳星; 张锴
2016-01-01
The calculation model for relaxation loss given in the Code for Design of Highway Reinforced Concrete and Prestressed Concrete Bridges and Culverts (JTG D62—2004) was modified according to experimental data. Time-varying relaxation loss is considered in the new model. Moreover, prestressed reinforcement of varying length (caused by the shrinkage and creep of concrete) might influence the final values and the time-varying function of the forecast relaxation loss. Hence, the effects of concrete shrinkage and creep were considered when calculating prestress loss, which reflects the coupling between these effects and relaxation loss in concrete. The forecast relaxation loss of prestressed reinforcement under different initial stress levels can thus be calculated at any time point using the modified model. To simplify the calculation, the integral expression of the model can be changed into an algebraic equation. The accuracy of the result depends on how the time up to the final value of the relaxation loss is divided into periods: when the time division is reasonable, accuracy is high. The modified model performs excellently in comparison with the test results. The calculation results of the modified model mainly reflect the prestress loss of the prestressed reinforcement at each time point, which supports adopting the model in practical applications.
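The simplification described above, replacing the integral expression with a period-by-period algebraic sum, can be sketched as follows. This is a hedged illustration: the relaxation-rate function below is a made-up placeholder, not the modified JTG D62 formula, and serves only to show why a finer time division improves accuracy.

```python
# Time-discretized approximation of an integral-form relaxation-loss model.
# The rate function r(t) is a hypothetical placeholder, not the code's formula.
def relaxation_loss(rate, t_end, n_periods):
    """Trapezoidal sum approximating integral_0^t_end rate(t) dt."""
    dt = t_end / n_periods
    total = 0.0
    for i in range(n_periods):
        t0, t1 = i * dt, (i + 1) * dt
        total += 0.5 * (rate(t0) + rate(t1)) * dt
    return total

# toy rate that decays with time; its exact integral over [0, 10] is ln(11)
rate = lambda t: 1.0 / (1.0 + t)
coarse = relaxation_loss(rate, 10.0, 5)    # crude time division
fine = relaxation_loss(rate, 10.0, 500)    # fine time division
```

As the abstract notes, accuracy tracks the time division: `fine` lands within about 1e-4 of the exact value ln(11) ≈ 2.398, while `coarse` misses by roughly 0.27.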
A musculoskeletal lumbar and thoracic model for calculation of joint kinetics in the spine
Energy Technology Data Exchange (ETDEWEB)
Kim, Yong Cheol; Ta, Duc Manh; Koo, Seung Bum [Chung-Ang University, Seoul (Korea, Republic of); Jung, Moon Ki [AnyBody Technology A/S, Aalborg (Denmark)
2016-06-15
The objective of this study was to develop a musculoskeletal spine model that allows relative movements in the thoracic spine for calculation of intra-discal forces in the lumbar and thoracic spine. The thoracic part of the spine model was composed of vertebrae and ribs connected with mechanical joints similar to anatomical joints. Three different muscle groups around the thoracic spine were inserted, along with eight muscle groups around the lumbar spine in the original model from AnyBody. The model was tested using joint kinematics data obtained from two normal subjects during spine flexion and extension, axial rotation and lateral bending motions beginning from a standing posture. Intra-discal forces between spine segments were calculated in a musculoskeletal simulation. The force at the L4-L5 joint was chosen to validate the model's prediction against the lumbar model in the original AnyBody model, which was previously validated against clinical data.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-A
Energy Technology Data Exchange (ETDEWEB)
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This report documents the ANSYS axisymmetric parametric model for single-shell tank A and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single-shell tank (SST) A and provide a sample analysis of the SST-A tank based on analysis-of-record (AOR) loads. The SST-A model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-SX
Energy Technology Data Exchange (ETDEWEB)
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This report documents the ANSYS axisymmetric parametric model for single-shell tank SX and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single-shell tank (SST) SX and provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-S
Energy Technology Data Exchange (ETDEWEB)
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This report documents the ANSYS axisymmetric parametric model for single-shell tank S and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single-shell tank (SST) S and provide a sample analysis of the SST-S tank based on analysis-of-record (AOR) loads. The SST-S model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
Ferraro, Vittorio; Marinelli, Valerio; Mele, Marilena
2013-04-01
It is known that the best predictions of sky luminance are obtained with the CIE model of 15 standard skies, but predictions with this model require knowledge of the measured luminance distributions themselves, since a criterion for selecting the sky type starting from irradiance values had not been found until now. The authors propose a new, simple method of applying the CIE model, based on the use of the sky index Si. A comparison between calculated luminance data and data measured in Arcavacata di Rende (Italy), Lyon (France) and Pamplona (Spain) shows good performance of this method in comparison with other luminance calculation methods in the literature.
Power Loss Calculation and Thermal Modelling for a Three Phase Inverter Drive System
Directory of Open Access Journals (Sweden)
Z. Zhou
2005-12-01
Power loss calculation and thermal modelling for a three-phase inverter power system are presented in this paper. Aiming at long real-time thermal simulation, an accurate average power loss calculation based on a PWM reconstruction technique is proposed. To carry out the thermal simulation, a compact thermal model of a three-phase inverter power module is built. The thermal interference of adjacent heat sources is analysed using 3D thermal simulation. The proposed model provides accurate power losses with a large simulation time step and is suitable for long real-time thermal simulation of a three-phase inverter drive system for hybrid vehicle applications.
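A minimal sketch of the two ingredients the abstract combines, an average loss estimate fed into a compact thermal model stepped with a large time step. All device parameters here are hypothetical placeholders; the paper's PWM-reconstruction loss model and module thermal network are not reproduced.

```python
# Hypothetical average-loss estimate for one switch plus a one-node RC thermal
# model, in the spirit of the approach described above (placeholder parameters).
def avg_losses(i_rms, i_avg, v_ce0=1.0, r_ce=0.01, f_sw=10e3, e_sw=1e-3, i_ref=100.0):
    p_cond = v_ce0 * i_avg + r_ce * i_rms ** 2      # conduction loss
    p_sw = f_sw * e_sw * (i_avg / i_ref)            # switching loss, scaled by current
    return p_cond + p_sw

def thermal_step(t_j, p_loss, t_amb=40.0, r_th=0.5, c_th=50.0, dt=1.0):
    """Explicit-Euler junction-temperature update over one large time step."""
    return t_j + dt * (p_loss - (t_j - t_amb) / r_th) / c_th

t_j = 40.0
p = avg_losses(i_rms=50.0, i_avg=45.0)
for _ in range(500):                                # 500 s of 1 s steps
    t_j = thermal_step(t_j, p)
# t_j settles toward the steady state t_amb + p * r_th
```

The large 1 s step is stable here because it is well below the thermal time constant r_th·c_th = 25 s.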
Mathematical Models For Calculating The Value Of Dynamic Viscosity Of A Liquid
Directory of Open Access Journals (Sweden)
Ślęzak M.
2015-06-01
The objective of this article is to review models for calculating the dynamic viscosity of liquids. Viscosity and the rheological properties of liquid ferrous solutions are important from the perspective of modelling and controlling actual production processes related to the manufacturing of metals, including iron and steel. Analysis of the literature indicates many theoretical treatments of the viscosity of liquid metal solutions. The vast majority of models constitute a group of theoretical or semi-empirical equations in which thermodynamic parameters of solutions, or parameters determined experimentally, are used to calculate the dynamic viscosity coefficient.
A Direct Calculation of Critical Exponents of Two-Dimensional Anisotropic Ising Model
Institute of Scientific and Technical Information of China (English)
XIONG Gang; WANG Xiang-Rong
2006-01-01
Using an exact solution of the one-dimensional quantum transverse-field Ising model, we calculate the critical exponents of the two-dimensional anisotropic classical Ising model (IM). We verify that the exponents are the same as those of the isotropic classical IM. Our approach provides an alternative means of obtaining and verifying these well-known results.
vanVlimmeren, BAC; Fraaije, JGEM
1996-01-01
We present a simple method for the numerical calculation of the noise distribution in multicomponent functional Langevin models. The topic is of considerable importance, in view of the increased interest in the application of mesoscopic dynamics simulation models to phase separation of complex
Corresponding-States and Parachor Models for the Calculation of Interfacial Tensions
DEFF Research Database (Denmark)
Zuo, You-Xiang; Stenby, Erling Halfdan
1997-01-01
-states model. The two models were tested on 86 pure substances, more than 30 binary and multicomponent mixtures, 11 naphtha reformate cuts, 6 petroleum cuts and 2 North Sea oil mixtures. The calculated results were found to be in good agreement with experimental data.Keywords: corresponding-states, parachor...
Benchmark calculation of no-core Monte Carlo shell model in light nuclei
Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062
2011-01-01
The Monte Carlo shell model is applied for the first time to no-core shell model calculations in light nuclei. The results are compared with those of the full configuration interaction; the two agree to within a few percent at most.
Development of a risk-based mine closure cost calculation model
CSIR Research Space (South Africa)
Du Plessis, A
2006-06-01
The study summarised in this paper focused on expanding existing South African mine closure cost calculation models to provide a new model that incorporates risks that could affect the closure costs during the life cycle of the mine...
Considerations on the Mathematical model for Calculating the Single-phase Grounding
Directory of Open Access Journals (Sweden)
TATAI Ildiko
2013-05-01
This paper presents results obtained using a mathematical model conceived to analyse the effects of grounding faults that occur in a medium-voltage network. Measurements were made on a real electric network, and results calculated with the mathematical model are compared with the actual measurements.
Jensen, L; van Duijnen, PT; Snijders, JG
2003-01-01
A discrete solvent reaction field model for calculating frequency-dependent molecular linear response properties of molecules in solution is presented. The model combines a time-dependent density functional theory (QM) description of the solute molecule with a classical (MM) description of the discr
The problem of margin calculation and its reduction via the p-median problem model
Goldengorin, B.; Krushynskyi, D.; Kuzmenko, V.; Mastorakis, NE; Demiralp, M; Mladenov,; Bojkovic, Z
2009-01-01
The paper deals with a model for calculating the regulatory margin on brokerage accounts. The model is based on the p-median problem (PMP), which is known to be NP-hard. We use a pseudo-Boolean representation of the PMP and propose several problem-size reduction and preprocessing techniques. Our co...
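For context, the p-median problem the margin model builds on can be stated in a few lines: choose p facility locations minimizing the total client-to-nearest-facility distance. The brute-force sketch below uses hypothetical distances and has no relation to the authors' pseudo-Boolean reduction; it only illustrates the problem class.

```python
# Brute-force p-median: enumerate all p-subsets of candidate sites and keep
# the one minimizing total assignment distance (exponential, for tiny inputs).
from itertools import combinations

def p_median(dist, p):
    """dist[i][j]: distance from client i to candidate site j."""
    n_sites = len(dist[0])
    best = (float("inf"), None)
    for sites in combinations(range(n_sites), p):
        cost = sum(min(row[j] for j in sites) for row in dist)
        best = min(best, (cost, sites))
    return best

# toy symmetric instance: 3 clients, 3 candidate sites
dist = [[0, 4, 7],
        [4, 0, 3],
        [7, 3, 0]]
cost, sites = p_median(dist, 1)   # site 1 is optimal: 4 + 0 + 3 = 7
```

The NP-hardness mentioned in the abstract is exactly why such enumeration does not scale, motivating the size-reduction and preprocessing techniques the authors propose.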
Diameter structure modeling and the calculation of plantation volume of black poplar clones
Directory of Open Access Journals (Sweden)
Andrašev Siniša
2004-01-01
A method of diameter structure modeling was applied to the calculation of the plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Modeling the diameter structure with a Weibull function makes it possible to calculate the plantation volume by volume line. Based on a comparison of the proposed method with the existing methods, the error of the obtained plantation volume was less than 2%. Diameter structure modeling and the calculation of plantation volume from the diameter structure model, given the regularity of the diameter distribution, enable a better analysis of the production level and assortment structure, and can be used in the construction of yield and increment tables.
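The volume calculation described above can be sketched as integrating a volume line over a fitted Weibull diameter density. The Weibull parameters and the quadratic volume line below are hypothetical example values, not the fitted ones for clones 618 (Lux) or S1-8.

```python
# Stand volume from a Weibull diameter distribution and a volume line v(d):
# V = N * integral f(d) * v(d) dd, evaluated by midpoint-rule integration.
import math

def weibull_pdf(d, shape, scale):
    return (shape / scale) * (d / scale) ** (shape - 1) * math.exp(-(d / scale) ** shape)

def stand_volume(n_trees, shape, scale, volume_line, d_max=100.0, steps=10000):
    dd = d_max / steps
    total = 0.0
    for i in range(steps):
        d = (i + 0.5) * dd                 # midpoint of the diameter class
        total += weibull_pdf(d, shape, scale) * volume_line(d) * dd
    return n_trees * total

# hypothetical plantation: 500 trees, Weibull(shape=2, scale=25 cm),
# volume line v(d) = 0.0002 * d^2 (illustrative, not a fitted volume line)
v = stand_volume(500, 2.0, 25.0, lambda d: 0.0002 * d ** 2)
```

With a constant volume line of 1, the sum reduces to the tree count, which is a quick sanity check that the density integrates to one over the diameter range.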
Microscopic interacting boson model calculations for even–even 128−138Ce nuclei
Indian Academy of Sciences (India)
Nurettin Turkan; Ismail Maras
2007-05-01
In this study, we determined the most appropriate Hamiltonian needed for the present calculations of the energy levels and B(E2) values of 128−138Ce nuclei, which have masses around A ≅ 130, using the interacting boson model (IBM). Using the best-fitted parameter values in the IBM-2 Hamiltonian, we have calculated energy levels and B(E2) values for a number of transitions in 128,130,132,134,136,138Ce. The results were compared with previous experimental and theoretical (PTSM model) data and were observed to be in good agreement. Some predictions of this model also have better accuracy than those of the PTSM model. It has turned out that the interacting boson approximation (IBA) is fairly reliable for calculating spectra across the entire set of 128,130,132,134,136,138Ce isotopes, and the quality of the fits presented in this paper is acceptable.
DEVELOPMENT OF CALCULATING MODEL APPLICABLE FOR CYLINDER WALL DYNAMIC HEAT TRANSFER
Institute of Scientific and Technical Information of China (English)
ZHONG Minjun; SHI Tielin
2007-01-01
In early-stage calculations of submarine air-conditioning load, the heat gain is treated directly as cooling load. Conflating the two makes the calculated cooling load abnormally high and fails to capture the variation of the air-conditioning cooling load. In accordance with the submarine structure and the heat transfer characteristics of its inner components, a Laplace transformation of the heat conduction differential equation for the cylinder wall is carried out. A dynamic calculation of the submarine air-conditioning load based on this model is also conducted, and the results are compared with those of the static cooling load calculation. It is concluded that the dynamic cooling load calculation method illustrates the change of submarine air-conditioning cooling load more accurately than the static one.
SPH calculations of asteroid disruptions: The role of pressure dependent failure models
Jutzi, Martin
2015-01-01
We present recent improvements of the modeling of the disruption of strength dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz & Asphaug (1999)). However, for many cases (e.g. initially partly or fully damaged targets, rubble-pile structures, etc.) we find that it is crucial that friction is taken into account and the material has a pressure dependent shear strength. Our investigations of the catastrophic disruption threshold $Q^*_{D}$ as a function of target properties and target sizes up to a few 100 km show that a fully damaged target modeled without frict...
A model for calculating heat transfer coefficient concerning ethanol-water mixtures condensation
Wang, J. S.; Yan, J. J.; Hu, S. H.; Yang, Y. S.
2010-03-01
In this study, a heat transfer coefficient (HTC) for ethanol-water mixture condensation is calculated by combining the filmwise theory with the dropwise notion. A new model, including ethanol concentration, vapor pressure and velocity, is developed by introducing a characteristic coefficient to combine the two above-mentioned theories. Under different concentrations, pressures and velocities, the calculation is compared with experiment. It turns out that the calculated values are in good agreement with the experimental results; the maximal error is within ±30.1%. In addition, the model is applied to related experiments in other literature, and the values obtained agree well with the reported results.
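One hedged reading of the combination idea is a weighted blend of filmwise and dropwise HTCs via a characteristic coefficient in [0, 1]. The dependence of that coefficient on ethanol fraction and vapor velocity below is a made-up placeholder, not the paper's fitted correlation.

```python
# Blend of two condensation-mode HTCs through a characteristic coefficient chi.
# chi_placeholder is a hypothetical monotone surrogate, NOT the paper's model.
def chi_placeholder(ethanol_frac, velocity_ms):
    return min(1.0, ethanol_frac * (1.0 + 0.1 * velocity_ms))

def combined_htc(h_film, h_drop, chi):
    """Weighted combination of filmwise and dropwise heat transfer coefficients."""
    return chi * h_drop + (1.0 - chi) * h_film

# illustrative magnitudes only: dropwise condensation HTCs are typically much
# larger than filmwise ones, so even a small chi raises the combined HTC
h = combined_htc(h_film=5000.0, h_drop=50000.0, chi=chi_placeholder(0.1, 1.0))
```

At chi = 0 the model reduces to pure filmwise condensation and at chi = 1 to pure dropwise, which is the limiting behavior any such blend should respect.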
Institute of Scientific and Technical Information of China (English)
ZHANG Zhi-jie; LIU Yu-hua; L(U) Zhong-yuan; LI Ze-sheng
2009-01-01
The rotational isomeric state (RIS) model was constructed for poly(vinylidene chloride) (PVDC) based on quantum chemistry calculations. The statistical weight parameters were obtained from RIS representations and ab initio energies of conformers for the model molecules 2,2,4,4-tetrachloropentane (TCP) and 2,2,4,4,6,6-hexachloroheptane (HCH). By employing the RIS method, the characteristic ratio C∞ was calculated for PVDC; the calculated characteristic ratio is in good agreement with the experimental result. Additionally, we studied the influence of the statistical weight parameters on C∞ by calculating δC∞/δlnw. According to the values of δC∞/δlnw, the effects of the second-order Cl-CH2 pentane-type interaction and the Cl-Cl long-range interaction on C∞ were found to be important; in contrast, the first-order interaction is unimportant.
PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu
2013-05-01
We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are originally put on a circle of radius pT at a random angle. They are now put on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1−δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary: Program title: PACIAE version 2.1. Licensing provisions: none. Programming language: FORTRAN 77 or GFORTRAN. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler. RAM: ≈ 1 GB. Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model. Classification: 11.1, 17.8. Catalogue identifier of previous version: aeki_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes. Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: the px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with
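The circle-to-ellipse change described above fits in a few lines. A sketch in Python (the actual PACIAE code is FORTRAN; names here are illustrative):

```python
# Place (px, py) on an ellipse with semi-axes pT(1+dp) and pT(1-dp) instead of
# a circle of radius pT, so px is stretched and py shrunk for dp > 0, seeding
# a final-state transverse momentum anisotropy as the PACIAE 2.1 update does.
import math
import random

def sample_pt_components(p_t, delta_p, rng=random):
    phi = rng.uniform(0.0, 2.0 * math.pi)          # random azimuthal angle
    px = p_t * (1.0 + delta_p) * math.cos(phi)     # semi-major axis pT(1+dp)
    py = p_t * (1.0 - delta_p) * math.sin(phi)     # semi-minor axis pT(1-dp)
    return px, py
```

For delta_p = 0, this reduces to the original PYTHIA circle; for delta_p > 0, the sampled px variance exceeds the py variance, which is the anisotropy the update is designed to expose.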
Kaminski, George A.; Ponomarev, Sergei Y.; Liu, Aibing B.
2009-01-01
We are presenting POSSIM (POlarizable Simulations with Second order Interaction Model) – a software package and a set of parameters designed for molecular simulations. The key feature of POSSIM is that the electrostatic polarization is taken into account using a previously introduced fast formalism. This permits cutting the computational cost of using explicit polarization by about an order of magnitude. In this article, parameters for water, methane, ethane, propane, butane, methanol and NMA are introduced. These molecules are viewed as model systems for protein simulations. We have achieved our goal of ca. 0.5 kcal/mol accuracy for gas-phase dimerization energies and no more than 2% deviations in liquid-state heats of vaporization and densities. Moreover, free energies of hydration of the polarizable methane, ethane and methanol have been calculated using statistical perturbation theory. These calculations are a model for calculating protein pKa shifts and ligand binding affinities. The free energies of hydration were found to be 2.12 kcal/mol, 1.80 kcal/mol and −4.95 kcal/mol for methane, ethane and methanol, respectively. The experimentally determined literature values are 1.91 kcal/mol, 1.83 kcal/mol and −5.11 kcal/mol. The POSSIM average error in these absolute free energies of hydration is only about 0.13 kcal/mol. Using statistical perturbation theory with polarizable force fields is not widespread, and we believe that this work opens the road to further development of the POSSIM force field and its applications for obtaining accurate energies in protein-related computer modeling. PMID:20209038
Directory of Open Access Journals (Sweden)
Joris Mulder
2012-01-01
This paper discusses a Fortran 90 program referred to as BIEMS (Bayesian inequality and equality constrained model selection) that can be used for calculating Bayes factors of multivariate normal linear models with equality and/or inequality constraints between the model parameters versus a model containing no constraints, referred to as the unconstrained model. The prior used under the unconstrained model is the conjugate expected-constrained posterior prior, and the prior under a constrained model is proportional to the unconstrained prior truncated to the constrained space. This results in Bayes factors that appropriately balance model fit and complexity for a broad class of constrained models. When the set of equality and/or inequality constraints in the model represents a hypothesis that applied researchers have in, for instance, (M)AN(C)OVA, (multivariate) regression, or repeated measurements, the obtained Bayes factor can be used to determine how much evidence the data provide in favor of the hypothesis in comparison to the unconstrained model. If several hypotheses are under investigation, the Bayes factors between the constrained models can be calculated using the Bayes factors obtained from BIEMS. Furthermore, posterior model probabilities of constrained models are provided, which allows the user to compare the models directly with each other.
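One common route to Bayes factors for purely inequality-constrained hypotheses, the encompassing-prior approach, can be sketched with plain Monte Carlo. This is a toy illustration of the idea, not BIEMS's actual algorithm; the normal prior and posterior below are hypothetical.

```python
# Encompassing-prior Bayes factor for an inequality constraint: the ratio of
# the posterior mass satisfying the constraint to the prior mass satisfying
# it, both estimated by sampling. Toy constraint here: mu > 0.
import random

def constrained_bayes_factor(prior_draws, post_draws, constraint):
    f_prior = sum(constraint(x) for x in prior_draws) / len(prior_draws)
    f_post = sum(constraint(x) for x in post_draws) / len(post_draws)
    return f_post / f_prior

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(100000)]   # unconstrained prior draws
post = [rng.gauss(1.0, 0.5) for _ in range(100000)]    # toy posterior draws
bf = constrained_bayes_factor(prior, post, lambda m: m > 0.0)
# bf ≈ 0.977 / 0.5 ≈ 1.95: the data favor mu > 0 about twofold here
```

The same ratio-of-masses logic generalizes to several simultaneous inequality constraints, though equality constraints require the truncated-prior machinery the abstract describes.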
Updated SAO OMI formaldehyde retrieval
Directory of Open Access Journals (Sweden)
G. González Abad
2014-01-01
We present and discuss the Smithsonian Astrophysical Observatory (SAO) formaldehyde (H2CO) retrieval algorithm for the Ozone Monitoring Instrument (OMI), which is the operational retrieval for NASA OMI H2CO. The version of the algorithm described here includes relevant changes with respect to the operational one, including differences in the reference spectra for H2CO, the fit of the O2-O2 collisional complex, updates to the high-resolution solar reference spectrum, the use of a model reference sector over the remote Pacific Ocean to normalize the retrievals, an updated Air Mass Factor (AMF) calculation scheme, and the inclusion of scattering weights and the vertical H2CO profile in the level 2 products. The theoretical basis of the retrieval is discussed in detail. Typical retrieved vertical columns are between 4 × 1015 and 4 × 1016 molecules cm−2, with typical fitting uncertainties ranging between 40% and 100%. In high-concentration regions the errors are usually reduced to 30%. The detection limit is estimated at 3 × 1015 molecules cm−2. These updated retrievals are compared with previous ones.
Olexová, Lucia; Talarovičová, Alžbeta; Lewis-Evans, Ben; Borbélyová, Veronika; Kršková, Lucia
2012-12-01
Research on autism has been gaining more and more attention. However, its aetiology is not entirely known, and several factors are thought to contribute to the development of this neurodevelopmental disorder. These potential contributing factors range from genetic heritability to environmental effects. A significant number of reviews have already been published on different aspects of autism research, as well as on using animal models to help expand current knowledge of its aetiology. However, the diverse range of symptoms and possible causes of autism have resulted in an equally wide variety of animal models of autism. In this update article we focus only on the animal models with neurobehavioural characteristics of the social deficit related to autism and present an overview of the animal models with alterations in brain regions, neurotransmitters, or hormones that are involved in a decrease in sociability.
Townsend, Molly T; Sarigul-Klijn, Nesrin
2016-01-01
Simplified material models are commonly used in computational simulation of biological soft tissue as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper strives to offer a comprehensive updated Lagrangian formulation procedure for various non-linear material models for the finite element analysis of biological soft tissues, including definitions of the Cauchy stress and the spatial tangent stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.
Theoretical Calculation of Rotational Bands of 179Pt in the Particle-Triaxial-Rotor Model
Institute of Scientific and Technical Information of China (English)
CHEN Guo-Jie; SONG Hui-Chao; LIU Yu-Xin
2005-01-01
Theoretical calculations have been performed for the nucleus 179Pt in the particle-triaxial-rotor model with variable moment of inertia. The obtained energy spectrum agrees quite well with the experimental data. The calculated results indicate that the 1/2− and 7/2+ bands are triaxial deformation bands and originate mainly from the ν[521]1/2− and ν[633]7/2+ configurations, respectively.
Calculation of Energy Levels of Nucleus 127I in the Particle-Triaxial-Rotor Model
Institute of Scientific and Technical Information of China (English)
SONG Hui-Chao; LIU Yu-Xin; ZHANG Yu-Hu
2004-01-01
Theoretical calculations have been performed for the nucleus 127I in the framework of the particle-triaxial-rotor model. The calculated results indicate that both the 5/2+ and 7/2+ bands are oblate deformed bands. Their configurations are associated with the πd5/2[402]5/2 and πg7/2[404]7/2 orbitals and the strong mixing between them. Meanwhile, a possible explanation of the strong mixing is given.
A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation
Defence R&D Canada (CR 2010-013)
2009-10-01
Beattie-Bridgeman virial expansion: the above equations are suitable for moderate pressures and are usually based on either empirical constants...
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes, are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach, which have very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes and the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, owing to its model physics, generally calculates larger separation distances. For worst-case calculations, as required in environmental impact assessment studies, the use of a Lagrangian model is therefore preferable to that of a Gaussian model. The study and findings relate to the Austrian odour impact criteria.
Spatial calculating analysis model research of land-use change in urban fringe districts
Institute of Scientific and Technical Information of China (English)
2008-01-01
The spatial calculating analysis model is based on GIS overlay. It compartmentalizes the land of the research district into three spatial parts: an unchanged part, a converted part and an increased part. With this method we can evaluate the numerical model and the dynamic degree model for calculating the changing speed of land use. Furthermore, the paper proposes revising the spatial-information calculating analysis model in order to predict the dynamic changing level of all sorts of land. More concretely, the model mainly determines the changing area and changing speed (increased or decreased) of different land classifications from the microcosmic angle and clearly shows the spatial distribution and spatio-temporal law of changing urban lands. We discover why the situation has taken place by combining social and economic conditions. The result indicates that the spatial-information calculating analysis model can derive a more accurate procedure of spatial transference and increase of all kinds of land from the microcosmic angle. With this model and technology, research on spatio-temporal structure evolution in land use can be made more systematic and deeper. The result will benefit the planning management of urban land use in developed districts of China in the future.
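The "dynamic degree" mentioned above is commonly defined as the annualized relative area change of a land class. A minimal sketch with hypothetical areas follows; the formula is the standard single land-use dynamic degree, which may differ in detail from the authors' model.

```python
# Single land-use dynamic degree: K = (Ub - Ua) / Ua * (1 / T) * 100%,
# where Ua and Ub are the class areas at the start and end of period T.
def dynamic_degree(area_start, area_end, years):
    return (area_end - area_start) / area_start / years * 100.0

# hypothetical example: a class growing from 120 to 150 km^2 over 10 years
rate = dynamic_degree(120.0, 150.0, 10)   # percent change per year
```

Applied per class to the unchanged, converted and increased parts produced by the GIS overlay, this yields the per-class changing speeds the model reports.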
Memory updating and mental arithmetic
Directory of Open Access Journals (Sweden)
Cheng-Ching Han
2016-02-01
Is domain-general memory updating ability predictive of calculation skills, or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure of calculation skill, as this operation requires the accurate maintenance and updating of information in addition to skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences with regard to a task updating numerical information following addition (MUcalc) could predict the performance of MMM, perhaps owing to common elements between the task and MMM. In Experiment 2, new updating tasks were designed to clarify this: a spatial updating task with no numbers, a numerical task with no calculation, and a word task. The results showed that both MUcalc and the spatial task were able to predict the performance of MMM, but only with the more difficult problems, while the other updating tasks did not predict performance. It is concluded that relevant processes involved in updating the contents of working memory support mental arithmetic in adults.
AN ACCURATE MODEL FOR CALCULATING CORRECTION OF PATH FLEXURE OF SATELLITE SIGNALS
Institute of Scientific and Technical Information of China (English)
Li Yanxing; Hu Xinkang; Shuai Ping; Zhang Zhongfu
2003-01-01
The propagation path of satellite signals in the atmosphere is a curve, so it is very difficult to calculate its flexure correction accurately, and a strict calculating expression had not previously been derived. In this study, the flexure correction of the refraction curve is divided into two parts and strict calculating expressions are derived for each. Using the standard atmospheric model, the accurate flexure correction of the refraction curve is calculated for different zenith distances Z. On this basis, a calculation model is constructed. This model is very simple in structure, convenient to use and highly accurate. When Z is smaller than 85°, the accuracy of the correction exceeds 0.06 mm. The flexure correction is basically proportional to tan²Z and increases rapidly with Z. When Z < 50°, the correction is smaller than 0.5 mm and can be neglected; when Z > 50°, the correction must be made. When Z is 85°, 88° and 89°, the corrections are 198 mm, 8.911 m and 28.497 km, respectively. The calculation results show that the correction estimated by Hopfield is correct when Z ≤ 80°, but too small when Z = 89°. The expression in this paper is applicable to any satellite.
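The tan²Z proportionality noted in the abstract can be illustrated numerically. The sketch below is NOT the strict two-part expression derived in the paper: the scale constant is an assumption, anchored only to the abstract's statement that the correction is about 0.5 mm near Z = 50°, and the approximation breaks down at large zenith distances where the strict model is needed.

```python
import math

# Toy illustration of the tan^2(Z) scaling of the path-flexure correction.
# K_MM is an assumed constant (anchored at ~0.5 mm near Z = 50 deg), not a
# value from the paper's strict model.
K_MM = 0.5 / math.tan(math.radians(50.0)) ** 2

def approx_flexure_mm(zenith_deg):
    """Approximate flexure correction (mm); valid only at moderate Z."""
    return K_MM * math.tan(math.radians(zenith_deg)) ** 2

for z in (30, 50, 70):
    print(f"Z = {z:2d} deg -> {approx_flexure_mm(z):.3f} mm")
```

The rapid growth with Z is visible even in this crude scaling, which is why the paper's strict expressions matter for low-elevation satellites.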
Oláh, Julianna; van Bergen, Laura; De Proft, Frank; Roos, Goedele
2015-01-01
Protein thiol/sulfenic acid oxidation potentials provide a tool to select specific oxidation agents, but are experimentally difficult to obtain. Here, insights into thiol sulfenylation thermodynamics are obtained from model calculations on small systems and from a quantum mechanics/molecular mechanics (QM/MM) analysis on the human 2-Cys peroxiredoxin thioredoxin peroxidase B (Tpx-B). To study thiol sulfenylation in Tpx-B, our recently developed computational method, which determines reduction potentials relative to a reference system from reaction energies based on electronic energies, is updated. Tpx-B forms a sulfenic acid (R-SO(-)) on one of its active site cysteines during reactive oxygen scavenging. The observed effect of the conserved active site residues is consistent with the observed hydrogen bond interactions in the QM/MM optimized Tpx-B structures and with free energy calculations on small model systems. The ligand effect could be linked to the complexation energies of ligand L with CH3S(-) and CH3SO(-). Compared to QM-only calculations on Tpx-B's active site, the QM/MM calculations give an improved understanding of sulfenylation thermodynamics by showing that residues from the protein environment other than the active-site residues can play an important role.
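The relative-potential idea mentioned above rests on the standard thermodynamic relation E = −ΔG/(nF), shifted by a reference couple. The sketch below is a back-of-envelope conversion only; the free-energy value and reference potential are illustrative numbers, not results from the Tpx-B study.

```python
# Convert a computed reaction free energy (kJ/mol) into a reduction
# potential relative to a reference couple: E = -dG/(nF) + E_ref.
# All numeric inputs here are made-up, for demonstration only.

F = 96485.332  # Faraday constant, C/mol

def relative_potential_v(delta_g_kj_mol, n_electrons, e_ref_v=0.0):
    """Reduction potential (V) from reaction free energy and electron count."""
    return -(delta_g_kj_mol * 1000.0) / (n_electrons * F) + e_ref_v

print(relative_potential_v(delta_g_kj_mol=-193.0, n_electrons=2))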
Energy Technology Data Exchange (ETDEWEB)
Cliffe, K.A.; Morris, S.T.; Porter, J.D. [AEA Technology, Harwell (United Kingdom)
1998-05-01
NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. To achieve this, the following topics are discussed. The basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of model validation and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU, and the mathematical models and numerical techniques used to represent them, are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.
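The groundwater-flow physics underlying programs of this kind rests on Darcy's law. A one-dimensional sketch is given below; the conductivity, heads and geometry are made-up values, and this is in no way NAMMU's numerics, only the governing relation it discretises.

```python
# 1-D Darcy's law: specific discharge q = K * dh / L.
# Parameter values are illustrative, not from any NAMMU calculation.

def darcy_flux(conductivity_m_s, head_drop_m, length_m):
    """Specific discharge q (m/s) for steady 1-D flow in porous media."""
    return conductivity_m_s * head_drop_m / length_m

q = darcy_flux(conductivity_m_s=1e-6, head_drop_m=2.0, length_m=100.0)
print(f"q = {q:.2e} m/s")  # 2.00e-08 m/s
```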
Energy Technology Data Exchange (ETDEWEB)
Sheinman, Y.; Rosen, A. (Technion-Israel Inst. of Tech., Haifa (Israel). Faculty of Aerospace Engineering)
1991-01-01
A new model for performance calculations of grid-connected horizontal axis wind turbines is presented. This model takes into account the important dynamic characteristics of the various components comprising the turbine system, including the rotor, gearbox, generator, shafts, couplings, brakes, and the grid. Special effort is made to obtain an appropriate balance between efficiency and accuracy. The model is modular and thus offers easy implementation of new sub-models for new components, or replacement of existing sub-models. The complete model of the wind turbine system is nonlinear and thus complicated. Linearization of this model leads to an eigenvalue problem that helps in understanding the dynamic characteristics of the turbine. A special reduction technique reduces the size of the model, increasing its efficiency without practically decreasing its accuracy for performance calculations. (author).
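The linearization-to-eigenvalue step mentioned above can be illustrated on the simplest drivetrain idealization: two inertias (rotor and generator, referred to the low-speed side) coupled by a flexible shaft. This is a generic textbook sketch, not the paper's model, and all numeric values are assumptions.

```python
import math

# Two-inertia torsional drivetrain: the linearized equations of motion
# yield one free-free torsional mode with
#   omega^2 = k * (1/J_rotor + 1/J_gen).
# Inertias (kg m^2) and shaft stiffness (N m/rad) below are made up.

def torsional_frequency_hz(j_rotor, j_gen, k_shaft):
    """First torsional natural frequency (Hz) of a two-inertia shaft model."""
    omega = math.sqrt(k_shaft * (1.0 / j_rotor + 1.0 / j_gen))
    return omega / (2.0 * math.pi)

f = torsional_frequency_hz(j_rotor=4.0e6, j_gen=5.0e5, k_shaft=1.6e8)
print(f"first torsional mode ~ {f:.2f} Hz")
```

A full modular model assembles many such coupled equations; the eigenvalues of the assembled linearized system generalize this single-mode result.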
Mean-Field Calculations for the Three-Dimensional Holstein Model
Institute of Scientific and Technical Information of China (English)
Luo Qiang; Liu Chuan
2002-01-01
The electron-phonon Holstein model is studied in three spatial dimensions. It is argued that this model can be used to account for major features of the high-Tc BaPb1-xBixO3 and Ba1-xKxBiO3 systems. Mean-field calculations are performed via a path integral representation of the model. Charge-density-wave order parameters and transition temperatures are obtained.
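For orientation, the Holstein Hamiltonian and the form of a mean-field charge-density-wave gap equation can be written down explicitly. This is the standard textbook mean-field treatment, sketched here for context; the paper's path-integral formulation may differ in detail.

```latex
% Holstein model: electrons coupled locally to dispersionless phonons
H = -t \sum_{\langle i,j \rangle} c^{\dagger}_{i} c_{j}
    + \omega_{0} \sum_{i} b^{\dagger}_{i} b_{i}
    + g \sum_{i} c^{\dagger}_{i} c_{i} \,(b_{i} + b^{\dagger}_{i})

% Mean-field CDW decoupling with gap \Delta and quasiparticle energy
% E_{k} = \sqrt{\varepsilon_{k}^{2} + \Delta^{2}} gives the self-consistency
% (gap) equation at temperature T:
\Delta = \frac{2 g^{2}}{\omega_{0}} \, \frac{1}{N} \sum_{k}
         \frac{\Delta}{2 E_{k}} \tanh\!\left(\frac{E_{k}}{2T}\right)
```

Solving this self-consistently for Δ(T) yields the charge-density-wave order parameter and the transition temperature at which Δ vanishes, the quantities reported in the abstract.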