Use of results from microscopic methods in optical model calculations
International Nuclear Information System (INIS)
Lagrange, C.
1985-11-01
A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In the second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels.
Model films of cellulose. I. Method development and initial results
Gunnars, S.; Wågberg, L.; Cohen Stuart, M.A.
2002-01-01
This report presents a new method for the preparation of thin cellulose films. NMMO (N-methylmorpholine-N-oxide) was used to dissolve cellulose, and addition of DMSO (dimethyl sulfoxide) was used to control the viscosity of the cellulose solution. A thin layer of the cellulose solution is spin-coated
1-g model loading tests: methods and results
Czech Academy of Sciences Publication Activity Database
Feda, Jaroslav
1999-01-01
Roč. 2, č. 4 (1999), s. 371-381 ISSN 1436-6517. [Int.Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation Subject RIV: JM - Building Engineering
Energy Technology Data Exchange (ETDEWEB)
Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-03-01
In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.
A method for modeling laterally asymmetric proton beamlets resulting from collimation
Energy Technology Data Exchange (ETDEWEB)
Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)
2015-03-15
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
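The asymmetric-Gaussian BEV fluence described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a piecewise Gaussian with an independent sigma on each side of the dose maximum along each axis; the function name and parameterization are ours, not the authors'.

```python
import math

def asymmetric_gaussian(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Beam's-eye-view fluence sketch: a Gaussian whose standard deviation
    differs on either side of the maximum (mu_x, mu_y) along each axis."""
    sx = sx1 if x < mu_x else sx2   # left/right sigma along x
    sy = sy1 if y < mu_y else sy2   # left/right sigma along y
    return math.exp(-0.5 * ((x - mu_x) / sx) ** 2) * \
           math.exp(-0.5 * ((y - mu_y) / sy) ** 2)
```

The value is 1 at the maximum and falls off at a different rate on each side, which is the qualitative behavior a trimmed (collimated) beamlet penumbra requires.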
A method for modeling laterally asymmetric proton beamlets resulting from collimation
International Nuclear Information System (INIS)
Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark
2015-01-01
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
A method for modeling laterally asymmetric proton beamlets resulting from collimation
Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.
2015-01-01
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
International Nuclear Information System (INIS)
Maerker, R.E.; Worley, B.A.
1989-01-01
Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is the search for a deterministic uncertainty analysis approach that could improve on methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first-derivative information is the method studied here. The method has been applied to a high-level nuclear waste repository problem involving the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CCDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis.
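The core idea of a first-derivative DUA can be sketched as below: propagate input variances through a response function using sensitivities, var(y) ≈ Σᵢ (∂y/∂xᵢ)² var(xᵢ). This is a hedged toy version with central finite differences; the actual work would have used code-specific sensitivity machinery, which we do not attempt to reproduce.

```python
def first_order_variance(f, x0, variances, h=1e-6):
    """First-order (deterministic) uncertainty propagation:
    var(y) ~= sum_i (df/dx_i)^2 * var(x_i), with derivatives
    estimated by central finite differences at the point x0."""
    derivs = []
    for i in range(len(x0)):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        derivs.append((f(xp) - f(xm)) / (2 * h))
    return sum(d * d * v for d, v in zip(derivs, variances))
```

For a linear response y = 2x₀ + 3x₁ with input variances (1, 4), this gives 2²·1 + 3²·4 = 40, matching the exact result; for nonlinear codes it is only a first-order approximation, which is precisely the trade-off a DUA accepts relative to full statistical sampling.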
International Nuclear Information System (INIS)
Kumagai, Hiromichi
1999-01-01
To prevent the expansion of tube damage and to maintain structural integrity in the steam generators (SGs) of fast breeder reactors (FBRs), it is necessary to detect precisely and immediately any leakage of water from the heat transfer tubes. Therefore, an active acoustic method was developed. Previous studies have revealed that in practical steam generators the active acoustic method can detect bubbles of 10 l/s within 10 seconds. To prevent the expansion of damage to neighboring tubes, it is necessary to detect smaller leakages of water from the heat transfer tubes. The Doppler method is designed to detect small leakages and to find the source of the leak before damage spreads to neighboring tubes. To evaluate the relationship between the detection sensitivity of the Doppler method and the bubble volume and bubble size, the structural shapes and bubble flow conditions were investigated experimentally, using a small structural model. The results show that the Doppler method can detect the bubbles under bubble flow conditions, and it is sensitive enough to detect small leakages within a short time. The Doppler method thus has strong potential for the detection of water leakage in SGs. (author)
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e., groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
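The calibration idea in the last paragraph can be sketched with a toy model. The head equation, parameter values, and candidate recharge rates below are illustrative assumptions, not from the chapter; the point is only the pattern of adjusting recharge until simulated heads match observations.

```python
def simulate_head(recharge, transmissivity=100.0, length=1000.0):
    """Toy 1D steady-state model: head rise at a groundwater divide
    above a fixed boundary, h = R * L^2 / (2 T) (Dupuit-type sketch)."""
    return recharge * length ** 2 / (2.0 * transmissivity)

def calibrate_recharge(observed_head, candidates):
    """Pick the recharge value whose simulated head best matches the
    observed head (smallest squared residual) -- the 'best fit'."""
    return min(candidates, key=lambda r: (simulate_head(r) - observed_head) ** 2)
```

A real calibration would adjust many parameters against many water-level observations (and worry about non-uniqueness), but the best-fit selection step is the same in spirit.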
Alekseenko, M. A.; Gendrina, I. Yu.
2017-11-01
Given the recent abundance of various types of observational data in systems of vision through the atmosphere, and the need to process them, statistical research methods such as correlation-regression analysis, dynamic series analysis, and variance analysis have become topical in the study of such systems. We have attempted to apply elements of correlation-regression analysis to the study and subsequent prediction of the patterns of radiation transfer in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of statistical processing of numerical simulations of the characteristics of vision systems through the atmosphere, obtained with the help of a special software package.
Atmospheric Deposition Modeling Results
U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...
DEFF Research Database (Denmark)
Marschler, Christian; Sieber, Jan; Berkemer, Rainer
2014-01-01
We introduce a general formulation for an implicit equation-free method in the setting of slow-fast systems. First, we give a rigorous convergence result for equation-free analysis showing that the implicitly defined coarse-level time stepper converges to the true dynamics on the slow manifold...... against the direction of traffic. Equation-free analysis enables us to investigate the behavior of the microscopic traffic model on a macroscopic level. The standard deviation of cars' headways is chosen as the macroscopic measure of the underlying dynamics such that traveling wave solutions correspond...... to equilibria on the macroscopic level in the equation-free setup. The collapse of the traffic jam to the free flow then corresponds to a saddle-node bifurcation of this macroscopic equilibrium. We continue this bifurcation in two parameters using equation-free analysis....
Hoxha Gezim; Shala Ahmet; Likaj Rame
2017-01-01
The paper addresses the problem of vehicle speed calculation in road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed. Concrete cases of road accidents are analysed with both methods, and the calculation methods and their results are compared. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, impact angle, displacement after the crash, road conditions, etc. Expected results with PC Cr...
Directory of Open Access Journals (Sweden)
Hoxha Gezim
2017-11-01
Full Text Available The paper addresses the problem of vehicle speed calculation in road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed. Concrete cases of road accidents are analysed with both methods, and the calculation methods and their results are compared. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, impact angle, displacement after the crash, road conditions, etc. Expected results with PC Crash software and Virtual Crash are shown in tabular graphics and compared using mathematical methods.
Out-of-pile and in-pile temperature noise investigations: a survey of methods results and models
International Nuclear Information System (INIS)
Dentico, G.; Giovannini, R.; Marseguerra, M.; Pacilio, N.; Taglienti, S.; Tosi, V.; Vigo, A.; Oguma, R.
1982-01-01
A review is given of the main results obtained from temperature noise measurements performed in out-of-pile sodium loops on fast fuel element mock-ups. Sources of data were thermocouples placed in the central axis of the channel downstream from the bundle end. Autoregressive moving average (ARMA) models have been applied to several temperature time series; the analysis shows that a simple ARMA (3, 2) model adequately accounts for the observed fluctuations. Finally, highlights of a heat transfer stochastic model are also reported together with a preliminary validation against in-pile experimental data. (author)
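As a hedged illustration of fitting an autoregressive model to a temperature time series, here is a conditional least-squares AR(p) fit in pure Python. A real ARMA(3, 2) fit, as in the abstract, would also estimate moving-average terms (usually by maximum likelihood); we omit those for brevity, so this is a simplified stand-in, not the authors' procedure.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_ar(series, p):
    """Conditional least-squares fit of an AR(p) model
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t,
    via the normal equations built from the lagged regressors."""
    n = len(series)
    A = [[0.0] * p for _ in range(p)]
    b = [0.0] * p
    for t in range(p, n):
        lags = [series[t - 1 - i] for i in range(p)]
        for i in range(p):
            b[i] += lags[i] * series[t]
            for j in range(p):
                A[i][j] += lags[i] * lags[j]
    return solve(A, b)
```

On data generated by a known AR(2) recursion the fit recovers the coefficients; on noisy thermocouple series one would also inspect residual whiteness to choose the model order, as the ARMA(3, 2) selection in the abstract implies.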
Cerveri, P; Lopomo, N; Pedotti, A; Ferrigno, G
2005-03-01
In the field of 3D reconstruction of human motion from video, model-based techniques have been proposed to increase the estimation accuracy and the degree of automation. The feasibility of this approach is strictly connected with the adopted biomechanical model. In particular, the representation of the kinematic chain and the assessment of the corresponding parameters play a relevant role in the success of the motion assessment. In this paper, the focus is on the determination of the kinematic parameters of a general hand skeleton model using surface measurements. A novel method that integrates nonrigid sphere fitting and evolutionary optimization is proposed to estimate the centers and the functional axes of rotation of the skeletal joints. The reliability of the technique is tested using real movement data and simulated motions with known ground truth, 3D measurement noise, and different ranges of motion (RoM). With respect to standard nonrigid sphere fitting techniques, the proposed method performs 10-50% better in the best condition (very low noise and wide RoM) and over 100% better with physiological artifacts and RoM. Repeatability in the range of a couple of millimeters for the localization of the centers of rotation, and in the range of one degree for the axis directions, is obtained from real data experiments.
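The sphere-fitting building block behind joint-center estimation can be sketched as below. This is the standard algebraic least-squares sphere fit (Coope's linearization), not the authors' full nonrigid method with evolutionary optimization; it shows only how marker positions constrain a center of rotation.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_sphere(points):
    """Algebraic least-squares sphere fit: |p - c|^2 = r^2 rewritten as
    2 p.c - t = |p|^2 with t = |c|^2 - r^2, which is linear in (c, t)."""
    rows, rhs = [], []
    for x, y, z in points:
        rows.append([2 * x, 2 * y, 2 * z, -1.0])
        rhs.append(x * x + y * y + z * z)
    # normal equations N u = g for the 4 unknowns (cx, cy, cz, t)
    N = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    g = [sum(rows[k][i] * rhs[k] for k in range(len(rows))) for i in range(4)]
    cx, cy, cz, t = solve(N, g)
    radius = (cx * cx + cy * cy + cz * cz - t) ** 0.5
    return (cx, cy, cz), radius
```

With noise-free points on a known sphere the fit is exact; the paper's contribution lies in making this step robust to skin-motion artifacts and limited RoM, where the plain algebraic fit degrades.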
Bordogna, Clelia María; Albano, Ezequiel V.
2007-02-01
The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.
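The leader-versus-media competition described above can be sketched with a toy, deterministic update rule in the spirit of social impact theory. The impact formula (strength divided by squared distance, plus a uniform media field, plus self-support) and all parameters are illustrative assumptions, not Latané's full model or the authors' simulation.

```python
def update_opinions(opinions, strengths, media_field, media_opinion):
    """One synchronous step of a toy social-impact model on a line of
    members: each member adopts the sign of the total impact acting on
    it.  Impact from member j falls off as strength_j / distance^2; the
    mass media act as a uniform field; a self-support term stabilizes
    strong individuals (e.g. a leader)."""
    n = len(opinions)
    new = []
    for i in range(n):
        impact = media_field * media_opinion
        for j in range(n):
            if j == i:
                impact += strengths[j] * opinions[j]          # self-support
            else:
                d = abs(i - j)
                impact += strengths[j] * opinions[j] / (d * d)
        new.append(1 if impact > 0 else -1)
    return new
```

Even this caricature reproduces the qualitative finding of the abstract: in a small group a sufficiently strong leader converts everyone despite an opposing media field, while with a weak leader the uniform media field dominates.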
International Nuclear Information System (INIS)
Bordogna, Clelia Maria; Albano, Ezequiel V
2007-01-01
The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latane. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work
Ding, Chuan; Wang, Kaihong; Huang, Xiaoying
2014-01-01
In a distribution channel, channel members are not always self-interested but are altruistic under some conditions. Based on this assumption, this paper adopts a behavioral game method to analyze and forecast channel members' decision behavior under result-fairness and reciprocal-fairness preferences, by embedding fairness preference theory in research on channel coordination. The behavior game forecasts that a channel can achieve coordination if channel members consider behavior elemen...
International Nuclear Information System (INIS)
Barton, C.C.; Larsen, E.; Page, W.R.; Howard, T.M.
1993-01-01
Fractures have been characterized for fluid-flow, geomechanical, and paleostress modeling at three localities in the vicinity of drill hole USW G-4 at Yucca Mountain in southwestern Nevada. A method for fracture characterization is introduced that integrates mapping fracture-trace networks and quantifying eight fracture parameters: trace length, orientation, connectivity, aperture, roughness, shear offset, trace-length density, and mineralization. A complex network of fractures was exposed on three 214- to 260-m² pavements cleared of debris in the upper lithophysal unit of the Tiva Canyon Member of the Miocene Paintbrush Tuff. The pavements are two-dimensional sections through the three-dimensional network of strata-bound fractures. All fractures with trace lengths greater than 0.2 m were mapped and studied
Energy Technology Data Exchange (ETDEWEB)
Gauger, Thomas [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE); Stuttgart Univ. (Germany). Inst. of Navigation; Haenel, Hans-Dieter; Roesemann, Claus [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE)
2008-09-15
The report on the implementation of the UNECE Convention on Long-range Transboundary Air Pollution, Part 1, deposition loads (methods, modeling and mapping results, trends), includes the following chapters: introduction; deposition of air pollutants used as input for critical-loads exceedance calculations; methods applied for mapping total deposition loads; mapping wet deposition; wet deposition mapping results; mapping dry deposition; dry deposition mapping results; cloud and fog mapping results; total deposition mapping results; modeling the air concentration of acidifying components and heavy metals; agricultural emissions of acidifying and eutrophying species.
Trojan Horse Method: Recent Results
International Nuclear Information System (INIS)
Pizzone, R. G.; Spitaleri, C.
2008-01-01
Owing to the presence of the Coulomb barrier at astrophysically relevant kinetic energies, it is very difficult, and sometimes impossible, to measure astrophysical reaction rates in the laboratory. This is why different indirect techniques are being used along with direct measurements. The THM is a unique indirect technique allowing one to measure astrophysical rearrangement reactions down to astrophysically relevant energies. The basic principle and a review of the main applications of the Trojan Horse Method are presented. The applications aiming at the extraction of the bare S_b(E) astrophysical factor and the electron screening potential U_e for several two-body processes are discussed
Bourgeois, F
2001-01-01
Electrical modeling and simulation of the LHC magnet strings are being used to determine the key parameters that are needed for the design of the powering and energy extraction equipment. Poles and zeros of the Laplace expression approximating the Bode plot of the measured coil impedance are used to synthesize an R/L/C model of the magnet. Subsequently, this model is used to simulate the behavior of the LHC main dipole magnet string. Lumped transmission line behavior, impedance, resonance, propagation of the power supply ripple, ramping errors, energy extraction transients and their damping are presented in this paper. (3 refs).
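A minimal sketch of a lumped R/L/C magnet model like the one synthesized above: a series R-L branch (coil resistance and inductance) in parallel with a capacitance, evaluated as a complex impedance for a Bode-style analysis. Component values below are illustrative, not LHC parameters.

```python
import math

def magnet_impedance(f, R, L, C):
    """Impedance at frequency f (Hz) of a lumped magnet model:
    a series R-L branch in parallel with a capacitance C."""
    w = 2 * math.pi * f
    z_rl = R + 1j * w * L          # coil resistance + inductance
    z_c = 1 / (1j * w * C)         # parallel (e.g. turn-to-ground) capacitance
    return z_rl * z_c / (z_rl + z_c)
```

At low frequency the impedance reduces to the coil resistance, and near the parallel resonance f₀ = 1/(2π√(LC)) it peaks sharply, which is the kind of feature a pole/zero fit to the measured Bode plot must capture before chaining many such cells into a string model.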
Explorative methods in linear models
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2004-01-01
The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different feat...... features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression....
Energy Technology Data Exchange (ETDEWEB)
Cester, Francesco; Deitenbeck, Helmuth; Kuentzel, Matthias; Scheuer, Josef; Voggenberger, Thomas
2015-04-15
The overall objective of the project is to develop a general simulation environment for program systems used in reactor safety analysis. The simulation environment provides methods for graphical modeling and evaluation of results for the simulation models. The terms graphical modeling and evaluation of results summarize computerized methods of pre- and postprocessing for the simulation models, which can assist the user in the execution of the simulation steps. The methods comprise CAD ("Computer Aided Design") based input tools, interactive user interfaces for the execution of the simulation, and the graphical representation and visualization of the simulation results. A particular focus was set on the requirements of the system code ATHLET. A CAD tool was developed that allows the specification of the 3D geometry of the plant components and the discretization with a simulation grid. The system provides interfaces to generate the input data of the codes and to export the data for the visualization software. The CAD system was applied to the modeling of a cooling circuit and the reactor pressure vessel of a PWR. For the modeling of complex systems with many components, a general-purpose graphical network editor was adapted and expanded. The editor is able to represent networks with complex topology graphically by suitable building blocks. The network editor has been enhanced and adapted to the modeling of balance-of-plant and thermal fluid systems in ATHLET. For the visual display of the simulation results in the local context of the 3D geometry and the simulation grid, the open source program ParaView is applied, which is widely used for 3D visualization of field data, offering multiple options for displaying and analyzing the data. New methods were developed that allow the necessary conversion of the results of the reactor safety codes and the data of the CAD models. The transformed data may then be imported into ParaView and visualized. The
International Nuclear Information System (INIS)
Kiviranta, Sauli; Saarinen, Hannu; Maekinen, Harri; Krassi, Boris
2011-01-01
A full scale physical test facility, DTP2 (Divertor Test Platform 2), has been established in Finland for demonstrating and refining the Remote Handling (RH) equipment designs for ITER. The first prototype RH equipment at DTP2 is the Cassette Multifunctional Mover (CMM) equipped with the Second Cassette End Effector (SCEE), delivered to DTP2 in October 2008. The purpose is to prove that the CMM/SCEE prototype can be used successfully for the 2nd cassette RH operations. At the end of the F4E grant 'DTP2 test facility operation and upgrade preparation', the RH operations of the 2nd cassette were successfully demonstrated to the representatives of Fusion For Energy (F4E). Due to its design, the CMM/SCEE robot has relatively large mechanical flexibilities when it carries the nine-ton 2nd cassette on a 3.6-m-long lever. This leads to poor absolute accuracy and to a situation where the 3D model used in the control system does not reflect the actual deformed state of the CMM/SCEE robot. To improve the accuracy, a new method has been developed to handle the flexibilities within the control system's virtual environment. The effect of the load on the CMM/SCEE has been measured and minimized in a load compensation model, which is implemented in the control system software. The proposed method accounts for the structural deformations of the robot in the control system through 3D model morphing, utilizing finite element method (FEM) analysis for the morph targets. This resulted in a considerable improvement of the CMM/SCEE absolute accuracy and of the adequacy of the 3D model, which is crucially important in RH applications, where visual information on the controlled device in the surrounding environment is limited.
Model Correction Factor Method
DEFF Research Database (Denmark)
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...
Multivariate analysis: models and method
International Nuclear Information System (INIS)
Sanz Perucha, J.
1990-01-01
Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis
Results of the naive quark model
International Nuclear Information System (INIS)
Gignoux, C.
1987-10-01
The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that with this model, Roper resonances do not emerge. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not reproduced by the model. The lack of mesons shows up in the absence of a long-range force. However, the model does have strengths. It is free from center-of-mass spuriousness and allows a democratic handling of flavor. It has few parameters, and its predictions are very good.
International Nuclear Information System (INIS)
Mahaffy, J.H.; Liles, D.R.; Bott, T.F.
1981-01-01
The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents
Iterative method for Amado's model
International Nuclear Information System (INIS)
Tomio, L.
1980-01-01
A recently proposed iterative method for solving scattering integral equations is applied to the spin doublet and spin quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase-shifts and results are found better than those obtained by using the conventional Pade technique. (Author) [pt
The WOMBAT Attack Attribution Method: Some Results
Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier
In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained by applying it to almost two years of attack traces collected by low-interaction honeypots. This analytical method aims at identifying large-scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in the same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in more depth to demonstrate the meaningfulness of the concept.
ADOxx Modelling Method Conceptualization Environment
Directory of Open Access Journals (Sweden)
Nesat Efendioglu
2017-04-01
Full Text Available The importance of Modelling Methods Engineering is rising along with the importance of domain-specific languages (DSL) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated with use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.
Altmann, Berthold
After a brief summary of the test program (described more fully in LI 000 318), the statistical results, tabulated as overall "ABC (Approach by Concept) relevance ratios" and "ABC recall figures," are presented and reviewed. An abstract model developed in accordance with Max Weber's "Idealtypus" ("Die Objektivitaet…
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1991-01-01
Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well
International Nuclear Information System (INIS)
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-01-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to "address uncertainties and increase confidence in the projected, full-scale mixing performance and operations" in the Waste Treatment and Immobilization Plant (WTP).
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
German precursor study: methods and results
International Nuclear Information System (INIS)
Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.
1985-01-01
This study has been prepared by the GRS by contract of the Federal Minister of Interior. The purpose of the study is to show how the application of system-analytic tools and especially of probabilistic methods on the Licensee Event Reports (LERs) and on other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is Biblis NPP with its very similar Units A and B, whereby the latter was also the reference plant for the German Risk Study
Mechanics of Nanostructures: Methods and Results
Ruoff, Rod
2003-03-01
We continue to develop and use new tools to measure the mechanics and electromechanics of nanostructures. Here we discuss: (a) methods for making nanoclamps and the resulting nanoclamp geometry, chemical composition and type of chemical bonding, and nanoclamp strength (effectiveness as a nanoclamp for the mechanics measurements to be made); (b) mechanics of carbon nanocoils. We have received carbon nanocoils from colleagues in Japan [1], measured their spring constants, and have observed extensions exceeding 100% relative to the unloaded length, using our scanning electron microscope nanomanipulator tool; (c) several new devices that are essentially MEMS-based, which allow for improved measurements of the mechanics of pseudo-1D and planar nanostructures. [1] Zhang M., Nakayama Y., Pan L., Japanese J. Appl. Phys. 39, L1242-L1244 (2000).
Engineering model cryocooler test results
International Nuclear Information System (INIS)
Skimko, M.A.; Stacy, W.D.; McCormick, J.A.
1992-01-01
This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads
Multiband discrete ordinates method: formalism and results
International Nuclear Information System (INIS)
Luneville, L.
1998-06-01
The multigroup discrete ordinates method is a classical way to solve the transport (Boltzmann) equation for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross-section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author)
Models and methods in thermoluminescence
International Nuclear Information System (INIS)
Furetta, C.
2005-01-01
This work presents a lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on analysis of the glow curve shape. (Author)
Models and methods in thermoluminescence
Energy Technology Data Exchange (ETDEWEB)
Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)
2005-07-01
This work presents a lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on analysis of the glow curve shape. (Author)
Burgess, P. M.; Steel, R. J.
2016-12-01
Decoding a history of Earth's surface dynamics from strata requires robust quantitative understanding of supply and accommodation controls. The concept of stratigraphic solution sets has proven useful in this decoding, but application and development of this approach has so far been surprisingly limited. Stratal control volumes, areas and trajectories are new approaches defined here, building on previous ideas about stratigraphic solution sets, to help analyse and understand the sedimentary record of Earth surface dynamics. They may have particular application in reconciling results from outcrop and subsurface analysis with results from analogue and numerical experiments. Stratal control volumes are sets of points in a three-dimensional volume, with axes of subsidence, sediment supply and eustatic rates of change, populated with probabilities derived from analysis of subsidence, supply and eustasy timeseries (Figure 1). These empirical probabilities indicate the likelihood of occurrence of any particular combination of control rates defined by any point in the volume. The stratal control volume can then be analysed to determine which parts of the volume represent relative sea-level fall and rise, where in the volume particular stacking patterns will occur, and how probable those stacking patterns are. For outcrop and subsurface analysis, using a stratal control area with eustasy and subsidence combined on a relative sea-level axis allows similar analysis, and may be preferable. A stratal control trajectory is a history of supply and accommodation creation rates, interpreted from outcrop or subsurface data, or observed in analogue and numerical experiments, and plotted as a series of linked points forming a trajectory through the stratal control volume (Figure 1) or area. Three examples are presented, one from outcrop and two theoretical. Much work remains to be done to build a properly representative database of stratal controls, but careful comparison of stratal
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
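The core idea behind these nonparametric smoothing procedures can be sketched in a few lines. The following is a toy illustration only, not the LOESS/additive-model machinery of the paper: a plain nearest-neighbour moving-average smoother on invented data, showing how a smoother detects a nonlinear input-output relationship that a linear (Pearson) measure misses entirely.

```python
import random
import statistics

def pearson(a, b):
    # Sample Pearson correlation, the "traditional" linear sensitivity measure.
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def smoother_r2(x, y, window=0.1):
    # Nearest-neighbour moving-average smoother; the R^2 of the smooth measures
    # how much of y's variance this single input explains, linear or not.
    n = len(x)
    k = max(2, int(window * n))
    order = sorted(range(n), key=lambda i: x[i])
    fitted = [0.0] * n
    for rank, i in enumerate(order):
        lo = max(0, rank - k // 2)
        neighbours = order[lo:lo + k]
        fitted[i] = statistics.fmean(y[j] for j in neighbours)
    ybar = statistics.fmean(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Purely nonlinear dependence: y = x^2 plus noise on a symmetric input range,
# so the linear correlation is near zero while the smoother recovers the link.
random.seed(1)
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [xi ** 2 + random.gauss(0, 0.05) for xi in x]
print(round(pearson(x, y), 2), round(smoother_r2(x, y), 2))
```

The same contrast is what the paper demonstrates with real performance-assessment output: smoothing-based sensitivity measures stay informative when input-output relationships are nonlinear.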
DEFF Research Database (Denmark)
Deroba, J. J.; Butterworth, D. S.; Methot, R. D.
2015-01-01
The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments to dat...
Project Oriented Immersion Learning: Method and Results
DEFF Research Database (Denmark)
Icaza, José I.; Heredia, Yolanda; Borch, Ole M.
2005-01-01
A pedagogical approach called “project oriented immersion learning” is presented and tested on a graduate online course. The approach combines the Project Oriented Learning method with immersion learning in a virtual enterprise. Students assumed the role of authors hired by a fictitious publishing house that develops digital products including e-books, tutorials, web sites and so on. The students defined the problem that their product was to solve; chose the type of product and the content; and built the product following a strict project methodology. A wiki server was used as a platform to hold...
Learning phacoemulsification. Results of different teaching methods.
Directory of Open Access Journals (Sweden)
Hennig Albrecht
2004-01-01
Full Text Available We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and the visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 (62%) of patients and in 22 (22%) in supervised and unsupervised surgery respectively.
Network modelling methods for FMRI.
Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W
2011-01-15
There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
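The partial-correlation approach that performed well in this comparison can be illustrated with a minimal sketch. The three "node" timeseries below are invented, and the regression-residual route to partial correlation is used for simplicity rather than the regularised inverse-covariance estimators the study evaluates.

```python
import random
import statistics

def corr(a, b):
    # Sample Pearson correlation.
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def residuals(y, z):
    # Least-squares residuals of y after regressing out z (with intercept).
    mz, my = statistics.fmean(z), statistics.fmean(y)
    beta = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) \
        / sum((zi - mz) ** 2 for zi in z)
    return [yi - (my + beta * (zi - mz)) for zi, yi in zip(z, y)]

def partial_corr(x, y, z):
    # Correlation between x and y once the influence of z is removed.
    return corr(residuals(x, z), residuals(y, z))

# Simulated chain of three "nodes": n1 -> n2 -> n3 (timeseries are invented).
random.seed(0)
n1 = [random.gauss(0, 1) for _ in range(5000)]
n2 = [a + random.gauss(0, 0.5) for a in n1]
n3 = [b + random.gauss(0, 0.5) for b in n2]

# Full correlation wrongly suggests a direct n1-n3 link; the partial
# correlation, conditioning on n2, correctly drops to near zero.
print(round(corr(n1, n3), 2), round(partial_corr(n1, n3, n2), 2))
```

This is the sense in which partial correlation helps distinguish direct from indirect connections between network nodes.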
RESULTS OF THE QUESTIONNAIRE: ANALYSIS METHODS
Staff Association
2014-01-01
Five-yearly review of employment conditions Article S V 1.02 of our Staff Rules states that the CERN “Council shall periodically review and determine the financial and social conditions of the members of the personnel. These periodic reviews shall consist of a five-yearly general review of financial and social conditions;” […] “following methods […] specified in § I of Annex A 1”. Then, turning to the relevant part in Annex A 1, we read that “The purpose of the five-yearly review is to ensure that the financial and social conditions offered by the Organization allow it to recruit and retain the staff members required for the execution of its mission from all its Member States. […] these staff members must be of the highest competence and integrity.” And for the menu of such a review we have: “The five-yearly review must include basic salaries and may include any other financial or soc...
Dahlerup-Petersen, K
2001-01-01
Summary form only given, as follows. A long chain of superconducting magnets represents a complex load impedance for the powering and turns into a complex generator during the energy extraction. Detailed information about the circuit is needed for the calculation of a number of parameters and features, which are of vital importance for the choice of powering and extraction equipment and for the prediction of the circuit performance under normal and fault conditions. Constitution of the complex magnet chain impedance is based on a synthesized, electrical model of the basic magnetic elements. This is derived from amplitude and phase measurements of coil and ground impedances from d.c. to 50 kHz and the identification of poles and zeros of the impedance and transfer functions. An electrically compatible RLC model of each magnet type was then synthesized by means of a combination of conventional algorithms. Such models have been elaborated for the final, 15-m long LHC dipole (both apertures in series) as well as ...
Two different hematocrit detection methods: Different methods, different results?
Directory of Open Access Journals (Sweden)
Schuepbach Reto A
2010-03-01
Full Text Available Abstract Background Less is known about the influence of hematocrit detection methodology on transfusion triggers. Therefore, the aim of the present study was to compare two different hematocrit-assessing methods. In a total of 50 critically ill patients, hematocrit was analyzed using (1) a blood gas analyzer (ABLflex 800) and (2) the central laboratory method (ADVIA® 2120), and the results were compared. Findings Bland-Altman analysis for repeated measurements showed a good correlation with a bias of +1.39% and 2 SD of ± 3.12%. The 24%-hematocrit group showed a correlation of r2 = 0.87. With a kappa of 0.56, 22.7% of the cases would have been transfused differently. In the 28%-hematocrit group, with a similar correlation (r2 = 0.8) and a kappa of 0.58, 21% of the cases would have been transfused differently. Conclusions Despite a good agreement between the two methods used to determine hematocrit in clinical routine, the calculated difference of 1.4% might substantially influence transfusion triggers depending on the employed method.
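The Bland-Altman computation of bias and limits of agreement used in this comparison can be sketched as follows. The paired hematocrit readings below are invented for illustration and do not reproduce the study's data.

```python
import statistics

def bland_altman(method_a, method_b):
    # Bias and 95% limits of agreement between two paired measurement methods.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired hematocrit readings (%), one pair per patient.
bga = [24.1, 27.9, 31.2, 22.8, 29.5, 25.6]   # blood gas analyzer
lab = [23.0, 26.2, 30.1, 21.5, 28.0, 24.3]   # central laboratory
bias, (low, high) = bland_altman(bga, lab)
print(f"bias = {bias:+.2f}%, limits of agreement = ({low:.2f}%, {high:.2f}%)")
```

A systematic positive bias like this is exactly what can shift a patient above or below a fixed transfusion trigger depending on which method produced the reading.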
Linkage of PRA models. Phase 1, Results
Energy Technology Data Exchange (ETDEWEB)
Smith, C.L.; Knudsen, J.K.; Kelly, D.L.
1995-12-01
The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining "linking" analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.
Linkage of PRA models. Phase 1, Results
International Nuclear Information System (INIS)
Smith, C.L.; Knudsen, J.K.; Kelly, D.L.
1995-12-01
The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining "linking" analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
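The "critical point of failure" determination described above can be illustrated with a standard articulation-point search on a graph. This is a generic sketch of that graph-theoretic idea, not the patent's actual implementation, and the toy topology is invented.

```python
def articulation_points(adj):
    # Tarjan-style DFS: a vertex is a critical point of failure if removing it
    # disconnects the remaining graph.
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No back-edge from v's subtree climbs above u: u is critical.
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
        # A DFS root is critical only if it has more than one subtree.
        if parent is None and children > 1:
            points.add(u)

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return points

# Toy grid topology: substation "B" is the only link to station "A",
# so removing "B" isolates "A".
grid = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["B", "C"]}
print(sorted(articulation_points(grid)))
```

The same search applies unchanged to communication topologies or fluid distribution networks modeled as adjacency lists.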
Diverse methods for integrable models
Fehér, G.
2017-01-01
This thesis is centered around three topics, sharing integrability as a common theme. This thesis explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.
Visual Display of Scientific Studies, Methods, and Results
Saltus, R. W.; Fedi, M.
2015-12-01
The need for efficient and effective communication of scientific ideas becomes more urgent each year. A growing number of societal and economic issues are tied to matters of science - e.g., climate change, natural resource availability, and public health. Societal and political debate should be grounded in a general understanding of scientific work in relevant fields. It is difficult for many participants in these debates to access science directly because the formal method for scientific documentation and dissemination is the journal paper, generally written for a highly technical and specialized audience. Journal papers are very effective and important for documentation of scientific results and are essential to the requirements of science to produce citable and repeatable results. However, journal papers are not effective at providing a quick and intuitive summary useful for public debate. Just as quantitative data are generally best viewed in graphic form, we propose that scientific studies also can benefit from visual summary and display. We explore the use of existing methods for diagramming logical connections and dependencies, such as Venn diagrams, mind maps, flow charts, etc., for rapidly and intuitively communicating the methods and results of scientific studies. We also discuss a method, specifically tailored to summarizing scientific papers, that we introduced last year at AGU. Our method diagrams the relative importance and connections between data, methods/models, results/ideas, and implications/importance using a single-page format with connected elements in these four categories. Within each category (e.g., data) the spatial location of individual elements (e.g., seismic, topographic, gravity) indicates relative novelty (e.g., are these new data?) and importance (e.g., how critical are these data to the results of the paper?). The goal is to find ways to rapidly and intuitively share both the results and the process of science, both for communication
Interpreting Results from the Multinomial Logit Model
DEFF Research Database (Denmark)
Wulff, Jesper
2015-01-01
This article provides guidelines and illustrates the practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
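The predicted probabilities underlying such graphical interpretation follow directly from the MLM's functional form. In the sketch below, the entry-mode outcome labels and coefficient values are invented for illustration, not estimates from the article.

```python
import math

def mlm_probabilities(x, coefs):
    # Multinomial logit: each non-base outcome has its own (intercept, slope);
    # the base outcome's coefficients are fixed at zero for identification.
    utilities = [0.0] + [b0 + b1 * x for (b0, b1) in coefs]
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entry-mode choice with base = export and two alternatives
# (joint venture, wholly owned); coefficients are purely illustrative.
coefs = [(-0.5, 0.8), (-2.0, 1.5)]
for x in (0.0, 1.0, 2.0):
    probs = mlm_probabilities(x, coefs)
    print(x, [round(p, 3) for p in probs])
```

Plotting such probabilities against a focal predictor, rather than reading raw coefficients, is the kind of graphical interpretation the article recommends, since MLM coefficients alone do not reveal the direction or size of effects on each outcome's probability.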
Immersive visualization of dynamic CFD model results
International Nuclear Information System (INIS)
Comparato, J.R.; Ringel, K.L.; Heath, D.J.
2004-01-01
With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)
Variational methods in molecular modeling
2017-01-01
This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...
Results of steel containment vessel model test
International Nuclear Information System (INIS)
Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.
1998-05-01
A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1993-01-01
This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases
The EURAD model: Design and first results
International Nuclear Information System (INIS)
1989-01-01
The contributions are abridged versions of lectures delivered at the presentation meeting of the EURAD project on 20 and 21 February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project takes one of the possible and necessary ways to seek scientific answers to the questions raised by anthropogenic modification of the atmosphere. One of its objectives is to develop a realistic numerical model of the long-range transport of harmful substances in the troposphere over Europe and to use this model for the investigation of pollutant distribution, but also in support of its experimental study. The EURAD model consists of two parts: a mesoscale meteorological model and a chemical transport model. In the first part of the presentation these parts are introduced and questions concerning the implementation of the entire model on the CRAY X-MP/22 computer system are discussed. Results of the test calculations for the 'Chernobyl' and 'Alpex' cases are then reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes as well as the parametrization of subscale processes within the model are discussed. The presentation concludes with two lectures on emission evaluations and emission scenarios. (orig./KW) [de
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Modelling rainfall erosion resulting from climate change
Kinnell, Peter
2016-04-01
It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
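The USLE mentioned above estimates long-term average annual soil loss as a product of empirical factors. The sketch below only illustrates the multiplicative form of the equation; the factor values are made up for illustration, not data from the presentation.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (e.g. t/ha/yr) as the USLE product of
    rainfall erosivity (R), soil erodibility (K), slope length-steepness
    (LS), cover-management (C) and support-practice (P) factors."""
    return R * K * LS * C * P

# Illustrative factor values only (not from the presentation):
loss = usle_soil_loss(R=1200.0, K=0.03, LS=1.2, C=0.25, P=1.0)
```

Because soil loss is linear in R, a climate-driven change in rainfall erosivity propagates proportionally into the predicted average annual loss.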
INTRAVAL test case 1b - modelling results
International Nuclear Information System (INIS)
Jakob, A.; Hadermann, J.
1991-07-01
This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual-porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual-porosity media models describe the experimental data significantly better than the single-porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm that is responsible for the tailing part of the breakthrough curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
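The Marquardt-Levenberg procedure referred to above can be illustrated with a minimal pure-Python implementation that minimizes a χ² merit function with adaptive damping. The two-parameter exponential used here is a toy stand-in, not the report's dual-porosity transport model.

```python
import math

def levenberg_marquardt(model, jac, t, y, p0, lam=1e-3, iters=200):
    """Minimal Marquardt-Levenberg fit for a 2-parameter model.
    Minimises the chi-square merit function sum((y - model(t, p))**2),
    adapting the damping factor lam after each accepted/rejected step."""
    def chi2(p):
        return sum((yi - model(ti, p)) ** 2 for ti, yi in zip(t, y))

    p, best = list(p0), chi2(p0)
    for _ in range(iters):
        # Damped normal equations J^T J (scaled diagonal) and J^T r, 2x2 case.
        a11 = a12 = a22 = g1 = g2 = 0.0
        for ti, yi in zip(t, y):
            r = yi - model(ti, p)
            j1, j2 = jac(ti, p)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            g1 += j1 * r; g2 += j2 * r
        d11, d22 = a11 * (1.0 + lam), a22 * (1.0 + lam)
        det = d11 * d22 - a12 * a12
        if det == 0.0:
            break
        trial = [p[0] + (g1 * d22 - g2 * a12) / det,
                 p[1] + (g2 * d11 - g1 * a12) / det]
        c = chi2(trial)
        if c < best:                 # accept step, relax damping
            p, best, lam = trial, c, lam / 10.0
        else:                        # reject step, increase damping
            lam *= 10.0
    return p, best

# Toy stand-in problem: fit y = A*exp(-k*t) to noise-free synthetic data.
model = lambda ti, p: p[0] * math.exp(-p[1] * ti)
jac = lambda ti, p: (math.exp(-p[1] * ti),
                     -p[0] * ti * math.exp(-p[1] * ti))
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]
p_fit, chi2_min = levenberg_marquardt(model, jac, t, y, [1.0, 1.0])
```

The returned minimum of the merit function is the goodness-of-fit measure quoted in the abstract; parameter error estimates would come from the inverse of the (undamped) normal matrix at the solution.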
Performance of various mathematical methods for calculation of radioimmunoassay results
International Nuclear Information System (INIS)
Sandel, P.; Vogt, W.
1977-01-01
Interpolation and regression methods are available for computer-aided determination of radioimmunological end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in the weighted and unweighted form, smoothing spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Particular attention was paid to the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.) [de
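An unweighted linear logit-log regression of the kind compared above can be sketched as follows: fit logit(B/B0) = a + b·ln(conc) to the reference curve, then invert the fit to read unknown concentrations off measured responses. The reference-curve numbers are synthetic, not assay data from the study.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def fit_logit_log(conc, b_b0):
    """Unweighted linear logit-log regression of a reference curve:
    logit(B/B0) = a + b*ln(conc). Returns the fitted (a, b)."""
    x = [math.log(c) for c in conc]
    z = [logit(p) for p in b_b0]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
         / sum((xi - mx) ** 2 for xi in x))
    return mz - b * mx, b

def dose_from_response(a, b, p):
    """Invert the fitted curve: read a concentration off a measured B/B0."""
    return math.exp((logit(p) - a) / b)

# Synthetic reference curve generated from a = 1.0, b = -0.8 (illustrative):
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
resp = [1.0 / (1.0 + math.exp(-(1.0 - 0.8 * math.log(c)))) for c in conc]
a, b = fit_logit_log(conc, resp)
```

The weighted variant in the study differs only in giving each standard a weight (typically from the response variance) in the least-squares sums.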
Coherence method of identifying signal noise model
International Nuclear Information System (INIS)
Vavrin, J.
1981-01-01
A noise analysis method is discussed for identifying perturbance models and their parameters by stochastic analysis of the noise in variables measured on a reactor. The correlation analysis is carried out in the frequency domain using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance system model is based on estimates of the relevant spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherence, partial transfer functions, and the power spectral densities of the input and output variables of the noise model are determined from these spectral densities. The applicability of the coherence identification methods was tested on a simple simulated stochastic system. Good agreement was found between the initial analytical frequency filters and the identified transfer functions. (B.S.)
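The magnitude-squared coherence underlying such methods can be estimated by averaging cross- and auto-spectra over data segments. The naive-DFT sketch below is illustrative only (a practical analysis would use windowed FFTs); the signals are simulated, not reactor data.

```python
import cmath, math, random

def dft(seg):
    """Naive discrete Fourier transform of one segment (O(n^2), demo only)."""
    n = len(seg)
    return [sum(seg[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def coherence(x, y, nseg=8):
    """Magnitude-squared coherence |Pxy|^2/(Pxx*Pyy), with the cross- and
    auto-spectra averaged over nseg non-overlapping segments (no window)."""
    n = len(x) // nseg
    pxx, pyy, pxy = [0.0] * n, [0.0] * n, [0j] * n
    for s in range(nseg):
        X = dft(x[s * n:(s + 1) * n])
        Y = dft(y[s * n:(s + 1) * n])
        for k in range(n):
            pxx[k] += abs(X[k]) ** 2
            pyy[k] += abs(Y[k]) ** 2
            pxy[k] += X[k] * Y[k].conjugate()
    return [abs(pxy[k]) ** 2 / (pxx[k] * pyy[k]) for k in range(n)]

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(256)]
c_lin = coherence(x, [2.0 * v for v in x])               # linearly related
c_noisy = coherence(x, [v + random.gauss(0.0, 1.0) for v in x])
```

A coherence near 1 indicates a linear, noise-free relation at that frequency; uncorrelated noise in the output pulls the estimate below 1, which is what makes coherence useful for separating perturbance paths.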
PALEOEARTHQUAKES IN THE PRIBAIKALIE: METHODS AND RESULTS OF DATING
Directory of Open Access Journals (Sweden)
Oleg P. Smekalin
2010-01-01
Full Text Available In the Pribaikalie and adjacent territories, seismogeological studies have been under way for almost half a century and have resulted in the discovery of more than 70 dislocations of seismic or presumably seismic origin. With the commencement of paleoseismic studies, the dating of paleo-earthquakes became a focus, as an indicator useful for long-term prediction of strong earthquakes. V.P. Solonenko [Solonenko, 1977] distinguished five methods for dating paleoseismogenic deformations: geological, engineering-geological, historico-archeological, dendrochronological and radiocarbon methods. However, the ages of the majority of seismic deformations studied at the initial stage of the development of seismogeology in Siberia were defined by methods of relative or correlative age determination. Since the 1980s, studies of seismogenic deformation in the Pribaikalie have been widely conducted with trenching. Mass sampling, followed by radiocarbon analyses and determination of absolute ages of paleo-earthquakes, provided new data on the seismic regime of the territory and the rates of recent displacements along active faults, and enhanced the validity of methods of relative dating, in particular morphometry. The capabilities of the morphometry method have significantly increased with the introduction of laser techniques in surveys and digital processing of 3D relief models. Comprehensive seismogeological studies conducted in the Pribaikalie revealed 43 paleo-events within 16 seismogenic structures. The absolute ages of 18 paleo-events were defined by the radiocarbon age determination method. Judging by their ages, a number of dislocations were related to historical earthquakes which occurred in the 18th and 19th centuries, yet no reliable data on the epicenters of such events are available. The absolute and relative dating methods allowed us to identify sections in some paleoseismogenic structures by differences in ages of activation and thus provided new data for
Discussion of gas trade model (GTM) results
International Nuclear Information System (INIS)
Manne, A.
1989-01-01
This is in response to your invitation to comment on the structure of GTM and also on the differences between its results and those of other models participating in EMF9. First, a word on the structure. GTM was originally designed to provide both regional and sectoral detail within the North American market for natural gas at a single point in time, e.g. the year 2000. It is a spatial equilibrium model in which a solution is obtained by maximizing a nonlinear function, the sum of consumers' and producers' surplus. Since transport costs are included in producers' costs, this formulation automatically ensures that geographical price differentials will not differ by more than transport costs. For the purposes of EMF9, GTM was modified to allow for resource development and depletion over time
The Danish national passenger model – Model specification and results
DEFF Research Database (Denmark)
Rich, Jeppe; Hansen, Christian Overgaard
2016-01-01
The paper describes the structure of the new Danish National Passenger Model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline function is compared with more traditional...
Superconducting solenoid model magnet test results
Energy Technology Data Exchange (ETDEWEB)
Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I. (Fermilab)
2006-08-01
Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O.; Gordiyenko, T.
2015-01-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes and to the analysis of technical measurement results. An analysis of these standards and guides, and of their recommendations for applying statistical methods to the estimation of measurement results in laboratories, is presented. To carry out this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed
Scale Model Thruster Acoustic Measurement Results
Vargas, Magda; Kenny, R. Jeremy
2013-01-01
The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be composed of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.
CMS standard model Higgs boson results
Directory of Open Access Journals (Sweden)
Garcia-Abia Pablo
2013-11-01
Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.
Modelling Extortion Racket Systems: Preliminary Results
Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria
Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics a matter of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.
New results in the Dual Parton Model
International Nuclear Information System (INIS)
Van, J.T.T.; Capella, A.
1984-01-01
In this paper, the similarity between the x distributions for particle production observed in e+e− collisions and the fragmentation functions measured in deep inelastic scattering is presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions. 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topologies contribute to the cross sections in a way that preserves unitarity. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts in the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
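The simplest way to combine several base classifiers into an ensemble is plurality voting over their predicted labels. The sketch below illustrates that generic idea only; it is not the paper's GMC algorithm, and the classifier outputs are invented.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by plurality vote.
    predictions: one equal-length list of labels per base classifier."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*predictions)]

# Three hypothetical base classifiers labelling five samples:
clf_a = [0, 1, 1, 0, 1]
clf_b = [0, 1, 0, 0, 1]
clf_c = [1, 1, 1, 0, 0]
combined = majority_vote([clf_a, clf_b, clf_c])
```

Voting can outperform any single member when the base classifiers make largely independent errors, which is the intuition behind ensemble improvements of the kind the paper reports.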
Finiteness results for Abelian tree models
Draisma, J.; Eggermont, R.H.
2015-01-01
Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the
EQUITY SHARES EQUATING THE RESULTS OF FCFF AND FCFE METHODS
Directory of Open Access Journals (Sweden)
Bartłomiej Cegłowski
2012-06-01
Full Text Available The aim of the article is to present a method of establishing equity shares in the weighted average cost of capital (WACC), in which the value of loan capital results from fixed assumptions accepted in the financial plan (for example, a schedule of loan repayments) and equity is valued by means of a discount method. With the described method, regardless of whether cash flows are calculated as FCFF or FCFE, the result of the company valuation will be identical.
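The claimed equivalence can be checked numerically: discounting FCFF at a WACC whose equity/debt weights come from the discounted values themselves (a circular dependence, resolved here by fixed-point iteration) gives the same equity value as discounting FCFE at the cost of equity. The growing-perpetuity setting and all figures below are invented for illustration and are not from the article.

```python
def equity_value_fcff(fcff, debt, ke, kd, tax, g, iters=200):
    """Fixed-point iteration for the circular WACC/weights dependence:
    firm value V = FCFF/(WACC - g), equity E = V - debt, where the WACC
    uses the market-value weights E/V and debt/V (growing perpetuity)."""
    E = fcff / (ke - g)                     # crude starting guess
    for _ in range(iters):
        V = E + debt
        wacc = (E / V) * ke + (debt / V) * kd * (1.0 - tax)
        E = fcff / (wacc - g) - debt
    return E

# Illustrative inputs: FCFF, debt value, cost of equity ke, cost of debt kd,
# tax rate, perpetual growth g.
fcff, debt, ke, kd, tax, g = 100.0, 400.0, 0.12, 0.06, 0.19, 0.02
E_fcff = equity_value_fcff(fcff, debt, ke, kd, tax, g)

# FCFE route: strip after-tax interest, add net borrowing at growth rate g.
fcfe = fcff - kd * (1.0 - tax) * debt + g * debt
E_fcfe = fcfe / (ke - g)
```

Algebraically, substituting the converged WACC into V = FCFF/(WACC − g) and subtracting debt reduces exactly to E = FCFE/(ke − g), which is why the two routes agree once market-value weights are used.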
New method of scoliosis assessment: preliminary results using computerized photogrammetry.
Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio
2011-09-01
A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry, as a nonradiographic method for the quantification of scoliosis, and to mathematically relate this proposed method to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed until now has gained popularity as a routine method of evaluation, mainly due to a low correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs who were willing to participate in this study were submitted to dorsal digital photography in the orthostatic position with special surface markers over the spinous processes, specifically of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, trained in the quantification of scoliosis for the types of images received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis and were aged 21.4 ± 6.1 years, weighed 52.9 ± 5.8 kg, measured 1.63 ± 0.05 m in height, and had a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of the two methods, and a mathematical relationship was formulated between them. The preliminary results demonstrate equivalence between the two methods. More studies are needed to firmly assess the potential of this new method as a coadjuvant tool in the routine follow-up of scoliosis treatment.
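In coordinates, the Cobb construction underlying both methods reduces to the angle between two lines: one through the endplate of the upper end vertebra, one through the lower. The sketch below shows that geometry with hypothetical landmark coordinates; it is not the study's photogrammetric protocol.

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Cobb angle (degrees) between the line through p1-p2 (upper end
    vertebra endplate) and the line through q1-q2 (lower end vertebra),
    from 2-D landmark coordinates."""
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    ang = abs(math.degrees(a - b)) % 180.0
    return min(ang, 180.0 - ang)   # lines, not rays: fold to [0, 90]

# Hypothetical landmarks: endplates tilted 45 degrees apart.
angle = cobb_angle((0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 1.0))
```

Any mathematical relation between a surface-marker angle and the radiographic Cobb angle would be calibrated empirically, as the study describes.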
Engineering Glass Passivation Layers -Model Results
Energy Technology Data Exchange (ETDEWEB)
Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.
2011-08-08
The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Long-term tests and extrapolations from ancient analogues have shown that well-designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence for this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. Drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate-, sulfate-, and phosphate-rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solution vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion-concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan
Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP
Directory of Open Access Journals (Sweden)
F. Pattyn
2012-05-01
Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady-state grounding line positions exist for ice sheets on a downward-sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady-state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress-resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models that generally perform poorly at coarse resolution. Fixed grid models with nested grid representations of the grounding line are able to generate accurate steady-state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.
CIEMAT model results for Esthwaite Water
International Nuclear Information System (INIS)
Aguero, A.; Garcia-Olivares, A.
2000-01-01
This study used the transfer model PRYMA-LO, developed by CIEMAT-IMA, Madrid, Spain, to simulate the transfer of Cs-137 in watershed scenarios. The main processes considered by the model include: transfer of the fallout to the ground, incorporation of the fallout radioisotopes into the water flow, and their removal from the system. The model was tested against observation data obtained in water and sediments of Esthwaite Water, Lake District, UK. This comparison made it possible to calibrate the parameters of the model to the specific scenario
Analytical methods used at model facility
International Nuclear Information System (INIS)
Wing, N.S.
1984-01-01
A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy
Convergence results for a class of abstract continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2004-03-01
Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is $\sigma$-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.
Energy Technology Data Exchange (ETDEWEB)
Stephen A. Holditch; Emrys Jones
2002-09-01
In 2000, Chevron began a project to learn how to characterize the natural gas hydrate deposits in the deepwater portions of the Gulf of Mexico. A Joint Industry Participation (JIP) group was formed in 2001, and a project partially funded by the U.S. Department of Energy (DOE) began in October 2001. The primary objective of this project is to develop technology and data to assist in the characterization of naturally occurring gas hydrates in the deepwater Gulf of Mexico. These naturally occurring gas hydrates can cause problems relating to drilling and production of oil and gas, as well as building and operating pipelines. Other objectives of this project are to better understand how natural gas hydrates can affect seafloor stability, to gather data that can be used to study climate change, and to determine how the results of this project can be used to assess if and how gas hydrates act as a trapping mechanism for shallow oil or gas reservoirs. As part of the project, three workshops were held. The first was a data collection workshop, held in Houston during March 14-15, 2002. The purpose of this workshop was to find out what data exist on gas hydrates and to begin making that data available to the JIP. The second and third workshops, on Geoscience and Reservoir Modeling and on Drilling and Coring Methods, respectively, were held simultaneously in Houston during May 9-10, 2002. The Modeling Workshop was conducted to find out what data the various engineers, scientists and geoscientists want the JIP to collect in both the field and the laboratory. The Drilling and Coring workshop was held to begin planning how to collect the data required by the project's principal investigators.
Life cycle analysis of electricity systems: Methods and results
International Nuclear Information System (INIS)
Friedrich, R.; Marheineke, T.
1996-01-01
The two methods for full energy chain analysis, process analysis and input/output analysis, are discussed. A combination of these two methods provides the most accurate results. Such a hybrid analysis of the full energy chains of six different power plants is presented and discussed. The results of such analyses depend on the time, site and technique of each process step and therefore have no general validity. For renewable energy systems the emissions from the generation of a back-up system should be added. (author). 7 figs.
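The input/output half of such a hybrid analysis rests on the Leontief inverse: the total sectoral output x needed to satisfy a final demand y is x = (I - A)^-1 y, so upstream emissions along the whole chain are captured automatically. A minimal sketch with an invented three-sector economy (all coefficients and emission factors below are illustrative assumptions, not data from the article):

```python
import numpy as np

# Toy 3-sector technology matrix (invented for illustration).
# A[i, j] = input from sector i needed per unit output of sector j.
A = np.array([
    [0.1, 0.2, 0.0],   # energy
    [0.0, 0.1, 0.3],   # materials
    [0.1, 0.0, 0.1],   # services
])
# Direct emissions per unit sectoral output (kg CO2 / unit, invented).
e = np.array([5.0, 2.0, 0.5])

# One unit of final demand for "materials".
y = np.array([0.0, 1.0, 0.0])
x = np.linalg.solve(np.eye(3) - A, y)  # Leontief inverse applied to y
total = e @ x                          # direct + all upstream emissions
direct = e[1] * 1.0                    # what a truncated process analysis sees
print(direct, total)
```

The gap between `direct` and `total` is exactly the truncation error that motivates combining process analysis with input/output analysis.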
Flying Training Capacity Model: Initial Results
National Research Council Canada - National Science Library
Lynch, Susan
2005-01-01
OBJECTIVE: (1) Determine the flying training capacity for six bases: Sheppard AFB, Randolph AFB, Moody AFB, Columbus AFB, Laughlin AFB, and Vance AFB. (2) Develop a versatile flying training capacity simulation model for AETC...
Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela
2014-04-22
In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. In this explanatory design, comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements.
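The difference-in-difference estimator at the core of the quantitative component is simple to state: the impact is the change in the treated group minus the change in the control group, which nets out any secular trend common to both. A sketch with invented numbers (these are not study data):

```python
# Difference-in-differences: causal impact = (treated change) - (control change).
# The facility delivery rates below are illustrative placeholders only.
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Impact estimate netting out the trend shared with the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

impact = did(treat_pre=55.0, treat_post=70.0, ctrl_pre=54.0, ctrl_post=60.0)
print(impact)  # 9.0: the treated group improved 9 points beyond the common trend
```

The identifying assumption is parallel trends: absent the intervention, both groups would have changed by the same amount.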
Alternative methods of modeling wind generation using production costing models
International Nuclear Information System (INIS)
Milligan, M.R.; Pang, C.K.
1996-08-01
This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data, and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models.
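The reason the two approaches agree on annual cost can be seen in a toy merit-order dispatch (the load profile and unit data below are invented for illustration): sorting the load into a duration curve preserves total energy, and a memoryless hour-by-hour dispatch gives the same cost either way. What the LDC view loses is chronology, which is exactly what unit commitment, ramping, and wind correlation questions need.

```python
# Toy 6-hour system contrasting LDC and chronological views of the same load.
loads = [300, 500, 700, 650, 400, 350]   # MW per hour (invented)
units = [(400, 10.0), (400, 20.0)]       # (capacity MW, $/MWh), merit order

def dispatch_cost(profile):
    """Stack units cheapest-first against each hour's load (no commitment logic)."""
    total = 0.0
    for load in profile:
        remaining = load
        for cap, price in units:
            used = min(cap, remaining)
            total += used * price
            remaining -= used
    return total

ldc = sorted(loads, reverse=True)        # load duration curve view of the same data
print(dispatch_cost(loads), dispatch_cost(ldc))  # identical for pure merit order
```

A chronological model would additionally track start-ups and minimum up/down times across consecutive hours, which is where the two approaches diverge.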
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
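Blind-test-case comparisons of diagnostic methods typically reduce to metrics of this kind computed over labelled cases; the sketch below shows generic detection and false-alarm rates (these are illustrative definitions, not the specific ProDiMES metrics).

```python
# Generic benchmark metrics from labelled blind test cases (illustrative).
def rates(truth, flagged):
    """Detection rate and false-alarm rate over labelled cases."""
    tp = sum(1 for t, f in zip(truth, flagged) if t and f)
    fp = sum(1 for t, f in zip(truth, flagged) if not t and f)
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, fp / neg

truth   = [1, 1, 0, 0, 1, 0, 0, 1]   # 1 = fault present in the case
flagged = [1, 0, 0, 1, 1, 0, 0, 1]   # diagnostic method output
det, fa = rates(truth, flagged)
print(det, fa)  # 0.75 0.25
```

Reporting both rates on a common case set is what makes competing diagnostic methods directly comparable.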
Energy models: methods and trends
Energy Technology Data Exchange (ETDEWEB)
Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria); Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany); Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)
1997-12-31
Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning. 2 figs., 19 refs.
Energy models: methods and trends
International Nuclear Information System (INIS)
Reuter, A.; Kuehner, R.; Wohlgemuth, N.
1996-01-01
Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning.
Graphical interpretation of numerical model results
International Nuclear Information System (INIS)
Drewes, D.R.
1979-01-01
Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements
Mathematical methods and models in composites
Mantic, Vladislav
2014-01-01
This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover
A Fuzzy Logic Based Method for Analysing Test Results
Directory of Open Access Journals (Sweden)
Le Xuan Vinh
2017-11-01
Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance, network errors and troubleshooting is very important. Meaningful test results allow the operators to evaluate network performance, identify any shortcomings, and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
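A bottom-up fuzzy aggregation of test results, in the spirit of the abstract above, can be sketched as follows. The membership functions, linguistic grades, and weights here are invented for illustration; they are not the ones defined in the paper.

```python
# Minimal fuzzy-logic aggregation sketch (all parameters are assumptions).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    """Map a raw test score in [0, 100] to linguistic grade memberships."""
    return {
        "poor": tri(score, -1, 0, 50),
        "fair": tri(score, 25, 50, 75),
        "good": tri(score, 50, 100, 101),
    }

GRADE_VALUES = {"poor": 20.0, "fair": 50.0, "good": 90.0}

def defuzzify(memberships):
    """Centroid-style defuzzification over the grade values."""
    den = sum(memberships.values())
    if den == 0.0:
        return 0.0
    return sum(m * GRADE_VALUES[g] for g, m in memberships.items()) / den

# Aggregate two lower-level test results (e.g. stability, error rate) by a
# weighted average of their defuzzified grades -- one simple choice of operator.
overall = 0.6 * defuzzify(fuzzify(80)) + 0.4 * defuzzify(fuzzify(40))
```

Stacking such aggregation steps hierarchically, from individual test cases up to function blocks and then the whole network, is what turns heterogeneous test results into a single quantitative description.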
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....
Ignalina NPP Safety Analysis: Models and Results
International Nuclear Information System (INIS)
Uspuras, E.
1999-01-01
The research directions of the scientific safety analysis group, linked to safety assessment of the Ignalina NPP, are presented: thermal-hydraulic analysis of accidents and operational transients; thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; structural analysis of plant components, piping and other parts of the Main Circulation Circuit; assessment of the RBMK-1500 reactor core; and others. The models and the main work carried out last year are described. (author)
Evaluating rehabilitation methods - some practical results from Rum Jungle
International Nuclear Information System (INIS)
Ryan, P.
1987-01-01
Research and analysis of the following aspects of rehabilitation have been conducted at the Rum Jungle mine site over the past three years: drainage structure stability; rock batter stability; soil fauna; tree growth in compacted soils; rehabilitation costs. The results show that, for future rehabilitation projects adopting refined methods, attention to final construction detail and biospheric influences is most important. The mine site offers a unique opportunity to evaluate the success of a variety of rehabilitation methods, to the benefit of the industry in Australia and overseas. It is intended that practical, economic research will continue for some considerable time.
Quantifying the measurement uncertainty of results from environmental analytical methods.
Moser, J; Wegscheider, W; Sperka-Gottlieb, C
2001-07-01
The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
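The combination step in the Eurachem/CITAC approach is mechanically simple: independent standard uncertainty components combine as a root sum of squares, and, as the abstract notes, the reproducibility term from actual method practice usually dominates the budget. A sketch with illustrative component values (not taken from the paper):

```python
import math

# Root-sum-of-squares combination of independent relative standard
# uncertainties, as in the Eurachem/CITAC Guide. Component values are
# illustrative placeholders.
def combined_uncertainty(components):
    """Combined standard uncertainty for independent relative components."""
    return math.sqrt(sum(u ** 2 for u in components))

# e.g. calibration, recovery, and reproducibility (the dominant term):
u_c = combined_uncertainty([0.02, 0.03, 0.10])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 % level)
```

Because the terms add in quadrature, the 0.10 reproducibility component contributes almost the entire result; refining the small components barely changes the budget.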
New method dynamically models hydrocarbon fractionation
Energy Technology Data Exchange (ETDEWEB)
Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)
1995-10-01
A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C{sub 3} splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component-aggregation and tray-aggregation to contract the equations without compromising accuracy. This contraction increases the stability of the N-R calculations. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation--directly or via multivariable controllers--based on a rigorous, phenomenological column model.
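The Newton-Raphson iteration at the heart of such a scheme solves a coupled nonlinear system by repeatedly linearizing it with the Jacobian. The tiny 2x2 system below is purely illustrative; the article applies the same iteration, at much larger scale, to the aggregated finite-difference balance equations.

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by Newton-Raphson with an explicit Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))  # linearized correction step
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative coupled system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_raphson(F, J, [2.0, 0.5])
```

Aggregating components and trays shrinks the dimension of the Jacobian solve, which is why the contraction described in the abstract speeds up and stabilizes the iteration.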
Studies of LMFBR: method of analysis and some results
International Nuclear Information System (INIS)
Ishiguro, Y.; Dias, A.F.; Nascimento, J.A. do.
1983-01-01
Some results of recent studies of LMFBR characteristics are summarized. A two-dimensional model of the LMFBR is taken from a publication and used as the base model for the analysis. Axial structures are added to the base model and a three-dimensional (Δ - Z) calculation has been done. Two-dimensional (Δ and RZ) calculations are compared with the three-dimensional and published results. The eigenvalue, flux and power distributions, breeding characteristics, control rod worth, sodium-void and Doppler reactivities are analysed. Calculations are done by CITATION using six-group cross sections collapsed regionwise by EXPANDA in one-dimensional geometries from the 70-group JFS library. Burnup calculations of a simplified thorium-cycle LMFBR have also been done in the RZ geometry. Principal results of the studies are: (1) the JFS library appears adequate for predicting overall characteristics of an LMFBR, (2) the sodium void reactivity is negative within - 25 cm from the outer boundary of the core, (3) the half-life of Pa-233 must be considered explicitly in burnup analyses, and (4) two-dimensional (RZ and Δ) calculations can be used iteratively to analyze three-dimensional reactor systems. (Author) [pt
Microplasticity of MMC. Experimental results and modelling
International Nuclear Information System (INIS)
Maire, E.; Lormand, G.; Gobin, P.F.; Fougeres, R.
1993-01-01
The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of thermally induced dislocations. (orig.)
Microplasticity of MMC. Experimental results and modelling
Energy Technology Data Exchange (ETDEWEB)
Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))
1993-11-01
The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of thermally induced dislocations. (orig.).
[Adverse events management. Methods and results of a development project].
Rabøl, Louise Isager; Jensen, Elisabeth Brøgger; Hellebek, Annemarie H; Pedersen, Beth Lilja
2006-11-27
This article describes the methods and results of a project in the Copenhagen Hospital Corporation (H:S) on preventing adverse events. The aim of the project was to raise awareness about patients' safety, test a reporting system for adverse events, develop and test methods of analysis of events and propagate ideas about how to prevent adverse events. H:S developed an action plan and a reporting system for adverse events, founded an organization and developed an educational program on theories and methods of learning from adverse events for both leaders and employees. During the three-year period from 1 January 2002 to 31 December 2004, the H:S staff reported 6011 adverse events. In the same period, the organization completed 92 root cause analyses. More than half of these dealt with events that had been optional to report, the other half events that had been mandatory to report. The number of reports and the front-line staff's attitude towards reporting shows that the H:S succeeded in founding a safety culture. Future work should be centred on developing and testing methods that will prevent adverse events from happening. The objective is to suggest and complete preventive initiatives which will help increase patient safety.
Processing method and results of meteor shower radar observations
International Nuclear Information System (INIS)
Belkovich, O.I.; Suleimanov, N.I.; Tokhtasjev, V.S.
1987-01-01
Studies of meteor showers permit the solving of some principal problems of meteor astronomy: to obtain the structure of a stream in cross section and along its orbits; to retrace the evolution of particle orbits of the stream taking into account gravitational and nongravitational forces and to discover the orbital elements of its parent body; to find out the total mass of solid particles ejected from the parent body taking into account physical and chemical evolution of meteor bodies; and to use meteor streams as natural probes for investigation of the average characteristics of the meteor complex in the solar system. A simple and effective method of determining the flux density and mass exponent parameter was worked out. This method and its results are discussed
Method of vacuum correlation functions: Results and prospects
International Nuclear Information System (INIS)
Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.
2006-01-01
Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s
Application of NUREG-1150 methods and results to accident management
International Nuclear Information System (INIS)
Dingman, S.; Sype, T.; Camp, A.; Maloney, K.
1991-01-01
The use of NUREG-1150 and similar probabilistic risk assessments in the Nuclear Regulatory Commission (NRC) and industry risk management programs is discussed. Risk management is more comprehensive than the commonly used term accident management. Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed
Application of NUREG-1150 methods and results to accident management
International Nuclear Information System (INIS)
Dingman, S.; Sype, T.; Camp, A.; Maloney, K.
1990-01-01
The use of NUREG-1150 and similar Probabilistic Risk Assessments in NRC and industry risk management programs is discussed. "Risk management" is more comprehensive than the commonly used term "accident management." Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed. 2 refs., 3 figs
The Accident Sequence Precursor program: Methods improvements and current results
International Nuclear Information System (INIS)
Minarick, J.W.; Manning, F.M.; Harris, J.D.
1987-01-01
Changes in the US NRC Accident Sequence Precursor program methods since the initial program evaluations of 1969-81 operational events are described, along with insights from the review of 1984-85 events. For 1984-85, the number of significant precursors was consistent with the number observed in 1980-81, dominant sequences associated with significant events were reasonably consistent with PRA estimates for BWRs, but lacked the contribution due to small-break LOCAs previously observed and predicted in PWRs, and the frequency of initiating events and non-recoverable system failures exhibited some reduction compared to 1980-81. Operational events which provide information concerning additional PRA modeling needs are also described
Circulation in the Gulf of Trieste: measurements and model results
International Nuclear Information System (INIS)
Bogunovici, B.; Malacic, V.
2008-01-01
The study presents the seasonal variability of currents in the southern part of the Gulf of Trieste. A time series analysis of currents and wind stress for the period 2003-2006, which were measured by the coastal oceanographic buoy, was conducted. A comparison between these data and results obtained from a numerical model of circulation in the Gulf was performed to validate the model results. Three different approaches were applied to the wind data to determine the wind stress. Similarities were found between the Kondo and Smith approaches, while the method of Vera shows differences which were particularly noticeable for lower (≤ 1 m/s) and higher wind speeds (≥ 15 m/s). Mean currents in the surface layer are generally outflow currents from the Gulf due to wind forcing (bora). However, in all other depth layers inflow currents are dominant. With principal component analysis (PCA), major and minor axes were determined for all seasons. The major axis of maximum variance in the years between 2003 and 2006 prevails in the NE-SW direction, which is parallel to the coastline. Comparison of observations and model results shows that the currents are similar in direction for the surface and bottom layers but are significantly different for the middle layer (5-13 m). At depths between 14-21 m velocities are comparable in direction as well as in magnitude, even though the model values are higher. The higher values of modelled currents at the surface and near the bottom are explained by the higher values of wind stress that were used in the model as driving input with respect to the stress calculated from the measured winds. The larger values of modelled currents near the bottom are related to the larger inflow that needs to compensate for the larger modelled outflow at the surface. However, inspection of the vertical structure of temperature, salinity and density shows that the model reproduces a weaker density gradient, which enables the penetration of the outflow surface currents to larger depths.
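The PCA step used to find the major and minor variance axes of the currents reduces to an eigendecomposition of the (u, v) velocity covariance matrix. A sketch on synthetic data with a known NE-SW principal direction standing in for the buoy record (the orientation and variances are invented for the test):

```python
import numpy as np

# Synthetic (u, v) current velocities with a known 45-degree (NE-SW)
# along-shore axis; values stand in for the buoy time series.
rng = np.random.default_rng(0)
along = rng.normal(0.0, 10.0, 500)        # strong along-shore variance
cross = rng.normal(0.0, 2.0, 500)         # weak cross-shore variance
theta = np.deg2rad(45.0)
u = along * np.cos(theta) - cross * np.sin(theta)
v = along * np.sin(theta) + cross * np.cos(theta)

# PCA: eigendecomposition of the velocity covariance matrix.
cov = np.cov(np.column_stack([u, v]), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
major = eigvecs[:, -1]                    # direction of maximum variance
major_deg = np.degrees(np.arctan2(major[1], major[0])) % 180.0
```

The eigenvector of the largest eigenvalue recovers the imposed 45-degree axis, mirroring how the study extracts the coastline-parallel major axis from the measured currents.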
Structural modeling techniques by finite element method
International Nuclear Information System (INIS)
Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong
1991-01-01
This book includes an introduction, a table of contents, and three chapters. Chapter 1, Finite element idealization: introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and antisymmetry; modeling guidelines; local analysis; example; references. Chapter 2, Static analysis: structural geometry; finite element models; analysis procedure; modeling guidelines; references. Chapter 3, Dynamic analysis: models for dynamic analysis; dynamic analysis procedures; modeling guidelines.
Computer-Aided Modelling Methods and Tools
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...
Assessing Internet energy intensity: A review of methods and results
Energy Technology Data Exchange (ETDEWEB)
Coroama, Vlad C., E-mail: vcoroama@gmail.com [Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Hilty, Lorenz M. [Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich (Switzerland); Empa, Swiss Federal Laboratories for Materials Science and Technology, Lerchenfeldstr. 5, 9014 St. Gallen (Switzerland); Centre for Sustainable Communications, KTH Royal Institute of Technology, Lindstedtsvägen 5, 100 44 Stockholm (Sweden)
2014-02-15
Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices
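The top-down approach and the system-boundary effect described above amount to simple arithmetic; the sketch below uses illustrative round numbers (not values from the review) to show how including end devices in the boundary shifts the resulting intensity.

```python
# Top-down Internet energy intensity: total electricity divided by total
# traffic for a reference year. All figures below are assumed for
# illustration only; they are not estimates from the reviewed studies.
network_energy_kwh = 2.0e11     # assumed annual energy of the network itself
end_device_energy_kwh = 6.0e11  # assumed annual energy of end devices
traffic_gb = 5.0e11             # assumed annual Internet traffic

narrow = network_energy_kwh / traffic_gb                          # network only
wide = (network_energy_kwh + end_device_energy_kwh) / traffic_gb  # with end devices

print(f"network-only boundary: {narrow:.2f} kWh/GB")  # 0.40
print(f"incl. end devices:     {wide:.2f} kWh/GB")    # 1.60
```

Even with identical traffic figures, the boundary choice changes the result by a factor of four here, which illustrates divergence factor (b) in the abstract.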
Assessing Internet energy intensity: A review of methods and results
International Nuclear Information System (INIS)
Coroama, Vlad C.; Hilty, Lorenz M.
2014-01-01
Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on their combination, and little has been written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...
Mechatronic Systems Design Methods, Models, Concepts
Janschek, Klaus
2012-01-01
In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...
Methods of computing the results of monitoring the exploratory gallery
Directory of Open Access Journals (Sweden)
Krúpa Víazoslav
2000-09-01
At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploratory gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical data. This survey gathers the information needed for the envisaged use of a full-profile tunnelling machine to drive the motorway tunnel. In the part of the exploratory gallery driven by the TBM method, comprehensive information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. The monitoring system, based on an industrial PC 104 computer, records four basic values of the driving process: the electromotor power of the Voest-Alpine ATB 35HA driving machine, the advance rate, the rotation speed of the TBM cutterhead, and the total thrust. The thrust force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are calculated mathematically; these values characterize the rock mass properties and their changes. The effectiveness of the driving process is expressed through the specific energy and the working ability of the cutterhead. The article defines the method of computing the gathered monitoring information, prepared for the Voest-Alpine ATB 35H driving machine at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created in an EXCEL program, and shows selected samples of the graphical processing of the first monitoring results obtained from the exploratory gallery of the Višňové-Dubná skala motorway tunnel.
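The specific energy mentioned above is commonly defined as cutterhead power divided by the volume excavation rate (advance rate times face area). The sketch below uses that common definition with assumed numbers; neither the formula choice nor the values come from the article itself.

```python
import math

# Illustrative TBM monitoring values (assumed, not measured data):
power_kw = 350.0        # cutterhead electromotor power
advance_m_per_h = 1.2   # advance rate
diameter_m = 3.5        # gallery diameter

# Specific energy = power / (advance rate x excavated face area).
face_area_m2 = math.pi * diameter_m**2 / 4.0
volume_rate_m3_per_h = advance_m_per_h * face_area_m2
specific_energy_kwh_per_m3 = power_kw / volume_rate_m3_per_h

print(f"specific energy: {specific_energy_kwh_per_m3:.1f} kWh/m^3")
```

Tracking this quantity along the gallery is one way changes in rock-mass properties show up in the monitoring record.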
Storm-time ring current: model-dependent results
Directory of Open Access Journals (Sweden)
N. Yu. Ganushkina
2012-01-01
The main point of the paper is to investigate how much the modeled ring current depends on the representations of magnetic and electric fields and on the boundary conditions used in simulations. Two storm events are modeled: one moderate (SymH minimum of −120 nT, 6–7 November 1997) and one intense (SymH minimum of −230 nT, 21–22 October 1999). A rather simple ring current model, the Inner Magnetosphere Particle Transport and Acceleration Model (IMPTAM), is employed in order to make the results most evident. Four different magnetic field representations, two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring currents, and, therefore, the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE makes it possible to account for the particles in the transition region (between dipole and stretched field lines) that form the partial ring current and the near-Earth tail current in that region. Calculating the model SymH* by the Biot-Savart law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in regions with a nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.
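The DPS relation referred to above links the equatorial field depression linearly to the total kinetic energy of the ring current particles. A minimal sketch follows; the dipole field energy and surface field value are textbook approximations, not values from the paper.

```python
# Dessler-Parker-Sckopke (DPS) relation:
#   delta_B / B0 = -(2/3) * E_rc / E_mag
# where E_rc is the ring current kinetic energy and E_mag the energy of
# the dipole field above the Earth's surface. Constants are approximate.
B0_nT = 3.1e4       # equatorial surface dipole field, nT
E_mag_J = 8.0e17    # dipole field energy outside the Earth, J (approx.)

def dps_delta_b(e_rc_joules):
    """Field depression (nT) predicted by DPS for a given E_rc (J)."""
    return -2.0 * e_rc_joules / (3.0 * E_mag_J) * B0_nT

print(f"{dps_delta_b(1.0e15):.1f} nT")  # ~ -26 nT for a moderate storm
```

As the abstract notes, a Biot-Savart integration over the actual (nondipolar) current distribution can depart substantially from this linear estimate.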
Boundary element method for modelling creep behaviour
International Nuclear Information System (INIS)
Zarina Masood; Shah Nor Basri; Abdel Majid Hamouda; Prithvi Raj Arora
2002-01-01
A two-dimensional initial-strain direct boundary element method is proposed to numerically model creep behaviour. The boundary of the body is discretized into quadratic elements and the domain into quadratic quadrilaterals. The variables are also assumed to have a quadratic variation over the elements. The boundary integral equation is solved for each boundary node and assembled into a matrix. This matrix is solved by Gauss elimination with partial pivoting to obtain the variables on the boundary and in the interior. Due to the time-dependent nature of creep, the solution has to be advanced over increments of time. An automatic time-incrementation technique and the backward Euler method for updating the variables are implemented to ensure stability and accuracy of the results. A flowchart of the solution strategy is also presented. (Author)
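The backward Euler update mentioned above can be illustrated on a one-dimensional stress-relaxation problem. The abstract does not specify the creep law, so a Norton power law is assumed here, with purely illustrative material constants; the implicit update is solved by fixed-point iteration at each step.

```python
# Backward Euler time stepping for 1-D Norton creep under constant total
# strain (stress relaxation). Material constants are illustrative.
E = 200e3            # Young's modulus, MPa
A = 1.0e-17          # Norton coefficient, 1/(MPa^n * h)  (assumed)
n = 5.0              # Norton exponent                     (assumed)
eps_total = 2.0e-3   # held total strain
dt = 1.0             # time step, h

eps_c = 0.0          # accumulated creep strain
history = []
for step in range(100):
    # Implicit update: eps_new = eps_c + dt*A*(E*(eps_total - eps_new))**n,
    # solved here by fixed-point iteration (contractive for these values).
    x = eps_c
    for _ in range(50):
        x = eps_c + dt * A * (E * (eps_total - x)) ** n
    eps_c = x
    history.append(E * (eps_total - eps_c))   # current stress, MPa

print(f"stress after 100 h: {history[-1]:.1f} MPa "
      f"(started at {E * eps_total:.0f} MPa)")
```

Because backward Euler evaluates the creep rate at the end of the step, the scheme stays stable even when the creep rate is a stiff function of stress, which is the property the abstract relies on.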
Surface physics theoretical models and experimental methods
Mamonova, Marina V; Prudnikova, I A
2016-01-01
The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...
Test results of the SMES model coil. Pulse performance
International Nuclear Information System (INIS)
Hamajima, Takataro; Shimada, Mamoru; Ono, Michitaka
1998-01-01
A model coil for superconducting magnetic energy storage (SMES model coil) has been developed to establish the component technologies needed for a small-scale 100 kWh SMES device. The SMES model coil was fabricated, and performance tests were carried out in 1996. The coil was successfully charged up to around 30 kA and down to zero at the same magnetic-field ramp rate as experienced in a 100 kWh SMES device. AC loss in the coil was measured by an enthalpy method as a function of ramp rate and flat-top current. The results were evaluated by analysis and compared with short-sample test results. The measured hysteresis loss is in good agreement with that estimated from the short-sample results. It was found that the coupling loss of the coil consists of two major coupling time constants. One is a short time constant of about 200 ms, which is in agreement with the test results of a short real conductor. The other is a long time constant of about 30 s, which could not be expected from the short-sample test results. (author)
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
Radioimmunological determination of plasma progesterone. Methods - Results - Indications
International Nuclear Information System (INIS)
Gonon-Estrangin, Chantal.
1978-10-01
The aim of this work is to describe the radioimmunological determination of plasma progesterone carried out at the Hormonology Laboratory of the Grenoble University Hospital Centre (Professor E. Chambaz), to compare our results with those of the literature, and to present the main clinical indications of this analysis. The measurement method has proved reproducible, specific (the steroid purification stage is unnecessary) and sensitive (detection: 10 picograms of progesterone per tube). In seven normally menstruating women our results agree with published values (in nanograms per millilitre: ng/ml): 0.07 ng/ml to 0.9 ng/ml in the follicular phase, from the start of menstruation until ovulation, then a rapid increase at ovulation with a maximum in the middle of the luteal phase (our values for this maximum range from 7.9 ng/ml to 21.7 ng/ml), and a gradual drop in progesterone secretion until the next menstrual period. In gynecology the radioimmunoassay of plasma progesterone is valuable for diagnostic and therapeutic purposes: - to diagnose the absence of corpus luteum, - to judge the effectiveness of an ovulation induction treatment [fr
Lesion insertion in the projection domain: Methods and initial results
International Nuclear Information System (INIS)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-01-01
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
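The key property that makes projection-domain insertion work is the linearity of forward projection: the projections of patient plus lesion equal the sum of their separate projections. A toy parallel-beam sketch with two views (not the commercial scanner geometry used in the study) illustrates this:

```python
import numpy as np

# Toy parallel-beam forward projector: line integrals along rows and
# columns, i.e. 0-degree and 90-degree views only (illustrative analogue
# of the commercial axial/helical geometry used in the paper).
def forward_project(img):
    return np.stack([img.sum(axis=0), img.sum(axis=1)])

patient = np.zeros((8, 8)); patient[2:6, 2:6] = 1.0   # toy patient background
lesion = np.zeros((8, 8)); lesion[4, 4] = 0.5         # toy inserted lesion

# Projection-domain insertion: add the lesion projections to the
# patient projections.
hybrid = forward_project(patient) + forward_project(lesion)

# Linearity check: identical to projecting the combined image directly.
assert np.allclose(hybrid, forward_project(patient + lesion))
print("hybrid projections shape:", hybrid.shape)
```

Because the insertion happens before reconstruction, any scan and reconstruction parameters applied afterwards affect the lesion and the patient background identically, which is the motivation given in the abstract.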
Lesion insertion in the projection domain: Methods and initial results
Energy Technology Data Exchange (ETDEWEB)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-12-15
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
Results of the benchmark for blade structural models, part A
DEFF Research Database (Denmark)
Lekou, D.J.; Chortis, D.; Belen Fariñas, A.
2013-01-01
A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs performed by the partners involved in Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...
Meteorological uncertainty of atmospheric dispersion model results (MUD)
Energy Technology Data Exchange (ETDEWEB)
Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others
2013-08-15
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can also be utilised for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from, e.g., limits in the meteorological observations used to initialise meteorological forecast series. By perturbing, e.g., the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed, from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
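The ensemble idea described above can be sketched with a toy model: perturb an uncertain meteorological input within its assumed error, propagate each member through the dispersion model, and read uncertainty off the resulting distribution. The plume scaling below (concentration inversely proportional to wind speed) is a deliberately crude stand-in for a real dispersion model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dispersion model: centerline concentration scales as source
# strength / (wind speed x dispersion parameter). Illustrative only.
def concentration(u, q=1.0, sigma=50.0):
    return q / (u * sigma)

u_forecast = 5.0                                      # deterministic wind, m/s
u_ensemble = u_forecast + rng.normal(0.0, 1.0, 500)   # perturbed members
u_ensemble = np.clip(u_ensemble, 0.5, None)           # keep speeds physical

c = concentration(u_ensemble)   # ensemble of concentration predictions

print(f"deterministic: {concentration(u_forecast):.2e}")
print(f"ensemble 5-95%: {np.percentile(c, 5):.2e} .. {np.percentile(c, 95):.2e}")
```

Instead of a single 'most likely' value, the decision maker receives a spread, from which exceedance probabilities for concentration or deposition can be derived in the same way.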
Meteorological uncertainty of atmospheric dispersion model results (MUD)
International Nuclear Information System (INIS)
Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.
2013-08-01
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can also be utilised for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from, e.g., limits in the meteorological observations used to initialise meteorological forecast series. By perturbing, e.g., the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed, from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
A novel method for assessing elbow pain resulting from epicondylitis
Polkinghorn, Bradley S.
2002-01-01
Abstract Objective To describe a novel orthopedic test (Polk's test) which can assist the clinician in differentiating between medial and lateral epicondylitis, 2 of the most common causes of elbow pain. This test has not been previously described in the literature. Clinical Features The testing procedure described in this paper is easy to learn, simple to perform and may provide the clinician with a quick and effective method of differentiating between lateral and medial epicondylitis. The test also helps to elucidate normal activities of daily living that the patient may unknowingly be performing on a repetitive basis that are hindering recovery. The results of this simple test allow the clinician to make immediate lifestyle recommendations to the patient that should improve and hasten the response to subsequent treatment. It may be used in conjunction with other orthopedic testing procedures, as it correlates well with other clinical tests for assessing epicondylitis. Conclusion The use of Polk's test may help the clinician to diagnostically differentiate between lateral and medial epicondylitis, as well as supply information relevant to choosing proper instructions for the patient to follow as part of their treatment program. Further research, performed in an academic setting, should prove helpful in more thoroughly evaluating the merits of this test. In the meantime, clinical experience over the years suggests that the practicing physician should find a great deal of clinical utility in this simple, yet effective, diagnostic procedure. PMID:19674572
Model Uncertainty Quantification Methods In Data Assimilation
Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.
2017-12-01
Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging: from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
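A standard building block behind the balance described above is the ensemble Kalman analysis step, where the weight given to an observation is set by the ratio of model (ensemble) variance to observation-error variance. The scalar sketch below is a generic illustration, not the author's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stochastic ensemble Kalman analysis step for a scalar state observed
# directly (observation operator H = 1). All numbers are illustrative.
n_ens = 1000
truth = 3.0
prior = rng.normal(2.0, 1.0, n_ens)       # forecast ensemble (biased low)
obs_err = 0.5                             # observation error std. dev.
obs = truth + rng.normal(0.0, obs_err)    # single noisy observation

# Kalman gain: how much to trust the observation vs. the model.
gain = prior.var() / (prior.var() + obs_err**2)

# Perturbed-observation update preserves correct posterior spread.
perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
posterior = prior + gain * (perturbed_obs - prior)

print(f"prior mean {prior.mean():.2f} -> posterior mean {posterior.mean():.2f}")
print(f"prior var  {prior.var():.2f} -> posterior var  {posterior.var():.2f}")
```

Misspecifying either variance skews the gain, which is exactly why the model uncertainty quantification emphasised in the abstract matters: the assimilation result is only as good as the error statistics fed into it.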
A Method for Model Checking Feature Interactions
DEFF Research Database (Denmark)
Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter
2015-01-01
This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.
Comparison of microstickies measurement methods. Part II, Results and discussion
Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang
2003-01-01
In part I of the article we discussed the sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of the different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases to 45 °C to 65 °C in other cases. Sample size ranges from as low as...
Structural equation modeling methods and applications
Wang, Jichuan
2012-01-01
A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a
The review and results of different methods for facial recognition
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can operate without the cooperation of the people under detection. Hence, facial recognition will find use in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Resource costing for multinational neurologic clinical trials: methods and results.
Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H
1998-11-01
We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.
A Systematic Identification Method for Thermodynamic Property Modelling
DEFF Research Database (Denmark)
Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent
2017-01-01
In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group-contribution-based property prediction models. The method is applied to lipid systems, where the Original UNIFAC … model is used. Estimating the interaction parameters with the proposed method using only VLE data yielded better phase equilibria predictions for both VLE and SLE. The results were validated and compared with the original model performance …
Numerical methods and modelling for engineering
Khoury, Richard
2016-01-01
This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integration, differentiation, initial value problems, or boundary value problems, and each one encompasses a set of algorithms to solve the problem given some information and within a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems and then implement the appropriate numerical methods to solve them. Uses a “building-block” approach, starting with simpler mathemati...
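The "predictable error rate" idea can be illustrated with a minimal root-finding example (mine, not the textbook's): bisection halves its error bound at every step, so the error after n steps is bounded by (hi - lo) / 2**n regardless of the function.

```python
def bisect(f, lo, hi, tol=1e-10):
    """Bisection root-finding: halves the bracket until narrower than tol.
    The error bound after n steps is (hi - lo) / 2**n, known a priori."""
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # sign change in left half
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
print(root)  # ~1.4142135 (sqrt(2))
```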
Effect of defuzzification method of fuzzy modeling
Lapohos, Tibor; Buchal, Ralph O.
1994-10-01
Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near limit values of the variables improperly but also introduces errors for middle domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets the defuzzification errors do not get bigger as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the less the average defuzzification error becomes. In case of triangle shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
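The contrast between the two defuzzification schemes can be sketched in a few lines. The triangular reference sets and activation values below are illustrative assumptions, not the paper's test family:

```python
import numpy as np

# Three triangular reference sets on the universe [0, 10], centered at 0, 5, 10.
centers = np.array([0.0, 5.0, 10.0])

def tri(x, c, half_width=5.0):
    """Triangular membership function centered at c."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / half_width)

def coa(memberships, n=1001):
    """Center of area: clip each set at its activation, aggregate by max,
    then take the centroid of the aggregated membership function."""
    xs = np.linspace(0.0, 10.0, n)
    agg = np.max([np.minimum(m, tri(xs, c))
                  for m, c in zip(memberships, centers)], axis=0)
    return float(np.sum(xs * agg) / np.sum(agg))

def wacc(memberships):
    """Weighted average of cluster centers: uses only the centers,
    no integration over the membership functions."""
    m = np.asarray(memberships, float)
    return float(np.sum(m * centers) / np.sum(m))

# Fully activated extreme set: WACC reaches the boundary, COA cannot.
print(coa([0.0, 0.0, 1.0]))   # ~8.33, pulled inside the boundary
print(wacc([0.0, 0.0, 1.0]))  # 10.0, maps the limit value properly
```

The example reproduces the paper's main point: COA cannot map extreme linguistic values to the boundary of the universe of discourse, while the center-based WACC can, and it avoids the numerical integration entirely.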
International Nuclear Information System (INIS)
Shin, Seung Ki; Seong, Poong Hyun
2008-01-01
Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between components of a system. Various techniques, such as dynamic fault trees, dynamic Bayesian networks, and dynamic reliability block diagrams, have been proposed for modeling dynamic systems based on improvements of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive modeling method for dynamic systems. For a quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for the generation of probability tables.
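As a hedged illustration of the discrete-time idea (not the RGGG tool itself), the sketch below divides mission time into steps and propagates per-step failure probabilities through a cold-standby pair, a dynamic dependency that static methods cannot express because the spare only accrues failures after the primary fails:

```python
# Illustrative sketch of discrete-time reliability quantification.
# Failure rate, step size, and mission length are invented.

def cold_standby_unreliability(lam, dt, n_steps):
    p = lam * dt                  # per-step failure probability (lam*dt << 1)
    # states: 0 = primary working, 1 = spare working, 2 = system failed
    probs = [1.0, 0.0, 0.0]
    for _ in range(n_steps):
        p0, p1, p2 = probs
        probs = [p0 * (1 - p),            # primary survives the step
                 p0 * p + p1 * (1 - p),   # primary fails, or spare survives
                 p1 * p + p2]             # spare fails: system failed
    return probs[2]

q_cold = cold_standby_unreliability(lam=1e-3, dt=1.0, n_steps=1000)

# A static model would treat both components as active (hot) from t=0:
p_step = 1e-3
q_hot = (1 - (1 - p_step) ** 1000) ** 2
print(q_cold, q_hot)  # cold-standby unreliability is strictly lower
```

The gap between the two numbers is exactly the information a static method loses by ignoring the activation order of the components.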
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
… consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based …
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
Modelling a coal subcrop using the impedance method
Energy Technology Data Exchange (ETDEWEB)
Wilson, G.A.; Thiel, D.V.; O'Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering
2000-07-01
An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.
Cache memory modelling method and system
Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis
2011-01-01
The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...
A survey of real face modeling methods
Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng
2017-09-01
The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs of the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher accuracy and is easier to drive.
Results and current trends of nuclear methods used in agriculture
International Nuclear Information System (INIS)
Horacek, P.
1983-01-01
The significance of nuclear methods for agricultural research is evaluated. The number of breeds induced by radiation mutation is increasing. The main importance of radiation mutation breeding lies in obtaining sources of desired genetic properties for further hybridization. Radiostimulation is conducted with the aim of increasing yields. The irradiation of foods has not substantially increased worldwide. Very important is the irradiation of excrement and sludge, which, after such inactivation of pathogenic microorganisms, may be used as humus-forming manure or as feed additives. In some countries the method of sexual sterilization is successfully being used for the eradication of insect pests. The application of labelled compounds in the nutrition, physiology and protection of plants and farm animals, and in food hygiene, makes it possible to acquire new and accurate knowledge very quickly. Radioimmunoassay is a highly promising method in this respect. Labelling compounds with the stable 15N isotope is used for research on nitrogen metabolism. (M.D.)
On Angular Sampling Methods for 3-D Spatial Channel Models
DEFF Research Database (Denmark)
Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum
2015-01-01
This paper discusses generating three-dimensional (3D) spatial channel models, with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail, and the other two methods should be considered.
Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the UMS and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.
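Several uncertainty methods of this kind rest on the order-statistics (Wilks) argument for choosing the number of code runs: the largest of n runs bounds the 95th percentile of the output with 95% confidence once 1 - 0.95**n >= 0.95. A minimal sketch of that calculation (the formula is standard; its role in the specific UMS methods is my framing):

```python
def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n such that the maximum of n runs is a one-sided
    (gamma, beta) tolerance limit: 1 - gamma**n >= beta."""
    n = 1
    while 1 - gamma ** n < beta:
        n += 1
    return n

print(wilks_sample_size())  # 59 runs for a one-sided 95/95 statement
```

This is why such analyses are feasible at all: the run count depends only on the coverage and confidence targets, not on how many uncertain input parameters are varied.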
An automatic and effective parameter optimization method for model tuning
Directory of Open Access Journals (Sweden)
T. Zhang
2015-11-01
… simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
Fluid Methods for Modeling Large, Heterogeneous Networks
National Research Council Canada - National Science Library
Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal
2005-01-01
… The resulting fluid models were used to develop novel active queue management mechanisms, resulting in more stable TCP performance, and novel rate controllers for the purpose of providing minimum rate …
Selection of robust methods. Numerical examples and results
Czech Academy of Sciences Publication Activity Database
Víšek, Jan Ámos
2005-01-01
Roč. 21, č. 11 (2005), s. 1-58 ISSN 1212-074X R&D Projects: GA ČR(CZ) GA402/03/0084 Institutional research plan: CEZ:AV0Z10750506 Keywords : robust regression * model selection * uniform consistency of M-estimators Subject RIV: BA - General Mathematics
Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results
International Nuclear Information System (INIS)
Banov, Reni; Simic, Zdenko; Sterc, Davor
2014-01-01
Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose as it assists plant management in targeting resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique to numerically quantify risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can be basically classified in two major groups: approximate and exact (BDD based) methods. Recently, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Even though both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls which can lead to wrong conclusions and prevent efficient usage of the PSA tool. For example, typical pitfalls involve using a higher-precision approximation method and getting a less precise result, or mixing minimal cut sets and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths, in which case a simple human-assisted rearrangement may help and even switch from a computationally non-feasible to a feasible method. Further improvements to the exact method are possible and desirable, which opens space for new research. In this paper we will show how these pitfalls may be detected and how carefully actions must be taken, especially when working with large PSA models. (authors)
Application of NDE methods to green ceramics: initial results
International Nuclear Information System (INIS)
Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.
1984-03-01
This paper describes a preliminary investigation to assess the effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography for the nondestructive evaluation of green (unfired) ceramics. The objective is to obtain useful information on defects, cracking, delaminations, agglomerates, inclusions, regions of high porosity, and anisotropy.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014), Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
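The variance decomposition that the BMA tree organizes hierarchically can be sketched for a single level using the law of total variance. The posterior probabilities, means, and variances below are invented for illustration:

```python
import numpy as np

# Single-level BMA sketch: total variance = expected within-model variance
# plus between-model variance of the predictions. Numbers are invented.

def bma_variance(post_probs, means, variances):
    post_probs = np.asarray(post_probs, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    mean_bma = np.sum(post_probs * means)                 # BMA prediction
    within = np.sum(post_probs * variances)               # within-model part
    between = np.sum(post_probs * (means - mean_bma) ** 2)  # between-model part
    return mean_bma, within, between, within + between

m, w, b, tot = bma_variance(post_probs=[0.5, 0.3, 0.2],
                            means=[1.0, 2.0, 4.0],
                            variances=[0.1, 0.2, 0.3])
print(m, w, b, tot)  # prediction 1.9; within 0.17; between and total follow
```

In the HBMA setting this decomposition is applied recursively down the tree, which is what allows the contribution of each uncertain component (architecture, dip, boundary conditions, parameters) to be read off separately.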
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section, in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Wide Binaries in TGAS: Search Method and First Results
Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.
2018-04-01
Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the >2 × 10^6 stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^(-1.6) at larger separations.
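A simplified, non-Bayesian sketch of the underlying matching idea (the paper's full method also folds in positions, parallaxes, and correlated uncertainties): flag a pair as a common-proper-motion candidate when its proper motion difference is consistent with zero given the quoted, here assumed uncorrelated, uncertainties.

```python
import numpy as np

# Illustrative chi-square test on the proper motion difference of a pair.
# All values (mas/yr) are invented.

def pm_chi2(pm1, pm2, err1, err2):
    """Chi-square of the proper motion difference; ~chi2 with 2 degrees
    of freedom under the null hypothesis of common proper motion."""
    d = np.asarray(pm1, float) - np.asarray(pm2, float)
    var = np.asarray(err1, float) ** 2 + np.asarray(err2, float) ** 2
    return float(np.sum(d ** 2 / var))

# Nearly common proper motion -> small chi2; chance alignment -> large chi2.
print(pm_chi2([50.1, -20.3], [50.0, -20.5], [0.5, 0.5], [0.5, 0.5]))
print(pm_chi2([50.1, -20.3], [30.0, -5.0], [0.5, 0.5], [0.5, 0.5]))
```

The weakness of this naive cut, which motivates the Bayesian treatment in the paper, is that it ignores orbital motion, distance, and the correlated astrometric errors, all of which shift genuine binaries away from exactly matched proper motions.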
Viscous wing theory development. Volume 1: Analysis, method and results
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
Algorithms for monitoring warfarin use: Results from Delphi Method.
Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane
2017-10-01
Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and the reversibility techniques of the anticoagulation effect are little known. Thus, our study's purpose was to develop algorithms for therapeutic monitoring of patients taking warfarin based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithm was performed in two stages, namely: (i) literature review and (ii) algorithm evaluation by physicians using a Delphi Method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." Later, these algorithms were analyzed by 19 medical doctors that responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second and eight to the third round. A 70% consensus or higher was reached for most issues and at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using a Delphi Method. The proposed method is inexpensive and involves the participation of specialists, and it has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.
Comparison of multiple-criteria decision-making methods - results of simulation study
Directory of Open Access Journals (Sweden)
Michał Adamczak
2016-12-01
Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
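A minimal sketch of the WSM scoring step described above. The weights and alternative values are invented; AHP would instead derive both from pairwise-comparison matrices before the same aggregation:

```python
import numpy as np

# Weighted Sum Model: preference of an alternative is the weighted sum
# of its values on each criterion. All numbers are invented.

weights = np.array([0.5, 0.3, 0.2])        # criteria weights, sum to 1
alternatives = {
    "A": np.array([0.7, 0.6, 0.9]),        # normalized value per criterion
    "B": np.array([0.8, 0.5, 0.6]),
}

scores = {name: float(weights @ vals) for name, vals in alternatives.items()}
best = max(scores, key=scores.get)
print(scores, best)  # A scores 0.71, B scores 0.67 -> A preferred
```

The simulation study's finding, that WSM and AHP usually agree, is plausible from this form: both ultimately rank alternatives by a weighted aggregate, differing mainly in how the weights and values are elicited.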
Gait variability: methods, modeling and meaning
Directory of Open Access Journals (Sweden)
Hausdorff Jeffrey M
2005-07-01
Abstract: The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling
Wilson, William; Atkinson, Gary
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
Fuzzy Clustering Methods and their Application to Fuzzy Modeling
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1999-01-01
Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate … An illustrative synthetic example is analyzed, and prediction accuracy measures are compared between the different variants …
The uncertainty analysis of model results a practical guide
Hofer, Eduard
2018-01-01
This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
Results and Error Estimates from GRACE Forward Modeling over Antarctica
Bonin, Jennifer; Chambers, Don
2013-04-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However, when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection, and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic Peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
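The weighted least squares projection at the core of the technique can be sketched generically. The design matrix, "truth" amplitudes, and weights below are invented stand-ins for basin fingerprints and GRACE data uncertainties:

```python
import numpy as np

# Generic weighted least squares: solve for basin amplitudes x that best
# reproduce observations y given basin "fingerprints" in the columns of A
# and a data weight matrix W. All inputs are synthetic.

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))             # 20 observations, 3 basins
x_true = np.array([1.0, -2.0, 0.5])      # synthetic "truth" amplitudes
y = A @ x_true + rng.normal(scale=0.01, size=20)  # noisy observations
W = np.eye(20)                           # equal weights for this sketch

# Normal equations: x_hat = (A^T W A)^{-1} A^T W y
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(x_hat)  # close to [1.0, -2.0, 0.5]
```

The paper's "truth simulation" plays the role of `x_true` here: running the estimator on data with a known answer exposes the systematic errors introduced by basin layout, smoothing, and leakage before the method is trusted on real GRACE data.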
Further results for crack-edge mappings by ray methods
International Nuclear Information System (INIS)
Norris, A.N.; Achenbach, J.D.; Ahlberg, L.; Tittman, B.R.
1984-01-01
This chapter discusses further extensions of the local edge mapping method to the pulse-echo case and to configurations of water-immersed specimens and transducers. Crack edges are mapped by the use of arrival times of edge-diffracted signals. Topics considered include local edge mapping in a homogeneous medium, local edge mapping algorithms, local edge mapping through an interface, and edge mapping through an interface using synthetic data. Local edge mapping is iterative, with two or three iterations required for convergence
Method of fabricating nested shells and resulting product
Henderson, Timothy M.; Kool, Lawrence B.
1982-01-01
A multiple shell structure and a method of manufacturing such structure wherein a hollow glass microsphere is surface treated in an organosilane solution so as to render the shell outer surface hydrophobic. The surface treated glass shell is then suspended in the oil phase of an oil-aqueous phase dispersion. The oil phase includes an organic film-forming monomer, a polymerization initiator and a blowing agent. A polymeric film forms at each phase boundary of the dispersion and is then expanded in a blowing operation so as to form an outer homogeneously integral monocellular substantially spherical thermoplastic shell encapsulating an inner glass shell of lesser diameter.
V and V Efforts of Auroral Precipitation Models: Preliminary Results
Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael
2011-01-01
Auroral precipitation models have been valuable both for space weather applications and for space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and models derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.
How to: understanding SWAT model uncertainty relative to measured results
Watershed models are being relied upon to contribute to most policy-making decisions of watershed management, and the demand for an accurate accounting of complete model uncertainty is rising. Generalized likelihood uncertainty estimation (GLUE) is a widely used method for quantifying uncertainty i...
Lesion insertion in the projection domain: Methods and initial results.
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-12-01
phantom in terms of Hounsfield units and high-contrast resolution. For the validation of lesion realism, lesions of various types were successfully inserted, including well-circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can potentially be expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors' method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors' method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions.
Assessment of South African uranium resources: methods and results
International Nuclear Information System (INIS)
Camisani-Calzolari, F.A.G.M.; De Klerk, W.J.; Van der Merwe, P.J.
1985-01-01
This paper deals primarily with the methods used by the Atomic Energy Corporation of South Africa in arriving at the assessment of the South African uranium resources. The Resource Evaluation Group is responsible for this task, which is carried out on a continuous basis. The evaluation is done on a property-by-property basis and relies upon data submitted to the Nuclear Development Corporation of South Africa by the various companies involved in uranium mining and prospecting in South Africa. Resources are classified into Reasonably Assured (RAR), Estimated Additional (EAR) and Speculative (SR) categories as defined by the NEA/IAEA Steering Group on Uranium Resources. Each category is divided into three cost classes, viz., resources exploitable at less than $80/kg uranium, at $80-130/kg uranium and at $130-260/kg uranium. Resources are reported in quantities of uranium metal that could be recovered after mining and metallurgical losses have been taken into consideration. Resources in the RAR and EAR categories exploitable at costs of less than $130/kg uranium are now estimated at 460 000 t uranium, which represents some 14 per cent of WOCA's (World Outside the Centrally Planned Economies Area) resources. The evaluation of a uranium venture is carried out in various steps, of which the most important, in order of implementation, are: geological interpretation, assessment of in situ resources using techniques varying from manual contouring of values to geostatistics, feasibility studies and estimation of recoverable resources. Because the choice of an evaluation method is, to some extent, dictated by statistical considerations, frequency distribution curves of the uranium grade variable are illustrated and discussed for characteristic deposits.
UV spectroscopy applied to stratospheric chemistry, methods and results
Energy Technology Data Exchange (ETDEWEB)
Karlsen, K.
1996-03-01
The publication from the Norwegian Institute for Air Research (NILU) deals with an investigation of stratospheric chemistry by means of UV spectroscopy. The scientific goals are briefly discussed, and results from the measuring and analysing techniques used in the investigation are given. 6 refs., 11 figs.
Modelling methods for milk intake measurements
International Nuclear Information System (INIS)
Coward, W.A.
1999-01-01
One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
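The spreadsheet approach described above — fitting a tracer kinetic model to measured data by minimizing residuals with Excel's SOLVER — can be sketched in a few lines of Python. The single-compartment washout model and the data below are hypothetical illustrations, not taken from the RCM protocol; for this simple model a closed-form log-linear least-squares fit reproduces what the iterative minimizer would converge to:

```python
import math

def fit_monoexponential(times, concentrations):
    """Fit y = A * exp(-k*t) by log-linear least squares; returns (A, k).
    For this model the closed-form fit matches what an iterative
    minimizer such as Excel's SOLVER would converge to on clean data."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope

# hypothetical tracer washout data: dose A = 100, elimination rate k = 0.25/day
ts = [0.0, 1.0, 2.0, 4.0, 7.0, 10.0, 14.0]
cs = [100.0 * math.exp(-0.25 * t) for t in ts]
A, k = fit_monoexponential(ts, cs)
print(round(A, 3), round(k, 3))  # → 100.0 0.25
```

With noisy data or multi-compartment models, the log-linearization no longer suffices and a general minimizer (as SOLVER provides) is the appropriate tool.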
Creep in rock salt with temperature. Testing methods and results
International Nuclear Information System (INIS)
Charpentier, J.P.; Berest, P.
1985-01-01
The growing interest shown in the delayed behaviour of rocks at elevated temperature has led the Solid Mechanics Laboratory to develop specific equipment designed for creep tests. The design and dimensioning of these units offer the possibility of investigating a wide range of materials. The article describes the test facilities used (uni-axial and tri-axial creep units) and presents the experimental results obtained on samples of Bresse salt [fr
TMI-2 core debris analytical methods and results
International Nuclear Information System (INIS)
Akers, D.W.; Cook, B.A.
1984-01-01
A series of six grab samples was taken from the debris bed of the TMI-2 core in early September 1983. Five of these samples were sent to the Idaho National Engineering Laboratory for analysis. Presented is the analysis strategy for the samples and some of the data obtained from the early stages of examination of the samples (i.e., particle size-analysis, gamma spectrometry results, and fissile/fertile material analysis)
Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method
Directory of Open Access Journals (Sweden)
Ghanmi Helmi
2018-05-01
Full Text Available This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over ray methods, mainly regarding the caustic problem. To evaluate the performance of the chosen method, we started the analysis of the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and the Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). Then, we showed the experimental validation of the numerical results using measurements executed in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental results of the RCS are studied and given as a function of various parameters: polarization type, target size, Gaussian beam number and Gaussian beam width.
Diffuse interface methods for multiphase flow modeling
International Nuclear Information System (INIS)
Jamet, D.
2004-01-01
Full text of publication follows: Nuclear reactor safety programs need to get a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are among the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to give new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is arising, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables encounter possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes them easier to code in 3D and to parallelize than more classical methods. The first method presented is dedicated to liquid-vapor flows with phase change. It is based on van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one
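As a toy illustration of the diffuse-interface idea — an interface represented as a smooth transition zone rather than a surface of discontinuity — the equilibrium profile of a flat interface in the standard double-well phase-field model is an analytic tanh. The snippet below (illustrative only; not the van der Waals liquid-vapor model discussed in the abstract) evaluates that profile and measures the thickness of the transition zone:

```python
import math

def phase_profile(x, eps):
    """Equilibrium 1D profile of the standard double-well phase-field model:
    phi(x) = tanh(x / (sqrt(2)*eps)); phi -> -1 and +1 in the two bulk phases."""
    return math.tanh(x / (math.sqrt(2.0) * eps))

def interface_width(eps, level=0.9):
    """Width of the diffuse zone where |phi| < level, by inverting the profile."""
    half_width = math.sqrt(2.0) * eps * math.atanh(level)
    return 2.0 * half_width

eps = 0.1  # capillary width parameter (hypothetical value)
print(round(phase_profile(0.0, eps), 6))  # → 0.0  (interface midpoint)
print(round(phase_profile(1.0, eps), 3))  # → 1.0  (bulk phase reached)
print(round(interface_width(eps), 3))
```

The width scales linearly with eps, which is why these methods trade the moving-boundary problem for the need to resolve a thin but finite transition layer on the grid.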
Methods and results of radiotherapy in case of medulloblastoma
International Nuclear Information System (INIS)
Bamberg, M.; Sauerwein, W.; Scherer, E.
1982-01-01
The prognosis of the medulloblastoma, with its marked tendency towards early formation of metastases by way of the liquor circulation, can be decisively improved by post-surgical homogeneous irradiation. Successful radiotherapy is only possible by means of new irradiation methods which have been developed for high-voltage units during recent years and which require great experience and skill on the part of the radiotherapist. At the Radiological Centre of Essen, 26 patients with medulloblastoma have been submitted to such a specially developed post-surgical radiotherapy since 1974. After a follow-up period of at most seven years, 16 patients have survived (two of them with recurrences) and 10 patients died of a local recurrence. Judging by the patients' state of health after surgery and before irradiation, the neurologic state and physical condition of these patients seem favorable after postoperative radiotherapy alone. New therapeutic possibilities are provided by radiosensitizing substances. However, Misonidazol, currently the most effective radiosensitizer, has not yet lived up to clinical expectations. (orig.) [de
Application of NDE methods to green ceramics: initial results
International Nuclear Information System (INIS)
Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.
1983-01-01
The effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography was assessed for the nondestructive evaluation of green (unfired) ceramics. The application of microradiography to ceramics is reviewed, and preliminary experiments with a commercial microradiography unit are described. Conventional ultrasonic techniques are difficult to apply to flaw detection in green ceramics because of the high attenuation, fragility, and couplant-absorbing properties of these materials. However, velocity, attenuation, and spectral data were obtained with pressure-coupled transducers and provided useful information related to density variations and the presence of agglomerates. Nuclear magnetic resonance (NMR) imaging techniques and neutron radiography were considered for detection of anomalies in the distribution of porosity. With NMR, areas of high porosity might be detected after the samples are doped with water. In the case of neutron radiography, although imaging the binder distribution throughout the sample may not be feasible because of the low overall concentration of binder, regions of high binder concentration (and thus high porosity) should be detectable
New test methods for BIPV. Results from IP performance
International Nuclear Information System (INIS)
Jol, J.C.; Van Kampen, B.J.M.; De Boer, B.J.; Reil, F.; Geyer, D.
2009-11-01
Within the Performance project, new test procedures have been drafted for PV building products and for the building performance as a whole when PV is applied in buildings. This has resulted in a first draft of new test procedures for PV building products and proposals for tests of novel BIPV technology such as thin film. The tests proposed are a module breakage test, a fire safety test and a dynamic load test for BIPV products. Furthermore, first proposals are presented for how flexible PV modules could be tested in an appropriate way to ensure the long-term quality and safety of these new products.
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Numerical methods for modeling photonic-crystal VCSELs
DEFF Research Database (Denmark)
Dems, Maciej; Chung, Il-Sug; Nyakas, Peter
2010-01-01
We show a comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods; and outline the limits of applicability of the different models....
Diamagnetic measurements on ISX-B: method and results
International Nuclear Information System (INIS)
Neilson, G.H.
1983-10-01
A diamagnetic loop is used on the ISX-B tokamak to measure the change in toroidal magnetic flux, ΔΦ, caused by finite plasma current and perpendicular pressure. From this measurement, the perpendicular poloidal beta, β_I⊥, is determined. The principal difficulty encountered is in identifying and correcting for the various noise components which appear in the measured flux. These result from coupling between the measuring loops and the toroidal and poloidal field windings, both directly and through currents induced in the vacuum vessel and the coils themselves. An analysis of these couplings is made and techniques for correcting them are developed. Results from the diamagnetic measurement, employing some of these correction techniques, are presented and compared with other data. The obtained values of β_I⊥ agree with those obtained from the equilibrium magnetic analysis (β_IΔ) in ohmically heated plasmas, indicating no anisotropy. However, with 0.3 to 2.0 MW of tangential neutral beam injection, β_IΔ is consistently greater than β_I⊥, qualitatively consistent with the formation of an anisotropic ion velocity distribution and with toroidal rotation. Quantitatively, the difference between β_IΔ and β_I⊥ is more than can be accounted for on the basis of the usual classical fast-ion calculations and spectroscopic rotation measurements
[Integrated intensive treatment of tinnitus: method and initial results].
Mazurek, B; Georgiewa, P; Seydel, C; Haupt, H; Scherer, H; Klapp, B F; Reisshauer, A
2005-07-01
In recent years, no major advances have been made in understanding the mechanisms underlying the development of tinnitus. Hence, the present therapeutic strategies aim at decoupling the subconscious from the perception of tinnitus. Mindful of the lessons drawn from existing tinnitus retraining and desensitisation therapies, a new integrated day hospital strategy of treatment lasting 7-14 days has been developed at the Charité Hospital and is presented in the present paper. The strategy for treating tinnitus in the proximity of patient domicile is designed for patients who feel disturbed in their world of perception and their efficiency due to tinnitus and give evidence of mental and physical strain. In view of the etiologically non-uniform and multiple events connected with tinnitus, consideration was also given to the fact that somatic and psychosocial factors are equally involved. Therefore, therapy should aim at diagnosing and therapeutically influencing those psychosocial factors that reduce the hearing impression to such an extent that the affected persons suffer from strain. The first results of therapy-dependent changes of 46 patients suffering from chronic tinnitus are presented. The data were evaluated before and after 7 days of treatment and 6 months after the end of treatment. Immediately after the treatment, the scores of both the tinnitus questionnaire (Goebel and Hiller) and the subscales improved significantly. These results were maintained during the 6-month post-treatment period and even improved.
Verification of aseismic design model by using experimental results
International Nuclear Information System (INIS)
Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.
1985-01-01
A lattice model is applied as the analysis model for the aseismic design of the Hamaoka nuclear reactor building. In order to verify the validity of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are well reproduced by simulation analyses using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)
Ilmenite exploration on the Senegal continental shelf. Methods and results
International Nuclear Information System (INIS)
Horn, R.; Le Lann, F.; Scolari, G.; Tixeront, M.
1975-01-01
From the results of a study based on geomorphology, geophysics and sedimentology, it has been possible to point out, South of Dakar, the existence of a fossil lagoon (peat dated 8400 years BP) partly isolated from the open sea by a littoral sand bar at -25 m and strongly eroded. To the North of Dakar, the unconsolidated sediments, with a thickness of over 40 m, are thinning out seawards and from North to South. This pattern reflects the action of the longshore current, which accentuates the drainage effect of the Cayar canyon. The distribution of ilmenites in the sediments is studied in terms of a possible exploitation; however, the grades are too low under present economic conditions [fr
Model-Based Method for Sensor Validation
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).
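The analytical-redundancy idea can be illustrated with a deliberately tiny example (hypothetical sensors and model, not NASA's actual algorithm): if the system model implies relations among sensor readings, a violated relation implicates the sensors it involves, and intersecting the suspect sets across relations isolates the faulty sensor logically, without probabilities:

```python
def diagnose(readings, relations, tol=1e-6):
    """Single-fault isolation from analytical redundancy relations (ARRs).
    Each relation is (residual_function, set_of_involved_sensors).
    A sensor is suspect if it appears in every violated relation and in
    no satisfied relation (single-fault assumption)."""
    violated, satisfied = [], []
    for residual, support in relations:
        (violated if abs(residual(readings)) > tol else satisfied).append(support)
    if not violated:
        return set()  # all ARRs consistent: no detectable fault
    suspects = set.intersection(*violated)
    for support in satisfied:
        suspects -= support
    return suspects

# hypothetical model: s1 and s2 measure the same quantity; s3 measures their sum
relations = [
    (lambda r: r['s2'] - r['s1'],            {'s1', 's2'}),
    (lambda r: r['s3'] - r['s1'] - r['s2'],  {'s1', 's2', 's3'}),
    (lambda r: r['s3'] - 2.0 * r['s1'],      {'s1', 's3'}),
]

ok     = {'s1': 1.0, 's2': 1.0, 's3': 2.0}
s2_bad = {'s1': 1.0, 's2': 1.4, 's3': 2.0}
print(diagnose(ok, relations))      # → set()
print(diagnose(s2_bad, relations))  # → {'s2'}
```

Real systems derive the ARRs systematically from the system model; the set logic above is the final, probability-free inference step.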
Numerical processing of radioimmunoassay results using logit-log transformation method
International Nuclear Information System (INIS)
Textoris, R.
1983-01-01
The mathematical model and algorithm are described for the numerical processing of the results of a radioimmunoassay by the logit-log transformation method and by linear regression with weight factors. The limiting value of the curve at zero concentration is optimized with respect to the residual sum of squares by iteration, i.e., by multiple repeats of the linear regression. Typical examples of the approximation of calibration curves are presented. The method proved suitable for all hitherto used RIA sets and is well suited for small computers with an internal memory of at least 8 Kbyte. (author)
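The logit-log transformation linearizes a sigmoid calibration curve: with y = B/B0 the bound fraction, logit(y) = ln(y/(1-y)) is regressed on ln(dose), optionally with weight factors, and unknown concentrations are read off the inverted line. A minimal sketch follows (synthetic calibration data, not the author's program; in particular the iterative optimization of the zero-concentration limit is omitted):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def fit_logit_log(doses, bound_fractions, weights=None):
    """Weighted linear regression of logit(B/B0) on ln(dose);
    returns intercept a and slope b of logit(y) = a + b*ln(d)."""
    xs = [math.log(d) for d in doses]
    ys = [logit(y) for y in bound_fractions]
    ws = weights or [1.0] * len(xs)
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

def dose_from_response(bound_fraction, a, b):
    """Invert the fitted line to read an unknown concentration."""
    return math.exp((logit(bound_fraction) - a) / b)

# synthetic calibration curve generated from a = 2.0, b = -1.0
doses = [0.5, 1.0, 2.0, 4.0, 8.0]
bound = [1.0 / (1.0 + math.exp(-(2.0 - math.log(d)))) for d in doses]
a, b = fit_logit_log(doses, bound)
unknown = 1.0 / (1.0 + math.exp(-(2.0 - math.log(3.0))))  # response of a 3-unit sample
print(round(a, 6), round(b, 6), round(dose_from_response(unknown, a, b), 6))
# → 2.0 -1.0 3.0
```

Weight factors matter in practice because the variance of the logit diverges as y approaches 0 or 1, so the ends of the calibration curve should count less.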
International Nuclear Information System (INIS)
Mitchell, A.E. Jr.
1982-01-01
Four methods of classifying atmospheric stability class are applied at four sites to make short-term (1-h) dispersion estimates from a ground-level source, based on a model consistent with U.S. Nuclear Regulatory Commission practice. The classification methods include vertical temperature gradient, standard deviation of horizontal wind direction fluctuations (sigma theta), Pasquill-Turner, and a modified sigma theta which accounts for meander. Results indicate that the modified sigma theta yields reasonable dispersion estimates compared to those produced using the vertical temperature gradient and Pasquill-Turner methods, and can be considered a potentially economical alternative in establishing onsite monitoring programs. (author)
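The sigma-theta classification mentioned above maps the standard deviation of horizontal wind-direction fluctuations onto Pasquill stability classes A-F. The boundary values below are the commonly quoted EPA-style thresholds and are an assumption here; the paper's exact scheme, and its meander modification, may differ:

```python
def stability_class_from_sigma_theta(sigma_theta_deg):
    """Pasquill stability class from the standard deviation of horizontal
    wind-direction fluctuations (degrees), using commonly quoted EPA-style
    boundary values (assumed here, not taken from the paper)."""
    boundaries = [(22.5, 'A'), (17.5, 'B'), (12.5, 'C'), (7.5, 'D'), (3.8, 'E')]
    for threshold, pasquill_class in boundaries:
        if sigma_theta_deg >= threshold:
            return pasquill_class
    return 'F'  # very stable

print(stability_class_from_sigma_theta(25.0))  # → A (very unstable)
print(stability_class_from_sigma_theta(10.0))  # → D (neutral)
print(stability_class_from_sigma_theta(2.0))   # → F (very stable)
```

A meander correction would typically reduce the effective sigma theta (or shift the class toward stable) when low wind speeds inflate the raw fluctuation statistic.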
Developing a TQM quality management method model
Zhang, Zhihai
1997-01-01
From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This
Climate Action Gaming Experiment: Methods and Example Results
Directory of Open Access Journals (Sweden)
Clifford Singer
2015-09-01
Full Text Available An exercise has been prepared and executed to simulate international interactions on policies related to greenhouse gases and global albedo management. Simulation participants are each assigned one of six regions that together contain all of the countries in the world. Participants make quinquennial policy decisions on greenhouse gas emissions, recapture of CO2 from the atmosphere, and/or modification of the global albedo. Costs of climate change and of implementing policy decisions impact each region’s gross domestic product. Participants are tasked with maximizing economic benefits to their region while nearly stabilizing atmospheric CO2 concentrations by the end of the simulation in Julian year 2195. Results are shown where regions most adversely affected by effects of greenhouse gas emissions resort to increases in the earth’s albedo to reduce net solar insolation. These actions induce temperate region countries to reduce net greenhouse gas emissions. An example outcome is a trajectory to the year 2195 of atmospheric greenhouse emissions and concentrations, sea level, and global average temperature.
COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS
International Nuclear Information System (INIS)
Bekki, Kenji
2015-01-01
We investigate the redshift (z) evolution of dust mass and abundance, their dependences on initial conditions of galaxy formation, and physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine the reasonable ranges of the two most important parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust mass and abundances and molecular hydrogen (H₂) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H₂ gas fraction (f_H₂), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H₂ can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and the z-evolution of these depends on initial mass densities, spin parameters, and masses of galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H₂ and stellar masses and higher f_H₂. Disk galaxies show negative radial gradients of D and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) are different within a single galaxy and between different galaxies at different z
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
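For readers unfamiliar with the model class, a linear compartment model is a linear ODE system x' = Ax + Bu with output y = Cx, where A encodes the transfer and leak rates between compartments. The sketch below simulates a hypothetical two-compartment model by Euler integration; it illustrates the model class only, not the paper's algebraic identifiability machinery:

```python
def simulate(k21, k12, k01, x0=(1.0, 0.0), dt=1e-3, t_end=5.0):
    """Two-compartment linear model: compartment 1 leaks to the environment
    at rate k01 and exchanges with compartment 2 at rates k21 (1->2) and
    k12 (2->1); the observed output is y = x1. Forward-Euler integration."""
    x1, x2 = x0
    ys = [x1]
    for _ in range(int(t_end / dt)):
        dx1 = -(k01 + k21) * x1 + k12 * x2
        dx2 = k21 * x1 - k12 * x2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        ys.append(x1)
    return ys

# hypothetical rate constants; output decays as mass leaks out via k01
y = simulate(k21=1.0, k12=0.5, k01=0.3)
print(round(y[0], 3), round(y[-1], 3))
```

Identifiability then asks whether (k21, k12, k01) can be uniquely recovered from the observed trajectory y alone; the paper's results characterize when adding inputs or outputs, or removing leaks, makes that possible.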
Physical Model Method for Seismic Study of Concrete Dams
Directory of Open Access Journals (Sweden)
Bogdan Roşca
2008-01-01
Full Text Available The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. Firstly, a study model must be designed through a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scaled physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for the construction of an elastic model are easier to satisfy than those for a failure model, but the results obtained provide only limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, together with their instrumentation, offers great advantages, but it involves a large amount of financial, logistic and time resources.
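The similitude relations mentioned above follow from dimensional analysis: once a geometric scale is chosen, the scales of time, velocity, and force are fixed by the forces the model must reproduce. The snippet below derives the factors for a gravity-dominated (Froude) model with the prototype material density retained, an illustrative choice only; seismic models of dams typically need combined gravity-elasticity similitude, which constrains material properties further:

```python
import math

def froude_scale_factors(length_ratio, density_ratio=1.0):
    """Model-to-prototype scale factors under Froude similitude
    (gravity forces dominant, gravitational acceleration unscaled)."""
    lr, dr = length_ratio, density_ratio
    return {
        'length':       lr,
        'time':         math.sqrt(lr),  # from Fr = v / sqrt(g*L) with g fixed
        'velocity':     math.sqrt(lr),
        'acceleration': 1.0,            # g is the same in model and prototype
        'mass':         dr * lr ** 3,
        'force':        dr * lr ** 3,   # F ~ m * a
        'stress':       dr * lr,        # F / L^2
    }

factors = froude_scale_factors(1 / 50)  # hypothetical 1:50 model
print(factors['time'], factors['force'])
```

The stress factor is the troublesome one: at 1:50 with the same material, model stresses are 50 times too small relative to strength, which is why failure models often require special low-strength model materials.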
Acceleration methods and models in Sn calculations
International Nuclear Information System (INIS)
Sbaffoni, M.M.; Abbate, M.J.
1984-01-01
In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe particularities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation, and the acceleration methods included in the most widely used codes, were analyzed with respect to their use in problems characterized by a strong upscattering effect. Conclusions drawn from this analysis are presented, together with a new method of upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es
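The upscattering coupling that slows convergence can be sketched with a toy two-group, infinite-medium balance solved by Gauss-Seidel group sweeps (our own minimal example, unrelated to the DOT3.5 implementation; cross sections are invented):

```python
# Toy two-group balance with upscattering, iterated by Gauss-Seidel sweeps.
# Strong upscattering (group 2 -> 1) couples the groups and slows the sweep
# convergence, which is what upscattering scaling methods are built to fix.

rem = [1.0, 0.8]          # group removal cross sections (invented)
s = [[0.0, 0.55],         # s[g][g'] : scattering g' -> g; 0.55 is upscatter
     [0.45, 0.0]]
q = [1.0, 0.0]            # external source, group 1 only

phi = [0.0, 0.0]
for it in range(200):
    prev = list(phi)
    for g in (0, 1):      # sweep groups, using freshest fluxes available
        inscatter = sum(s[g][gp] * phi[gp] for gp in (0, 1) if gp != g)
        phi[g] = (q[g] + inscatter) / rem[g]
    if max(abs(a - b) for a, b in zip(prev, phi)) < 1e-12:
        break
print(it, [round(p, 4) for p in phi])
```

Each sweep contracts the error by roughly the product of the scattering ratios; as upscattering grows, that factor approaches one and the plain iteration stalls, motivating the rescaling described above.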
Design of nuclear power generation plants adopting model engineering method
International Nuclear Information System (INIS)
Waki, Masato
1983-01-01
The use of model engineering as a design method began about ten years ago in nuclear power generation plants. With this method, the result of a design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet varied design needs promptly. Models are adopted mainly to improve the quality of design, since high safety is required of nuclear power plants despite their complex structure. The layout of nuclear power plants and piping design require model engineering in order to arrange an enormous quantity of components rationally within a limited period. Model engineering may rely on check models or on design models; recently the latter approach has predominated. The procedure for manufacturing models and carrying out the engineering is explained. After model engineering is completed, the model information must be expressed in drawings, and automation of this process has been attempted by various methods. Computer processing of design is in progress, and its role (the CAD system) is explained. (Kako, I.)
A Comparison of Surface Acoustic Wave Modeling Methods
Wilson, W. C.; Atkinson, G. M.
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on a comparison of three methods of modeling SAWs: the Impulse Response Method, a first-order model, and two second-order matrix methods, namely the conventional matrix approach and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented together with measured data from devices.
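A minimal sketch of the first-order impulse-response idea (the textbook sinc-shaped response of an interdigital transducer, not NASA's actual code; the centre frequency and finger-pair count are illustrative):

```python
import math

# First-order impulse-response model of a SAW interdigital transducer (IDT):
# the magnitude response of an IDT with n_pairs finger pairs follows a
# |sinc| envelope about the synchronous frequency f0. Values are invented.

def idt_response(f, f0=100e6, n_pairs=50):
    x = n_pairs * math.pi * (f - f0) / f0
    return 1.0 if x == 0 else abs(math.sin(x) / x)

peak = idt_response(100e6)                     # unity at synchronism
null = idt_response(100e6 * (1 + 1.0 / 50))    # first null at f0*(1 + 1/Np)
print(peak, null)
```

The fractional bandwidth shrinks as 1/n_pairs, which is the basic design trade-off the higher-order matrix models then refine with reflection effects.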
Intelligent structural optimization: Concept, Model and Methods
International Nuclear Information System (INIS)
Lu, Dagang; Wang, Guangyuan; Peng, Zhang
2002-01-01
Structural optimization has many characteristics of soft design, so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems of large-scale, complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now evolving toward intelligent optimization. In this paper, the concept of Intelligent Structural Optimization (ISO) is proposed. A design process model of ISO is then put forward, and each design sub-process model within it is discussed. Finally, the design methods of ISO are presented
Electromagnetic modeling method for eddy current signal analysis
International Nuclear Information System (INIS)
Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.
2004-10-01
An electromagnetic modeling method is necessary for eddy current signal analysis before an experiment is performed. Electromagnetic modeling methods comprise analytical and numerical approaches, and the numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has merits and demerits, so a suitable method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs
Mathematical Models and Methods for Living Systems
Chaplain, Mark; Pugliese, Andrea
2016-01-01
The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.
Generalised Chou-Yang model and recent results
International Nuclear Information System (INIS)
Fazal-e-Aleem; Rashid, H.
1995-09-01
It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs
Generalised Chou-Yang model and recent results
Energy Technology Data Exchange (ETDEWEB)
Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics
1996-12-31
It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.
Generalised Chou-Yang model and recent results
International Nuclear Information System (INIS)
Fazal-e-Aleem; Rashid, H.
1996-01-01
It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)
Recent numerical results on the two dimensional Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E. (SISSA, Trieste (Italy))
1989-12-01
A new method for simulating strongly correlated fermionic systems has been applied to the study of the ground-state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations on 4 x 4 lattices, where very good agreement has been verified in all the correlation functions studied: charge, magnetization and momentum distribution. (orig.)
Recent numerical results on the two dimensional Hubbard model
International Nuclear Information System (INIS)
Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E.
1989-01-01
This paper reports a new method for simulating strongly correlated fermionic systems, applied to the study of the ground-state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations on 4 x 4 lattices, where very good agreement has been verified in all the correlation functions studied: charge, magnetization and momentum distribution
Stencil method: a Markov model for transport in porous media
Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.
2016-12-01
In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot readily be applied to transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to modeling dispersion on both structured and unstructured networks.
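The idea of a temporal Markov model for a discrete velocity process can be sketched as follows (a toy chain of our own design, not the stencil method itself): the particle's velocity state evolves by a transition matrix, and displacement accumulates along the way.

```python
import random

# Toy temporal Markov model for particle transport: the discrete velocity
# state follows a Markov chain in time, and position is the running sum of
# sampled velocities. States and transition probabilities are invented.

VELS = [-1.0, 0.0, 1.0]            # discrete velocity states
P = [[0.8, 0.1, 0.1],              # row-stochastic transition matrix
     [0.2, 0.6, 0.2],
     [0.1, 0.1, 0.8]]

def trajectory(steps, state=1, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += VELS[state]
        state = rng.choices([0, 1, 2], weights=P[state])[0]
    return x

positions = [trajectory(200, seed=s) for s in range(500)]
mean = sum(positions) / len(positions)
var = sum((p - mean) ** 2 for p in positions) / len(positions)
print(mean, var)   # ensemble spread stands in for dispersion statistics
```

Sampling the chain is far cheaper than solving the flow physics for each realization, which is the computational appeal of the Markov-model approach described above.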
Method of generating a computer readable model
DEFF Research Database (Denmark)
2008-01-01
A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
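A hypothetical sketch of such an encoding (the class names, the connection types and the proximity rule are our own inventions, not the patented method): each element carries connectors with a position and a type, and a pairing is sought between connectors that lie within a tolerance of each other.

```python
from dataclasses import dataclass

# Hypothetical encoding of construction elements and their connectors.
# "stud"/"tube" types and the tolerance value are illustrative only.

@dataclass(frozen=True)
class Connector:
    x: float
    y: float
    z: float
    ctype: str                  # predetermined connection type

@dataclass
class Element:
    name: str
    connectors: list

def find_connection(a, b, tol=0.1):
    """Return the first connector pair of a and b within tol of each other."""
    for ca in a.connectors:
        for cb in b.connectors:
            d = ((ca.x - cb.x) ** 2 + (ca.y - cb.y) ** 2
                 + (ca.z - cb.z) ** 2) ** 0.5
            if d <= tol:
                return ca, cb
    return None

brick1 = Element("brick1", [Connector(0, 0, 1.0, "stud")])
brick2 = Element("brick2", [Connector(0, 0, 1.05, "tube")])
pair = find_connection(brick1, brick2)
print(pair is not None)        # the two connectors lie within the tolerance
```

Once a pair is found, the connection types of the two connectors are what the described method consults to decide whether, and how, the elements can actually join.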
Krantz, Timothy L.
2011-01-01
The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
Results from the IAEA benchmark of spallation models
International Nuclear Information System (INIS)
Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.
2011-01-01
Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)
3D Face modeling using the multi-deformable method.
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-09-25
In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve this problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only produces highly accurate 3D face shapes when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.
ExEP yield modeling tool and validation test results
Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul
2017-09-01
EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4-m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
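In the spirit of the physics-based unit tests described above, a deterministic check of a toy integration-time relation might look like this (the function and the relation t = SNR²/rate, which assumes an idealized Poisson-limited source, are our own illustration, not the EXOSIMS API):

```python
import unittest

# Illustrative unit-test sketch: validate a toy integration-time relation
# against hand-computed values, as a physics-based test plan would.

def integration_time(snr, count_rate):
    """Time to reach a target SNR for an idealized Poisson-limited source."""
    return snr ** 2 / count_rate

class TestIntegrationTime(unittest.TestCase):
    def test_known_value(self):
        # SNR 10 at 4 counts/s -> 25 s, computable by hand
        self.assertAlmostEqual(integration_time(10.0, 4.0), 25.0)

    def test_scales_quadratically(self):
        # doubling the target SNR quadruples the required time
        self.assertAlmostEqual(integration_time(20.0, 4.0),
                               4 * integration_time(10.0, 4.0))

result = unittest.main(exit=False, argv=["ignored"]).result
print(result.wasSuccessful())
```

Deterministic cases like these (planets fixed at quadrature, hand-computable expected values) are what make such a test suite an objective gate rather than a plausibility check.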
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Directory of Open Access Journals (Sweden)
Oluwaseun Egbelowo
2017-05-01
Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process, and compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
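For the simplest one-compartment elimination model dC/dt = -kC, an NSFD scheme replaces the step size h in the denominator by a function φ(h); choosing φ(h) = (1 - e^{-kh})/k reproduces the exact decay at any step size, whereas forward Euler fails for large steps. This is a minimal sketch of the NSFD idea with our own toy values; the paper treats fuller PK-PD systems.

```python
import math

# NSFD sketch for one-compartment elimination dC/dt = -k*C.
# Scheme: (c_{n+1} - c_n) / phi(h) = -k * c_n, phi(h) = (1 - exp(-k*h))/k,
# which makes the explicit update c_{n+1} = c_n * exp(-k*h), i.e. exact.

def nsfd_decay(c0, k, h, steps):
    phi = (1.0 - math.exp(-k * h)) / k    # nonstandard denominator function
    c = c0
    for _ in range(steps):
        c = c - k * phi * c
    return c

def euler_decay(c0, k, h, steps):
    c = c0
    for _ in range(steps):
        c = c - k * h * c
    return c

k, c0, h, n = 0.5, 100.0, 2.0, 5          # deliberately large step (kh = 1)
exact = c0 * math.exp(-k * h * n)
print(nsfd_decay(c0, k, h, n), euler_decay(c0, k, h, n), exact)
```

Here forward Euler collapses to zero in one step (since kh = 1), while the NSFD update tracks the exact solution, illustrating the dynamical consistency the abstract claims for varying step sizes.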
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
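The filtering experiment above can be mimicked in miniature (a synthetic 1-D profile of our own, not the survey data): a sandbar bump progressively loses amplitude as a moving-average filter widens, which is the kind of resolution loss the models respond to.

```python
import math

# Smooth a synthetic 1-D bathymetry profile with moving averages of
# increasing width and measure how much sandbar amplitude is lost.

def smooth(z, half_width):
    n = len(z)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(z[lo:hi]) / (hi - lo))
    return out

# flat 2 m depth plus a Gaussian sandbar centred at x = 400 m
x = [i * 5.0 for i in range(200)]                 # 5 m sample spacing
z = [-2.0 + 1.5 * math.exp(-((xi - 400.0) / 40.0) ** 2) for xi in x]

for w in (1, 5, 20):
    zs = smooth(z, w)
    loss = max(z) - max(zs)                       # lost bar amplitude
    print(w, round(loss, 3))  # heavier smoothing flattens the bar more
```

Comparing model output on the filtered profiles against the unfiltered one, as the paper does, turns this amplitude loss into a quantitative error estimate.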
VNIR spectral modeling of Mars analogue rocks: first results
Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.
Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding their origin, evolution, and internal structures. Technological improvements in remote sensors, and the associated implications for planetary studies, have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues of planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and in petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on the Martian surface: basalt with abundant plagioclase and clinopyroxene, and andesite dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations, future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, chiefly to particulate samples, and as a result cannot be considered objective techniques for predicting compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks, and developing objective criteria for the modeling, are the next required steps. This is the focus of the present investigation. We present results of
Engineering design of systems models and methods
Buede, Dennis M
2009-01-01
The ideal introduction to the engineering design of systems-now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures, and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures. This book is designed to be an introductory reference ...
A Pansharpening Method Based on HCT and Joint Sparse Model
Directory of Open Access Journals (Sweden)
XU Ning
2016-04-01
Full Text Available A novel fusion method based on the hyperspherical color transform (HCT) and a joint sparsity model is proposed for further decreasing the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by the HCT; the intensity component is then fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high-quality results. Finally, the fused multispectral image is obtained by inverse wavelet and HCT transforms on the new lower-frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 imagery indicate that the proposed method achieves remarkable results.
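The forward and inverse hyperspherical colour transform for a single 3-band pixel can be sketched directly (standard HCT formulas; the wavelet fusion and sparse-coding stages are omitted, and the pixel values are invented):

```python
import math

# HCT for one 3-band pixel: forward transform to an intensity plus two
# angles, replace the intensity (as pan-sharpening does), then invert.

def hct_forward(b1, b2, b3):
    inten = math.sqrt(b1 * b1 + b2 * b2 + b3 * b3)
    phi1 = math.atan2(math.sqrt(b2 * b2 + b3 * b3), b1)
    phi2 = math.atan2(b3, b2)
    return inten, phi1, phi2

def hct_inverse(inten, phi1, phi2):
    b1 = inten * math.cos(phi1)
    b2 = inten * math.sin(phi1) * math.cos(phi2)
    b3 = inten * math.sin(phi1) * math.sin(phi2)
    return b1, b2, b3

px = (0.3, 0.5, 0.4)
i, p1, p2 = hct_forward(*px)
sharpened = hct_inverse(1.2 * i, p1, p2)   # swap in a sharper intensity
print(sharpened)  # band ratios preserved, so spectral angles are unchanged
```

Because only the radius changes while the angles are kept, the band ratios of the pixel are preserved exactly, which is why the HCT is attractive for limiting spectral distortion.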
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Annular dispersed flow analysis model by Lagrangian method and liquid film cell method
International Nuclear Information System (INIS)
Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.
2003-01-01
A new annular dispersed flow analysis model was developed in which both droplet behavior and liquid film behavior are analyzed simultaneously. Droplet behavior in turbulent flow was analyzed by a Lagrangian method with a refined stochastic model, while liquid film behavior was simulated by a moving-rough-wall boundary condition together with a liquid film cell model used to estimate the liquid film flow rate. The height of the moving rough wall was estimated from a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates; the deposition flow rate was calculated by the Lagrangian method and the entrainment flow rate from an entrainment correlation. To verify the moving-rough-wall model, turbulent flow analysis results under annular flow conditions were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental radial distributions of droplet mass flux were compared with the analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions, but after modifying the entrainment rate correlation the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior is sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment ratio correlation should also be corrected
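The liquid film cell bookkeeping can be caricatured as a marching mass balance (our own drastic simplification with invented rates, not the paper's model): each cell updates the film flow rate with droplet deposition in and entrainment out.

```python
# Toy liquid-film cell balance: marching along the channel, each cell
# adjusts the film flow rate by (deposition in) - (entrainment out).
# All rates are invented, uniform per-cell values.

def film_profile(w0, deposition, entrainment, n_cells):
    """w0: inlet film flow rate; per-cell deposition/entrainment amounts."""
    w = w0
    profile = [w]
    for i in range(n_cells):
        w = w + deposition[i] - entrainment[i]
        w = max(w, 0.0)             # the film cannot go negative
        profile.append(w)
    return profile

dep = [0.02] * 10                   # deposition gain per cell
ent = [0.05] * 10                   # entrainment loss per cell
print(film_profile(0.5, dep, ent, 10))   # film thins along the channel
```

In the actual model the deposition term comes from the Lagrangian droplet tracking and the entrainment term from a correlation, so the two sub-models are coupled through exactly this kind of per-cell balance.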
Melt coolability modeling and comparison to MACE test results
International Nuclear Information System (INIS)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1992-01-01
An important question in the assessment of severe accidents in light water nuclear reactors is the ability of water to quench a molten corium-concrete interaction and thereby terminate the accident progression. As part of the Melt Attack and Coolability Experiment (MACE) Program, phenomenological models of the corium quenching process are under development. The modeling approach considers both bulk cooldown and crust-limited heat transfer regimes, as well as criteria for the pool thermal hydraulic conditions which separate the two regimes. The model is then compared with results of the MACE experiments
Railway Track Allocation: Models and Methods
DEFF Research Database (Denmark)
Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias
2011-01-01
Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level.
ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING
Directory of Open Access Journals (Sweden)
Brînduşa-Antonela SBÎRCEA
2011-01-01
Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.
Experimental modeling methods in Industrial Engineering
Directory of Open Access Journals (Sweden)
Peter Trebuňa
2009-03-01
Full Text Available Dynamic approaches to the management systems of present industrial practice force businesses to address the continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a system approach not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore focuses on describing modeling in industrial practice by a system approach, in order to avoid carrying erroneous decisions into the implementation phase and thus prevent prolonged reliance on "trial and error" methods.
Mechanics, Models and Methods in Civil Engineering
Maceri, Franco
2012-01-01
„Mechanics, Models and Methods in Civil Engineering” collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity of understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European research network that has been active for many years. This book will be of major interest to readers aware of modern Civil Engineering.
The forward tracking, an optical model method
Benayoun, M
2002-01-01
This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, either cheated (taken from Monte Carlo truth) or found by real pattern recognition, the tracks are found in the ST1-3 chambers after the magnet. The main ingredient of the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.
Statistical Models and Methods for Lifetime Data
Lawless, Jerald F
2011-01-01
Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and ... a valuable contribution to the statistical literature. Highly recommended ..." (Choice); "This is an important book, which will appeal to statisticians working on survival analysis problems." (Biometrics); "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data ... this is a highly competent and agreeable statistical textbook." (Statistics in Medicine). The statistical analysis of lifetime or response time data is a key tool in engineering,
Vortex Tube Modeling Using the System Identification Method
Energy Technology Data Exchange (ETDEWEB)
Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)
2017-05-15
In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method; the model structure utilized in this work is of ARX type (Auto-Regressive with eXtra inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy, and it is shown that the derived model passes the stability test. Both static and dynamic numerical experiments, performed by changing the angle of the low-temperature-side throttle valve, confirm that the derived model closely mimics the physical behavior of the vortex tube, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
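The ARX structure named in the abstract can be sketched in a few lines: a hedged toy example, fitting y[k] = a*y[k-1] + b*u[k-1] + e[k] by least squares on synthetic data. The signals, model order, and coefficients are illustrative assumptions, not the authors' vortex tube measurements.

```python
import numpy as np

# Synthetic first-order ARX data: u could stand for the throttle-valve
# angle, y for the cold-side temperature (illustrative only).
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.normal(size=500)                 # input signal
y = np.zeros(500)                        # output signal
for k in range(1, 500):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 0.01 * rng.normal()

# Stack regressors [y[k-1], u[k-1]] and solve the linear least-squares problem
Phi = np.column_stack([y[:-1], u[:-1]])
a_est, b_est = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# A discrete first-order ARX model is stable iff |a| < 1
print(a_est, b_est, abs(a_est) < 1.0)
```

The stability check in the last line mirrors the stability test mentioned in the abstract: for this model order it reduces to the pole magnitude being below one.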
Modeling conflict : research methods, quantitative modeling, and lessons learned.
Energy Technology Data Exchange (ETDEWEB)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
International Nuclear Information System (INIS)
Park, Inseok; Grandhi, Ramana V.
2014-01-01
Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process
Energy Technology Data Exchange (ETDEWEB)
Luneville, L
1998-06-01
The multigroup discrete ordinates method is a classical way to solve the transport (Boltzmann) equation for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author) 15 refs.
Relationship Marketing results: proposition of a cognitive mapping model
Directory of Open Access Journals (Sweden)
Iná Futino Barreto
2015-12-01
Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is brokered by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to insert core elements of RM, but absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.
Modeling Storm Surges Using Discontinuous Galerkin Methods
2016-06-01
[…] layer non-reflecting boundary condition (NRBC) on the right wall of the model. An NRBC is created by introducing an artificial boundary, B, which truncates the […] closer to the shoreline. In our simulation, we also learned of the effects spurious waves can have on the results. Due to boundary conditions, a […]
Functional results-oriented healthcare leadership: a novel leadership model.
Al-Touby, Salem Said
2012-03-01
This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leaders must strive to attain effectiveness of their care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional functional leadership model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel functional results-oriented healthcare leadership model. The recommended model embosses the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.
Generalized framework for context-specific metabolic model extraction methods
Directory of Open Access Journals (Sweden)
Semidán Robaina Estévez
2014-09-01
Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes, such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, including developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella, provides the means to better understand their functioning, highlights similarities and differences, and helps users in selecting the most suitable method for an application.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
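The single-passage versus repeated-passage stress accumulation discussed above can be illustrated with a minimal sketch of the linear damage measure SA = Σ τᵢ·Δtᵢ along one pathline. The stress samples, time steps, and passage count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Scalar shear-stress samples along one particle pathline [Pa] and the
# time step spent near each sample [s] (illustrative values).
tau = np.array([1.2, 3.5, 8.0, 4.1, 0.9])
dt = np.array([0.01, 0.01, 0.005, 0.01, 0.02])

# Linear stress accumulation over a single passage through the device
sa_single = np.sum(tau * dt)

# Repeated-passage option: the same trajectory traversed n times
n_passages = 3
sa_repeated = n_passages * sa_single
print(sa_single, sa_repeated)
```

Real blood-damage models often use a power-law in stress and time rather than this linear sum; the sketch only shows where the single- versus repeated-passage choice enters the post-processing.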
Value of the distant future: Model-independent results
Katz, Yuri A.
2017-01-01
This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive an analytical expression for the long-run discount factor and provide a detailed comparison of the obtained result with the outcomes of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive a non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
Moments Method for Shell-Model Level Density
International Nuclear Information System (INIS)
Zelevinsky, V; Horoi, M; Sen'kov, R A
2016-01-01
The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained, and examples of the method at work are given. The calculated level density coincides almost exactly with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity, with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to results different from those obtained by mean-field combinatorics. (paper)
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
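Why the error distribution matters can be sketched with a toy analogue: a linear growth curve y = b0 + b1·t + e fitted by maximum likelihood under a normal versus a Student-t error assumption on heavy-tailed synthetic data. The paper itself uses Bayesian MCMC in SAS; this frequentist sketch, with assumed coefficients and an assumed t(3) error, only illustrates the idea.

```python
import numpy as np
from scipy import optimize, stats

# 20 synthetic subjects measured at 5 waves, heavy-tailed (t, df=3) errors
rng = np.random.default_rng(1)
t = np.tile(np.arange(5), 20).astype(float)
y = 2.0 + 1.5 * t + stats.t(df=3).rvs(len(t), random_state=1)

def negloglik(params, dist):
    """Negative log-likelihood for y = b0 + b1*t + s*e, e ~ dist."""
    b0, b1, log_s = params
    resid = (y - b0 - b1 * t) / np.exp(log_s)
    return -np.sum(dist.logpdf(resid)) + len(t) * log_s

fit_norm = optimize.minimize(negloglik, [0.0, 1.0, 0.0], args=(stats.norm,))
fit_t = optimize.minimize(negloglik, [0.0, 1.0, 0.0], args=(stats.t(df=3),))
print(fit_norm.x[:2], fit_t.x[:2])   # both near (2.0, 1.5)
```

Both fits recover the growth parameters here, but with misspecified normal errors the standard errors (not shown) lose efficiency, which is the point the simulation study in the abstract makes.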
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
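The fitting families named in the abstract (two-term Gaussian and two-term sine) and the RMSE/R² scoring can be sketched as follows. The data are synthetic daylight-hour values, not the UTP measurements, and the starting parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model."""
    return a1 * np.exp(-((x - b1) / c1) ** 2) + a2 * np.exp(-((x - b2) / c2) ** 2)

def sin2(x, a1, b1, c1, a2, b2, c2):
    """Two-term sine model."""
    return a1 * np.sin(b1 * x + c1) + a2 * np.sin(b2 * x + c2)

# Synthetic "global radiation vs hour of day" data: a Gaussian bump + noise
x = np.linspace(6, 19, 60)
y = 900 * np.exp(-((x - 12.8) / 3.1) ** 2) \
    + 5 * np.random.default_rng(0).normal(size=x.size)

def scores(model, p0):
    """Fit the model and return (RMSE, R^2) goodness-of-fit statistics."""
    p, _ = curve_fit(model, x, y, p0=p0, maxfev=100000)
    r = y - model(x, *p)
    rmse = np.sqrt(np.mean(r ** 2))
    r2 = 1.0 - np.sum(r ** 2) / np.sum((y - y.mean()) ** 2)
    return rmse, r2

gauss_fit = scores(gauss2, [800, 12, 3, 100, 14, 5])
sine_fit = scores(sin2, [800, 0.3, 0.0, 100, 0.6, 0.0])
print(gauss_fit, sine_fit)
```

On this synthetic bell-shaped daily profile the Gaussian family naturally fits down to the noise level; on real radiation data the comparison is an empirical question, which is what the paper's RMSE/R² ranking addresses.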
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
Curve fitting methods for solar radiation data modeling
International Nuclear Information System (INIS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-01-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research
Evaluation of radiological processes in the Ternopil region by the box model method
Directory of Open Access Journals (Sweden)
І.В. Матвєєва
2006-02-01
Flows of the radionuclide Sr-90 in the ecosystem of Kotsubinchiky village, Ternopil oblast, were analyzed. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This allowed us to evaluate how internal irradiation dose loads form for different population groups – workers, retirees, and children – and to predict the dynamics of these loads over the years after the Chernobyl accident.
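A box (compartment) model of the kind used above can be sketched as a pair of coupled first-order equations with radioactive decay. The two boxes, transfer rates, and initial inventory below are illustrative assumptions, not the paper's calibrated Sr-90 flows.

```python
import numpy as np

half_life = 28.8 * 365.25          # Sr-90 half-life [days]
lam = np.log(2) / half_life        # decay constant [1/day]
k12 = 1e-3                         # box 1 -> box 2 transfer (e.g. soil -> plant) [1/day]
k21 = 5e-2                         # box 2 -> box 1 return [1/day]

def step(soil, plant, dt=1.0):
    """Advance both boxes by one explicit-Euler step of dt days."""
    d_soil = -k12 * soil + k21 * plant - lam * soil
    d_plant = k12 * soil - k21 * plant - lam * plant
    return soil + dt * d_soil, plant + dt * d_plant

soil, plant = 1.0, 0.0             # normalized initial inventory in box 1
for _ in range(365):               # one year of daily steps
    soil, plant = step(soil, plant)
print(soil, plant, soil + plant)   # total decays only via lam
```

Dose loads for a population group would then follow by multiplying box activities by intake and dose-conversion factors, which is the step the abstract's model performs for each group.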
Modelling viscoacoustic wave propagation with the lattice Boltzmann method.
Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen
2017-08-31
In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. LBM is a microscopic method that models waves by tracking the evolution states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectrum ratio method, we can reveal the relationship between the quality factor Q and the parameter τ in LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by LBM coincide with those obtained by FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with traditional lattice methods. In addition, through extensive experiments, we find that the relaxation time of LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.
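The spectrum ratio method used above to extract Q can be sketched independently of the LBM solver: for two records separated by traveltime Δt, attenuation gives ln(A₂/A₁) = −πfΔt/Q, so Q follows from the slope of the log spectral ratio versus frequency. The spectra below are synthetic with an assumed Q of 30.

```python
import numpy as np

Q_true, dt = 30.0, 0.5            # quality factor, traveltime between receivers [s]
f = np.linspace(5, 60, 50)        # frequency band [Hz]
A1 = np.exp(-(f / 40.0) ** 2)     # reference amplitude spectrum (near receiver)
A2 = A1 * np.exp(-np.pi * f * dt / Q_true)   # attenuated spectrum (far receiver)

# Linear fit of ln(A2/A1) vs f; the slope equals -pi*dt/Q
slope = np.polyfit(f, np.log(A2 / A1), 1)[0]
Q_est = -np.pi * dt / slope
print(Q_est)                      # recovers 30 on noise-free data
```

In the paper's workflow, running this estimator on LBM wavefields computed at different relaxation times τ is what maps out the Q(τ) relationship.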
Methodology and Results of Mathematical Modelling of Complex Technological Processes
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time is confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
Modeling vertical loads in pools resulting from fluid injection
International Nuclear Information System (INIS)
Lai, W.; McCauley, E.W.
1978-01-01
Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena
A method for model identification and parameter estimation
International Nuclear Information System (INIS)
Bambach, M; Heinkenschloss, M; Herty, M
2013-01-01
We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established following this paradigm principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
Interval estimation methods of the mean in small sample situation and the results' comparison
International Nuclear Information System (INIS)
Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen
2009-01-01
The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculation of the mean intervals is carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
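Two of the compared methods, the classical t-interval and the Bootstrap percentile interval, can be sketched for one small sample. The data values (n = 5) are illustrative, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([9.8, 10.1, 10.4, 9.9, 10.6])    # small sample, n = 5

# Classical interval: mean +/- t_{0.975, n-1} * s / sqrt(n)
n, mean, s = len(x), x.mean(), x.std(ddof=1)
t975 = 2.776                                   # Student-t quantile for df = 4
classic = (mean - t975 * s / np.sqrt(n), mean + t975 * s / np.sqrt(n))

# Bootstrap percentile interval: 2.5th/97.5th percentiles of resampled means
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(10000)])
boot = tuple(np.percentile(boot_means, [2.5, 97.5]))
print(classic, boot)
```

Note that the naive percentile interval tends to be too narrow at such small n, which is why the small-sample comparison in the abstract is of practical interest.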
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
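One common model-based estimation scheme of the kind described above can be sketched in a few lines: the frequency converter's shaft power and speed estimates are scaled to the nominal speed with the affinity laws, and the flow rate is read off the pump's characteristic power-flow curve. The curve coefficients and numbers below are illustrative assumptions, not from the paper's pilot installations.

```python
N_NOM = 1450.0                     # nominal pump speed [rpm]

def flow_from_power(p_shaft, n):
    """Estimate flow rate [l/s] from shaft power [kW] and speed [rpm]."""
    # Affinity law: power scales with the cube of the speed ratio
    p_nom = p_shaft * (N_NOM / n) ** 3
    # Assumed (illustrative) nominal power-flow curve: P(Q) = 2.0 + 0.08*Q
    q_nom = (p_nom - 2.0) / 0.08
    # Affinity law: flow scales linearly with the speed ratio
    return q_nom * (n / N_NOM)

print(flow_from_power(4.0, 1450.0))    # at nominal speed: (4.0-2.0)/0.08 = 25.0 l/s
print(flow_from_power(2.048, 1160.0))  # same operating point at 80% speed: 20.0 l/s
```

The accuracy of such an estimate hinges on how well the stored characteristic curve matches the installed pump, which is why the paper analyzes the factors affecting estimation accuracy and validates against laboratory measurements.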
Results of the eruptive column model inter-comparison study
Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza
2016-01-01
This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
Delta-tilde interpretation of standard linear mixed model results
DEFF Research Database (Denmark)
Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra
2016-01-01
effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights so that they depict what can be seen as approximately the average pairwise […] data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms […]
Mathematical models and methods for planet Earth
Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta
2014-01-01
In 2013 several scientific activities were devoted to mathematical research for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Rome, Italy, in May 2013. The fields of interest span from the impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to the monitoring of geological events, from the study of tumor growth to sociological problems. In all these fields mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: to mention just a few, stochastic processes, PDEs, normal forms, and chaos theory.
The research methods and model of protein turnover in animal
International Nuclear Information System (INIS)
Wu Xilin; Yang Feng
2002-01-01
The author discussed the concept and research methods of protein turnover in animal body. The existing problems and the research results of animal protein turnover in recent years were presented. Meanwhile, the measures to improve the models of animal protein turnover were analyzed
Application of the simplex method of linear programming model to ...
African Journals Online (AJOL)
This work discussed how the simplex method of linear programming can be used to maximize the profit of a business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
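The profit-maximization setup described in this abstract can be sketched as a small linear program. The coefficients below are illustrative stand-ins, not Saclux Paint Company's actual data, and `scipy.optimize.linprog` stands in for a hand-worked simplex tableau:

```python
# Hypothetical two-product profit maximization in the spirit of the
# case study above; all coefficients are invented for illustration.
from scipy.optimize import linprog

# Maximize profit 5*x1 + 4*x2  ->  minimize -(5*x1 + 4*x2)
c = [-5.0, -4.0]
A_ub = [[6.0, 4.0],   # e.g. raw-material constraint
        [1.0, 2.0]]   # e.g. labour constraint
b_ub = [24.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal product mix and maximum profit
```

Perturbing `c` or `b_ub` and re-solving is one simple way to observe the effect of "variation in the optimal result" the abstract mentions.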
Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: whether each predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.
Diffusion in condensed matter methods, materials, models
Kärger, Jörg
2005-01-01
Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey gives detailed insight into diffusion in condensed matter. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces, as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.
FDTD method and models in optical education
Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe
2017-08-01
In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. FDTD Solutions, a simulation package based on the FDTD algorithm, is presented as a tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives and then used to simulate the response of the interaction between an electromagnetic pulse and an ideal conductor or semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband results can be obtained from a single simulation. Promoting the FDTD algorithm in optical education is therefore practical and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles and nanoparticles) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the problems encountered in optical education. Additionally, the GUI of FDTD Solutions is friendly enough that learners can master it quickly.
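The core idea the abstract describes, leapfrog updates of the discretized Maxwell equations, can be sketched in one dimension. Grid size, Courant number and source are arbitrary teaching choices, not parameters from the paper:

```python
import numpy as np

# Minimal 1-D FDTD sketch (Yee scheme, free space, normalized units).
nz, nt = 200, 300
ez = np.zeros(nz)   # electric field
hy = np.zeros(nz)   # magnetic field

for n in range(nt):
    # Update H from the spatial difference of E (Courant number 0.5)
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial difference of H
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # Soft Gaussian source injected at the grid centre
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print(np.abs(ez).max())  # pulse amplitude after propagation
```

The untouched end cells act as perfectly conducting walls, so the pulse reflects rather than leaving the grid; absorbing boundaries would be the next pedagogical step.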
A meshless method for modeling convective heat transfer
Energy Technology Data Exchange (ETDEWEB)
Carrington, David B [Los Alamos National Laboratory
2010-01-01
A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in MATLAB. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward-facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
First experiments results about the engineering model of Rapsodie
International Nuclear Information System (INIS)
Chalot, A.; Ginier, R.; Sauvage, M.
1964-01-01
This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ the difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...); 2/ the experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design of the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr
Reconstructing Holocene climate using a climate model: Model strategy and preliminary results
Haberkorn, K.; Blender, R.; Lunkeit, F.; Fraedrich, K.
2009-04-01
An Earth system model of intermediate complexity (Planet Simulator; PlaSim) is used to reconstruct Holocene climate based on proxy data. The Planet Simulator is a user-friendly general circulation model (GCM) suitable for palaeoclimate research. Its easy handling and modular structure allow for fast and problem-dependent simulations. The spectral model is based on the moist primitive equations conserving momentum, mass, energy and moisture. Besides the atmospheric part, a mixed-layer ocean with sea ice and a land surface with biosphere are included. The present-day climate of PlaSim, based on an AMIP II control run (T21/10L resolution), shows reasonable agreement with ERA-40 reanalysis data. Combining PlaSim with a socio-technological model (GLUES; DFG priority project INTERDYNAMIK) provides improved knowledge of the shift from hunting-gathering to agropastoral subsistence societies. This is achieved by a data assimilation approach, incorporating proxy time series into PlaSim to initialize palaeoclimate simulations of the Holocene. For this, the following strategy is applied: the sensitivities of the terrestrial PlaSim climate are determined with respect to sea surface temperature (SST) anomalies. Here, the focus is the impact of regionally varying SST both in the tropics and in the Northern Hemisphere mid-latitudes. The inverse of these sensitivities is used to determine the SST conditions necessary for the nudging of land and coastal proxy climates. Preliminary results indicate the potential, the uncertainty and the limitations of the method.
A Versatile Nonlinear Method for Predictive Modeling
Liou, Meng-Sing; Yao, Weigang
2015-01-01
As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can its potential be utilized in the future? Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in an accurate and efficient approach for handling these applications: (1) capturing the complex nonlinear dynamics inherent in the system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
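A minimal sketch of the first-order version of the surrogate described above: local Taylor expansions at sampling points along a trajectory, blended by normalized Gaussian weighting functions. The test function, sampling points and weight width are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

# First-order Taylor surrogate with nonlinear weighting functions.
xs = np.linspace(0.0, np.pi, 6)   # sampling points along a "trajectory"
f, df = np.sin(xs), np.cos(xs)    # sampled values and first derivatives

def surrogate(x, width=0.3):
    # Normalized Gaussian weights select the nearby expansions
    w = np.exp(-((x - xs) / width) ** 2)
    w /= w.sum()
    # Blend the local first-order Taylor expansions
    return np.sum(w * (f + df * (x - xs)))

x_test = 1.3
print(surrogate(x_test), np.sin(x_test))  # surrogate vs. exact
```

Shrinking `width` makes the surrogate lean on the nearest expansion (more local, less smooth); the second-order extension the abstract proposes would add a Hessian term to each local model.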
Acoustic results of the Boeing model 360 whirl tower test
Watts, Michael E.; Jordan, David
1990-09-01
An evaluation is presented of whirl tower test results for the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system, intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of the whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.
Exact results for the one dimensional asymmetric exclusion model
International Nuclear Information System (INIS)
Derrida, B.; Evans, M.R.; Pasquier, V.
1993-01-01
The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author)
Sarnadskiĭ, V N
2007-01-01
The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and a fixed seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology-based method. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Review of Current Standard Model Results in ATLAS
Brandt, Gerhard; The ATLAS collaboration
2018-01-01
This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p-p$ collisions at $\sqrt{s}=7,8$~TeV in LHC Run-1 as well as results using data at $\sqrt{s}=13$~TeV in LHC Run-2 are covered. The status of cross section measurements from soft QCD processes, jet production and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow it to be further constrained. In addition, they allow probing of new physics that would manifest itself through extra gauge couplings or Standard Model gauge couplings deviating from their predicted values.
DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS
Directory of Open Access Journals (Sweden)
S. A. Bornyakov
2016-01-01
Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] have laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves with special tools and equipment, which require sufficient organizational and technical resources, have not yet been conducted. The authors attempted to find a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). A montmorillonite clay-and-water paste was placed evenly on the two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to obtain high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of the components of displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00×72.17 mm) shown by a dot-and-dash line in Fig. 1, A. To ensure a sufficient level of …
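The displacement-estimation core of the digital image correlation method cited above can be sketched as an FFT-based cross-correlation between two image subsets; the synthetic speckle pattern and the imposed integer-pixel shift below are illustrative:

```python
import numpy as np

# Recover the shift between a reference and a "deformed" image subset
# from the peak of their FFT-based cross-correlation.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))                       # reference speckle subset
defm = np.roll(ref, shift=(3, 5), axis=(0, 1))   # subset shifted by (3, 5)

# Circular cross-correlation of the zero-mean images via the FFT
f = np.fft.fft2(ref - ref.mean())
g = np.fft.fft2(defm - defm.mean())
xcorr = np.fft.ifft2(f.conj() * g).real

dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
print(dy, dx)  # recovered shift: 3 5
```

Production DIC codes refine this integer-pixel estimate to sub-pixel accuracy (e.g. by correlation-peak interpolation), which is what makes whole-field strain mapping possible.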
Statistical learning modeling method for space debris photometric measurement
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
2016-03-01
Photometric measurement is an important way to identify space debris, but present methods of photometric measurement place many constraints on the star image and need complex image processing. Aiming at these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit, by least squares, the parameters of the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
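The train/test least-squares procedure the abstract describes can be sketched as follows; the star data and the linear calibration model are synthetic assumptions, not the paper's actual measurement model:

```python
import numpy as np

# Split known stars into training and testing sets, fit a linear
# instrumental-to-catalog magnitude model by least squares on the
# training stars, and score its accuracy on the testing stars.
rng = np.random.default_rng(1)
catalog = rng.uniform(8.0, 14.0, 40)                    # catalog magnitudes
instr = 1.02 * catalog + 0.7 + rng.normal(0, 0.05, 40)  # instrumental mags

train, test = np.arange(30), np.arange(30, 40)
A = np.vstack([instr[train], np.ones(train.size)]).T
coef, *_ = np.linalg.lstsq(A, catalog[train], rcond=None)

pred = coef[0] * instr[test] + coef[1]
rmse = np.sqrt(np.mean((pred - catalog[test]) ** 2))
print(rmse)  # measurement accuracy on the testing stars, in magnitudes
```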
Comparison of transient PCRV model test results with analysis
International Nuclear Information System (INIS)
Marchertas, A.H.; Belytschko, T.B.
1979-01-01
Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium-cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes into direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The load history was obtained from an ICECO analysis, using the equations of state for the source and the water. A detailed check of this solution was made to ensure that the derived loading provided the correct input. The DYNAPCON code was then used for the analysis of the prestressed concrete containment model. This analysis required the simulation of prestressing and of the response of the model to the applied transient load. The calculations correctly predict the magnitudes of displacements of the PCRV model. In addition, the displacement time histories obtained from the calculations reproduce the general features of the experimental records: the period elongation and amplitude increase as compared to an elastic solution, and also the absence of permanent displacement. However, the calculated period still underestimates the experimental one, while the amplitude is generally somewhat too large.
Thermal-Chemical Model Of Subduction: Results And Tests
Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.
2005-12-01
Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin to mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifested by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.
Methods and models used in comparative risk studies
International Nuclear Information System (INIS)
Devooght, J.
1983-01-01
Comparative risk studies make use of a large number of methods and models based upon incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr
Free wake models for vortex methods
Energy Technology Data Exchange (ETDEWEB)
Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)
1997-08-01
The blade element method is fast and works well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry are presented. (au)
A Kriging Model Based Finite Element Model Updating Method for Damage Detection
Directory of Open Access Journals (Sweden)
Xiuming Yang
2017-10-01
Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
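A minimal sketch of the Kriging (Gaussian-process) surrogate at the heart of such methods: sampled objective values are interpolated by a kernel model that an optimizer such as EGO could then search cheaply. The kernel form, length scale and 1-D test objective are illustrative assumptions:

```python
import numpy as np

# Simple Kriging interpolation of a sampled 1-D "objective" surface.
def kernel(a, b, ell=0.2):
    # Gaussian correlation between two sets of 1-D points
    return np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(6.0 * x_train)          # stand-in objective samples

# Solve the kernel system once; jitter stabilizes the inversion
K = kernel(x_train, x_train) + 1e-10 * np.eye(x_train.size)
alpha = np.linalg.solve(K, y_train)

x_new = np.array([0.33])
y_pred = kernel(x_new, x_train) @ alpha
print(y_pred[0], np.sin(6.0 * 0.33))     # surrogate vs. true objective
```

Because each surrogate evaluation is a small matrix-vector product, a global search over the response surface costs almost nothing compared with repeated FE analyses.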
Measurement model choice influenced randomized controlled trial results.
Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos
2016-11-01
In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and of group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented: a real-life RCT and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements; the use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
Continuum-Kinetic Models and Numerical Methods for Multiphase Applications
Nault, Isaac Michael
This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scale effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation can be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.
Stress description model by non destructive magnetic methods
International Nuclear Information System (INIS)
Flambard, C.; Grossiord, J.L.; Tourrenc, P.
1983-01-01
For several years, CETIM has been investigating possibilities for materials analysis by developing a method founded on the observation of ferromagnetic noise. Experiments have revealed correlations between the state of the material and the recorded signal. These correlations open the way to industrial applications for measuring stresses and strains in the elastic and plastic ranges. This article starts with a brief historical account and the theoretical background of the method. The experimental frame of this research is described, and the main results are analyzed. A theoretical model was built up and is presented here; it appears to agree with some of the experimental observations. The main results concerning applied stress and thermal and surface treatments (decarbonizing) are presented [fr
CAD-based automatic modeling method for Geant4 geometry model through MCAM
International Nuclear Information System (INIS)
Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.
2013-01-01
Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, a calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most existing modeling programs; in particular, some are not accurate or are not adapted to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Computer Aided Design (CAD) based modeling method for Geant4 was developed. The essence of this method is the conversion between a CAD model represented by boundary representation (B-REP) and a GDML model represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)
SR-Site groundwater flow modelling methodology, setup and results
International Nuclear Information System (INIS)
Selroos, Jan-Olof; Follin, Sven
2010-12-01
As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the excavation and operational phases, the initial period of temperate climate after closure, and the remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site-specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in those reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.
Geochemical controls on shale groundwaters: Results of reaction path modeling
International Nuclear Information System (INIS)
Von Damm, K.L.; VandenBrook, A.J.
1989-03-01
The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250 °C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.
Loss of spent fuel pool cooling PRA: Model and results
International Nuclear Information System (INIS)
Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.
1996-09-01
This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10⁻⁵ and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10⁻³. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.
Finite element volume methods: applications to the Navier-Stokes equations and convergence results
International Nuclear Information System (INIS)
Emonot, P.
1992-01-01
The first chapter describes the equations modeling incompressible fluid flow and gives a quick presentation of the finite volume method. The second chapter is an introduction to the finite element volume method; the box model is described, and a method adapted to Navier-Stokes problems is proposed. The third chapter presents an error analysis of the finite element volume method for the Laplacian problem, with some examples of one-, two- and three-dimensional calculations. The fourth chapter extends the error analysis of the method to the Navier-Stokes problem.
Directory of Open Access Journals (Sweden)
Christopher Heine
2014-08-01
A detailed description of the properties of rubber parts is gaining in importance in current multi-body simulation models. One application example is a multi-body simulation of the washing machine movement. Inside the washing machine there are different force transmission elements which consist completely or partly of rubber. Rubber parts, or elastomers in general, usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterizing the amplitude and frequency dependence of such a rheological model is presented within this paper. Within this method, the rheological model used can be reduced or expanded in order to capture various non-linear effects. An original contribution is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.
Preliminary results of steel containment vessel model test
International Nuclear Information System (INIS)
Matsumoto, T.; Komine, K.; Arai, S.
1997-01-01
A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented
Results of the ITER toroidal field model coil project
International Nuclear Information System (INIS)
Salpietro, E.; Maix, R.
2001-01-01
In the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted until October 2001. The first results have shown that the main goals of the project have been achieved.
International Nuclear Information System (INIS)
Reventos, F.
2008-01-01
One of the goals of computer code models of Nuclear Power Plants (NPP) is to demonstrate that these plants are designed to respond safely to postulated accidents. Models and codes are an approximation of the real physical behaviour occurring during a hypothetical transient, and the data used to build these models are also known only with a certain accuracy. Therefore code predictions are uncertain. The BEMUSE programme is focussed on the application of uncertainty methodologies to large break LOCAs. The programme intends to evaluate the practicability, quality and reliability of best-estimate methods including uncertainty evaluations in applications relevant to nuclear reactor safety, to develop common understanding, and to promote and facilitate their use by regulatory bodies and the industry. In order to fulfil its objectives, BEMUSE is organized in two steps and six phases. The first step is devoted to the complete analysis of a LB-LOCA (L2-5) in an experimental facility (LOFT), while the second step refers to an actual Nuclear Power Plant. Both steps provide results on thermalhydraulic best-estimate simulation as well as uncertainty and sensitivity evaluation. At the time this paper was prepared, phases I, II and III were fully completed and the corresponding reports had been issued. The Phase IV draft report is currently being reviewed, while participants are working on Phase V developments. Phase VI consists in preparing the final status report, which will summarize the most relevant results of the whole programme.
Storm surge model based on variational data assimilation method
Directory of Open Access Journals (Sweden)
Shi-li Huang
2010-06-01
By combining computational and observational information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, with the aim of improving the forecasting accuracy of the storm surge. By taking the wind stress drag coefficient as the control variable, the variational model was developed and validated through data assimilation tests on an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. The actual storm surge induced by Typhoon 0515 was then forecast by the developed model, and the results demonstrate its efficiency in practical application.
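The idea of identifying a drag coefficient from observations can be sketched with a toy scalar model (not the paper's unstructured-grid model): a linearized wind-setup relation eta = Cd * W^2 / g, with Cd recovered by gradient descent on the observation misfit. The model form and all numbers below are assumptions for illustration only.

```python
import numpy as np

# Toy variational parameter identification: recover the wind stress drag
# coefficient Cd by minimizing the misfit between a scalar "surge model"
# eta = Cd * W**2 / g and synthetic observations.

g = 9.81
winds = np.array([10.0, 15.0, 20.0, 25.0])   # wind speeds (m/s), assumed
cd_true = 1.5e-3
obs = cd_true * winds**2 / g                 # synthetic "observations"

def grad(cd):
    """Gradient of the cost J(cd) = 0.5 * sum((model - obs)**2) w.r.t. cd."""
    resid = cd * winds**2 / g - obs
    return np.sum(resid * winds**2 / g)

cd = 1.0e-3                                  # first guess
for _ in range(200):                         # plain gradient descent
    cd -= 1e-4 * grad(cd)
```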
An efficient method for model refinement in diffuse optical tomography
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem which necessitates regularization. Bayesian methods are also suitable, since the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, for which model-retrieval criteria, especially total least squares (TLS), must be used to refine the model error. However, TLS is limited to linear systems, which is not the case when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear system. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
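The regularized linear step at the core of such schemes can be sketched with plain Tikhonov regularization, x = argmin ||Ax - b||^2 + lam*||x||^2, solved via the normal equations. This is a minimal illustration of the regularization idea only; the paper's RTLS treatment additionally accounts for errors in A itself.

```python
import numpy as np

# Tikhonov-regularized least squares on a small synthetic problem:
# x_hat = (A^T A + lam I)^{-1} A^T b.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))             # synthetic forward operator
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(20)   # noisy data

lam = 1e-2                                   # regularization weight (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
```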
(Re)interpreting LHC New Physics Search Results : Tools and Methods, 3rd Workshop
The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...
GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
In recent years there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation model calculations by best estimate calculations supplemented by uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best estimate thermal-hydraulic code calculations; otherwise single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large break loss of coolant accident on a reference reactor as well as to an experiment simulating containment behaviour.
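The GRS approach is commonly associated with Wilks' formula for choosing the number of code runs needed for a one-sided statistical tolerance limit. The sketch below computes the smallest n with 1 - beta^n >= gamma, including the classic 95%/95% case.

```python
# Minimal sample size per Wilks' formula for a one-sided tolerance limit:
# the maximum of n code results bounds the beta-quantile with confidence
# gamma once 1 - beta**n >= gamma.

def wilks_n_one_sided(beta, gamma):
    n = 1
    while 1.0 - beta**n < gamma:
        n += 1
    return n

n_9595 = wilks_n_one_sided(0.95, 0.95)   # classic 95%/95% requirement
n_9599 = wilks_n_one_sided(0.95, 0.99)
```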
DEFF Research Database (Denmark)
Panferov, O.; Sogachev, Andrey; Ahrends, B.
2010-01-01
The structure of forests stands changes continuously as a result of forest growth and both natural and anthropogenic disturbances like windthrow or management activities – planting/cutting of trees. These structure changes can stabilize or destabilize forest stands in terms of their resistance...... to wind damage. The driving force behind the damage is the climate, but the magnitude and sign of resulting effect depend on tree species, management method and soil conditions. The projected increasing frequency of weather extremes in the whole and severe storms in particular might produce wide area...... damage in European forest ecosystems during the 21st century. To assess the possible wind damage and stabilization/destabilization effects of forest management a number of numeric experiments are carried out for the region of Solling, Germany. The coupled small-scale process-based model combining Brook90...
Comparison of transient PCRV model test results with analysis
International Nuclear Information System (INIS)
Marchertas, A.H.; Belytschko, T.B.
1979-01-01
Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The tests are very similar to the series of tests made for the COVA experimental program, but the vessel here is the prestressed concrete container. (orig.)
International Nuclear Information System (INIS)
Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.
2010-01-01
Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
Chu, Chunlei; Stoffa, Paul L.
2012-01-01
sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced
An alternative method for centrifugal compressor loading factor modelling
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
In classical design methods the loading factor at the design point is calculated by one or another empirical formula, and performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients; a good modelling result is possible only if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
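The two-point linear model described above can be sketched directly: a straight line for loading factor psi versus exit flow coefficient phi, fixed by the zero-flow value and the design point. The numerical values below are illustrative, not taken from the cited stages.

```python
# Two-point linear loading-factor model:
# psi(phi) = psi0 + slope * phi, with the slope fixed so the line passes
# through the design point (phi_des, psi_des) and the zero-flow value psi0.

def loading_factor(phi, psi0, phi_des, psi_des):
    slope = (psi_des - psi0) / phi_des   # defines the line's inclination
    return psi0 + slope * phi

psi0, phi_des, psi_des = 0.72, 0.30, 0.55   # illustrative stage parameters
psi_at_design = loading_factor(phi_des, psi0, phi_des, psi_des)
psi_at_zero = loading_factor(0.0, psi0, phi_des, psi_des)
```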
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
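The validation idea, judging an approximation by whether its error stays within the original model's own stochastic variability, can be sketched with a toy individual-based model: binomial deaths each step versus the deterministic decay N(t) = N0*(1-p)^t. The model and the 2-standard-deviation threshold are illustrative assumptions, not the paper's epidemic models.

```python
import random
random.seed(1)

# Toy check: is the deterministic approximation within the stochastic
# model's variability (here +/- 2 SD of replicate outcomes)?

def stochastic_run(n0, p, steps):
    n = n0
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if random.random() < p)  # binomial deaths
    return n

n0, p, steps, reps = 500, 0.1, 10, 200
finals = [stochastic_run(n0, p, steps) for _ in range(reps)]
mean = sum(finals) / reps
sd = (sum((x - mean) ** 2 for x in finals) / reps) ** 0.5

approx = n0 * (1 - p) ** steps        # deterministic approximation
within = abs(approx - mean) <= 2 * sd # validation criterion (assumed form)
```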
Computation and experiment results of the grounding model of Three Gorges Power Plant
Energy Technology Data Exchange (ETDEWEB)
Wen Xishan; Zhang Yuanfang; Yu Jianhui; Chen Cixuan [Wuhan University of Hydraulic and Electrical Engineering (China); Qin Liming; Xu Jun; Shu Lianfu [Yangtze River Water Resources Commission, Wuhan (China)
1999-07-01
A model for the computation of the grounding parameters of the grids of Three Gorges Power Plant (TGPP) on the Yangtze River is presented in this paper. Using this model, computation and analysis of the grounding grids are carried out. The results show that the reinforcement grid of the dam is the main body of current dissipation; it must be reliably welded to form a good grounding grid. The experimental results show that the method and program of the computations are correct. (UK)
INTRAVAL Finnsjoen Test - modelling results for some tracer experiments
International Nuclear Information System (INIS)
Jakob, A.; Hadermann, J.
1994-09-01
This report presents the results within Phase II of the INTRAVAL study. Migration experiments performed at the Finnsjoen test site were investigated. The study was done to gain an improved understanding not only of the mechanisms of tracer transport, but also of the accuracy and limitations of the model used. The model is based on the concept of a dual porosity medium, taking into account one-dimensional advection, longitudinal dispersion, sorption onto the fracture surfaces, diffusion into connected pores of the matrix rock, and sorption onto matrix surfaces. The number of independent water-carrying zones, represented either as planar fractures or tube-like veins, may be greater than one, and the sorption processes are described by either linear or non-linear Freundlich isotherms assuming instantaneous sorption equilibrium. The diffusion of the tracer out of the water-carrying zones into the connected pore space of the adjacent rock is calculated perpendicular to the direction of the advective/dispersive flow. In the analysis, the fluid flow parameters are calibrated against the measured breakthrough curves for the conservative tracer (iodide). Subsequent fits to the experimental data for the two sorbing tracers, strontium and cesium, then involve element-dependent parameters providing information on the sorption processes and on their representation in the model. The methodology of fixing all parameters except those for sorption with breakthrough curves for non-sorbing tracers generally worked well. The investigation clearly demonstrates the necessity of taking into account pump flow rate variations at both boundaries; if this is not done, reliable conclusions on transport mechanisms or geometrical factors cannot be achieved. A two-flow-path model reproduces the measured data much better than a single-flow-path concept. (author) figs., tabs., 26 refs
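For orientation, the breakthrough curve of a conservative tracer in a single advective-dispersive flow path (no matrix diffusion) can be sketched with the leading term of the textbook Ogata-Banks solution, C/C0 = 0.5*erfc((x - v*t)/(2*sqrt(D*t))). This is far simpler than the report's dual-porosity model, and the parameters are illustrative only.

```python
import math

# Leading-term Ogata-Banks breakthrough for 1D advection-dispersion:
# relative concentration at distance x and time t for velocity v and
# dispersion coefficient D.

def breakthrough(x, t, v, D):
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

x, v, D = 30.0, 1.0, 0.5            # m, m/h, m^2/h (assumed values)
early = breakthrough(x, 10.0, v, D)  # well before the mean arrival time x/v
mid = breakthrough(x, 30.0, v, D)    # exactly at the mean arrival time
late = breakthrough(x, 60.0, v, D)   # well after
```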
Portfolio Effects of Renewable Energies - Basics, Models, Exemplary Results
Energy Technology Data Exchange (ETDEWEB)
Wiese, Andreas; Herrmann, Matthias
2007-07-01
The combination of sites and technologies into so-called renewable energy portfolios, which are developed and implemented under the same financing umbrella, is currently the subject of intense discussion in the finance world. The resulting portfolio effect may allow the prediction of a higher return with the same risk, or the same return with a lower risk, always in comparison with the investment in a single project. Models are currently being developed to analyse this subject and derive the portfolio effect. In particular, the effect of the spatial distribution, as well as the effects of using different technologies, suppliers and cost assumptions with different levels of uncertainty, are of importance. Wind parks, photovoltaics, biomass, biogas and hydropower are being considered. The status of the model development and first results are presented in the current paper. In a first example, the portfolio effect has been calculated and analysed using selected parameters for a wind energy portfolio of 39 sites distributed over Europe. It has been shown that the predicted yield, at exceedance probabilities between 75 and 90%, is 3-8% higher than the sum of the yields of the individual wind parks at the same probabilities. (auth)
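The mechanism behind this effect can be sketched with a Monte Carlo experiment: partially correlated annual yields of several wind parks, compared at a 75% exceedance probability. Because site-to-site deviations partially cancel, the portfolio P75 exceeds the sum of the individual P75 values. All numbers below are illustrative assumptions, not the paper's 39-site portfolio.

```python
import numpy as np

# Portfolio effect sketch: P75 (yield exceeded in 75% of years) of a
# portfolio of correlated wind parks vs the sum of individual P75 values.

rng = np.random.default_rng(42)
n_parks, n_years = 10, 100_000
mean_yield, sd_yield, rho = 100.0, 15.0, 0.4   # GWh, GWh, inter-site correlation

cov = sd_yield**2 * (rho * np.ones((n_parks, n_parks))
                     + (1 - rho) * np.eye(n_parks))
yields = rng.multivariate_normal(np.full(n_parks, mean_yield), cov,
                                 size=n_years)

p75_portfolio = np.percentile(yields.sum(axis=1), 25)   # 25th pct = P75
p75_sum = sum(np.percentile(yields[:, i], 25) for i in range(n_parks))
effect_pct = 100.0 * (p75_portfolio - p75_sum) / p75_sum
```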
Some exact results for the three-layer Zamolodchikov model
International Nuclear Information System (INIS)
Boos, H.E.; Mangazeev, V.V.
2001-01-01
In this paper we continue the study of the three-layer Zamolodchikov model started in our previous works (H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 3041-3054 and H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298). We analyse numerically the solutions to the Bethe ansatz equations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We consider two regimes I and II which differ by the signs of the spherical sides (a_1, a_2, a_3) → (-a_1, -a_2, -a_3). We accept the two-line hypothesis for regime I and the one-line hypothesis for regime II. In the thermodynamic limit we derive integral equations for the distribution densities and solve them exactly. We calculate the partition function for the three-layer Zamolodchikov model and check the compatibility of this result with the functional relations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We also perform some numerical checks of our results.
Preliminary time-phased TWRS process model results
International Nuclear Information System (INIS)
Orme, R.M.
1995-01-01
This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that will further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on tank retrieval sequence will be made in September, 1995.
Model reduction methods for vector autoregressive processes
Brüggemann, Ralf
2004-01-01
1.1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
Hybrid Modeling Method for a DEP Based Particle Manipulation
Directory of Open Access Journals (Sweden)
Mohamad Sawan
2013-01-01
In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fulfills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS simulates the electrical propagation, while MATLAB interprets the results to calculate the cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are coherent with experimental results.
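The particle-motion side of such a pipeline typically starts from the standard textbook expression for the time-averaged DEP force on a spherical particle, F = 2*pi*eps_m*r^3*Re[K]*grad(|E_rms|^2), with the Clausius-Mossotti factor K. The sketch below uses real permittivities for simplicity; it illustrates the textbook formula, not the paper's ANSYS/MATLAB implementation, and the values are assumed.

```python
import math

# Time-averaged DEP force on a spherical particle (textbook form, real
# permittivities): F = 2*pi*eps0*eps_m*r^3 * K * grad(|E_rms|^2).

EPS0 = 8.854e-12   # vacuum permittivity (F/m)

def clausius_mossotti(eps_p, eps_m):
    """Real Clausius-Mossotti factor for relative permittivities."""
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

def dep_force(r, eps_m_rel, K, grad_E2):
    return 2.0 * math.pi * EPS0 * eps_m_rel * r**3 * K * grad_E2

K = clausius_mossotti(2.5, 78.0)    # e.g. polystyrene bead in water
F = dep_force(5e-6, 78.0, K, 1e13)  # 5 um bead, assumed field gradient
```

A negative K (as here) means negative DEP: the particle is pushed away from field maxima.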
Method of modeling the cognitive radio using Opnet Modeler
Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.
2012-01-01
This article is a review of the first wireless standard based on cognitive radio networks, and motivates the need for wireless networks based on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment. Plots verify the performance of the HTTP and FTP protocols in the CR network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business models.
Some results about the dating of pre hispanic mexican ceramics by the thermoluminescence method
International Nuclear Information System (INIS)
Gonzalez M, P.; Mendoza A, D.; Ramirez L, A.; Schaaf, P.
2004-01-01
One of the most frequently recurring questions in archaeometry concerns the age of the studied objects. The first dating methods were based on historical narratives, style of buildings, and manufacturing techniques. However, it has been observed that, as a consequence of continuous irradiation from naturally occurring radioisotopes and from cosmic rays, some materials, such as archaeological ceramics, accumulate a certain quantity of energy. These types of material can, in principle, be dated through the analysis of this accumulated energy; in that case, ceramic dating can be realized by thermoluminescence (TL) dating. In this work, results obtained by our research group on TL dating of ceramics belonging to several archaeological zones, such as Edzna (Campeche), Calixtlahuaca and Teotenango (Mexico State) and Hervideros (Durango), are presented. The analysis was performed in fine-grained mode with a Daybreak model 1100 TL reader system. The radioisotopes that contribute to the accumulated annual dose in ceramic samples (⁴⁰K, ²³⁸U, ²³²Th) were determined by means of techniques such as Energy Dispersive X-ray Spectroscopy (EDS) and Neutron Activation Analysis (NAA). Our results agree with results obtained through other methods. (Author) 7 refs., 2 tabs., 5 figs.
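The age equation underlying TL dating, as used in studies like this one, is simply accumulated dose divided by annual dose rate. The numbers below are illustrative, not the paper's measurements.

```python
# TL age equation: age (years) = paleodose (Gy) / annual dose rate (Gy/yr).

def tl_age(paleodose_gy, annual_dose_gy_per_yr):
    return paleodose_gy / annual_dose_gy_per_yr

age = tl_age(3.2, 4.0e-3)   # e.g. 3.2 Gy accumulated at 4 mGy per year
```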
Influence of Meibomian Gland Expression Methods on Human Lipid Analysis Results.
Kunnen, Carolina M E; Brown, Simon H J; Lazon de la Jara, Percy; Holden, Brien A; Blanksby, Stephen J; Mitchell, Todd W; Papas, Eric B
2016-01-01
To compare the lipid composition of human meibum across three different meibum expression techniques. Meibum was collected from five healthy non-contact lens wearers (aged 20-35 years) after cleaning the eyelid margin using three meibum expression methods: cotton buds (CB), meibomian gland evaluator (MGE) and meibomian gland forceps (MGF). Meibum was also collected using cotton buds without cleaning the eyelid margin (CBn). Lipids were analyzed by chip-based, nano-electrospray mass spectrometry (ESI-MS). Comparisons were made using linear mixed models. Tandem MS enabled identification and quantification of over 200 lipid species across ten lipid classes. There were significant differences between collection techniques in the relative quantities of polar lipids obtained (P<.05). The MGE method returned smaller polar lipid quantities than the CB approaches. No significant differences were found between techniques for nonpolar lipids. No significant differences were found between cleaned and non-cleaned eyelids for polar or nonpolar lipids. Meibum expression technique influences the relative amount of phospholipids in the resulting sample. The highest amounts of phospholipids were detected with the CB approaches and the lowest with the MGE technique. Cleaning the eyelid margin prior to expression was not found to affect the lipid composition of the sample. This may be a consequence of the more forceful expression resulting in cell membrane contamination or higher risk of tear lipid contamination as a result of reflex tearing. Copyright © 2016 Elsevier Inc. All rights reserved.
Steady-state transport equation resolution by particle methods, and numerical results
International Nuclear Information System (INIS)
Mercier, B.
1985-10-01
A method to solve the steady-state transport equation is given and its principles are presented. The method is studied in two different cases; estimates given by the theory are compared to numerical results. Results obtained in 1-D (spherical geometry) and in 2-D (axisymmetric geometry) are given [fr
A Modeling Method of Fluttering Leaves Based on Point Cloud
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: the rotation falling, the roll falling and the screw roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
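The three basic trajectories named in the abstract can be sketched as parametric paths. The parameterizations below (angular rate, sway radius, descent speed) are hypothetical illustrations, not the paper's equations:

```python
import math

def rotation_fall(t, omega=2.0, vz=1.0):
    """Leaf spins about its own axis while descending vertically.
    Returns (x, y, z, rotation angle); all constants are illustrative."""
    return (0.0, 0.0, -vz * t, omega * t)

def roll_fall(t, radius=0.5, omega=2.0, vz=1.0):
    """Leaf sways from side to side while descending."""
    return (radius * math.sin(omega * t), 0.0, -vz * t, omega * t)

def screw_roll_fall(t, radius=0.5, omega=2.0, vz=1.0):
    """Leaf descends along a helix (screw roll)."""
    return (radius * math.cos(omega * t), radius * math.sin(omega * t),
            -vz * t, omega * t)
```

Sampling these paths per leaf is the kind of per-element work that parallelizes trivially with OpenMP, as the abstract reports.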
Computational mathematics models, methods, and analysis with Matlab and MPI
White, Robert E
2004-01-01
Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...
Model of coupling with core in the Green function method
International Nuclear Information System (INIS)
Kamerdzhiev, S.P.; Tselyaev, V.I.
1983-01-01
Models of core coupling in the Green function method, which generalize the conventional random phase approximation (RPA) by taking into account configurations more complex than one-particle-one-hole (1p1h) ones, are considered. Odd nuclei are studied only to the extent that the problem for an odd nucleus is solved in terms of the neighbouring even-even nucleus. A microscopic model is considered that accounts for retardation effects in the mass operator M = M(epsilon); it corresponds to accounting for the influence of these effects only on the change of quasiparticle behaviour in a magic nucleus relative to the behaviour described by the pure core model. This change results in the fragmentation of single-particle levels, which is the main effect, and in the necessity of using a new basis instead of the shell-model one, which corresponds to the bare quasiparticles. No concrete form of the mass operator M(epsilon) is used in the derivation of the formulas
Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.
2004-03-01
In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique is explained to estimate the partition between convective and stratiform precipitation, without requiring any ancillary measurement. The method is general purpose, but it is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both from fully simulated experiments and from experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
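The 'gently' increasing nudging weight described above can be sketched as Newtonian relaxation toward the observation-derived value, with a weight that ramps up over the assimilation window. The function names, ramp shape and time constants below are illustrative assumptions, not the RAMS implementation:

```python
def nudge_increment(state, observed, t, tau=3600.0, ramp=1800.0):
    """One nudging increment: relax `state` toward `observed` with a
    relaxation coefficient (1/tau) whose weight grows linearly with time
    until t = ramp, i.e. the weight of the estimated precipitation is
    'gently' increased. All constants are illustrative."""
    weight = min(t / ramp, 1.0) / tau  # s^-1
    return weight * (observed - state)

# Early in the window the correction is small; later it is stronger.
inc_early = nudge_increment(1.0, 2.0, t=180.0)
inc_late = nudge_increment(1.0, 2.0, t=3600.0)
```

The increment would be added to the model tendency at each step, pulling the forecast toward the satellite rainfall estimate without shocking it.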
The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions
Energy Technology Data Exchange (ETDEWEB)
Cognata, M. La; Pizzone, R. G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Spitaleri, C.; Cherubini, S.; Romano, S. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Gulino, M.; Tumino, A. [Kore University, Enna, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Lamia, L. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy)
2014-05-09
Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest, to be included in models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required, as this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely, the Coulomb barrier exponentially suppressing the cross section and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances may dramatically enhance the astrophysical S(E)-factor, so, when they occur right at astrophysical energies, their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances may introduce large systematic errors into nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance.
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
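The Kronecker product modeling mentioned in the abstract builds a large network by repeatedly taking the Kronecker power of a small initiator adjacency matrix (R-MAT is its stochastic cousin). A minimal deterministic sketch, with an initiator chosen purely for illustration:

```python
import numpy as np

def kronecker_graph(initiator, k):
    """Deterministic Kronecker graph: the k-fold Kronecker power of a
    small initiator adjacency matrix. Self-similar structure at every
    scale is exactly what multilevel methods try to exploit or avoid."""
    a = np.array(initiator, dtype=float)
    result = a.copy()
    for _ in range(k - 1):
        result = np.kron(result, a)
    return result

# A 2x2 initiator grows to a 2^k x 2^k adjacency matrix.
adj = kronecker_graph([[1, 1], [1, 0]], 3)
```

Because every level replicates the initiator's pattern, edge counts multiply (here 3 edges per level, 3^3 = 27 entries in the power), which illustrates the abstract's point that replication at the finest scale fixes the coarse-scale geometry as well.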
Modelling lung cancer due to radon and smoking in WISMUT miners: Preliminary results
International Nuclear Information System (INIS)
Bijwaard, H.; Dekkers, F.; Van Dillen, T.
2011-01-01
A mechanistic two-stage carcinogenesis model has been applied to model lung-cancer mortality in the largest uranium-miner cohort available. Models with and without smoking action both fit the data well. As smoking information is largely missing from the cohort data, a method has been devised to project this information from a case-control study onto the cohort. Model calculations using 256 projections show that the method works well. Preliminary results show that if an explicit smoking action is absent in the model, this is compensated by the values of the baseline parameters. This indicates that in earlier studies performed without smoking information, the results obtained for the radiation parameters are still valid. More importantly, the inclusion of smoking-related parameters shows that these mainly influence the later stages of lung-cancer development. (authors)
Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.
2014-01-01
We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically
METHODICAL MODEL FOR TEACHING BASIC SKI TURN
Directory of Open Access Journals (Sweden)
Danijela Kuna
2013-07-01
Full Text Available With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, a chi-square test was used to examine the differences between the frequencies of the chosen operators, the differences between the values of the most important operators, and the differences between experts according to their nationality. Statistically significant differences were found between the frequencies of the chosen operators (χ² = 24.61; p = 0.01), while differences between the values of the most important operators were not evident (χ² = 1.94; p = 0.91). Differences between experts with respect to nationality were only noticeable in the expert evaluation of the ski poles on neck operator (χ² = 7.83; p = 0.02). The results of the current research provide useful information about the methodological principles of organizing basic ski turn instruction in ski schools.
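The frequency comparison reported above is a Pearson chi-square test against uniform expected frequencies. A minimal sketch with hypothetical operator counts (the paper's raw frequencies are not given in the abstract, only its statistic χ² = 24.61):

```python
def chi_square_statistic(observed, expected=None):
    """Pearson chi-square statistic over observed frequencies; uniform
    expected frequencies by default, i.e. a test of equal operator choice."""
    if expected is None:
        mean = sum(observed) / len(observed)
        expected = [mean] * len(observed)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical choice counts for the 6 operators (illustrative only).
chi2 = chi_square_statistic([30, 25, 20, 15, 6, 4])
```

The statistic would then be compared against the chi-square distribution with 5 degrees of freedom to obtain the p-value.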
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Arctic curves in path models from the tangent method
Di Francesco, Philippe; Lapa, Matthew F.
2018-04-01
Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
Directory of Open Access Journals (Sweden)
Malinowski Paweł
2014-12-01
Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method's unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that affect the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven inadequate for this problem. Only the use of modern data mining methods offers hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further directions of research into increasing the efficiency of predicting the result of the treatment process.
Evaluation of internal noise methods for Hotelling observer models
International Nuclear Information System (INIS)
Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.
2007-01-01
The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality
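One of the strategies described above, zero-mean internal noise added to the decision variable with standard deviation proportional to the decision variable's standard deviation over external noise, can be sketched directly. The proportionality constant and seeding are illustrative assumptions:

```python
import random

def add_decision_variable_noise(dv_external, prop=0.5, seed=0):
    """Add zero-mean Gaussian internal noise to each decision variable,
    with sigma proportional to the std of the decision variables across
    external-noise trials (one of the strategies in the abstract).
    `prop` is an illustrative proportionality constant."""
    rng = random.Random(seed)
    n = len(dv_external)
    mean = sum(dv_external) / n
    var = sum((d - mean) ** 2 for d in dv_external) / n
    sigma_internal = prop * var ** 0.5
    return [d + rng.gauss(0.0, sigma_internal) for d in dv_external]

noisy = add_decision_variable_noise([1.0, 2.0, 3.0])
```

In the channel-noise strategy the same idea is applied per channel output before the channel responses are combined, rather than to the combined decision variable.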
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
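The multivariate NTCP models compared above are typically logistic in form; LASSO differs from stepwise selection in shrinking some coefficients exactly to zero, which is why it stays interpretable. A sketch of the model form only, with hypothetical predictors and coefficients (not the paper's fitted values):

```python
import math

def ntcp_logistic(features, coeffs, intercept):
    """Multivariate NTCP as a logistic model: NTCP = 1 / (1 + exp(-s)),
    with s = intercept + sum(coef * feature). A LASSO fit would produce
    a sparse `coeffs` vector; all numbers here are illustrative."""
    s = intercept + sum(c * x for c, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical predictors: mean parotid dose (Gy) and patient age, with a
# LASSO-style sparse coefficient vector (the age coefficient shrunk to 0).
p_xerostomia = ntcp_logistic([30.0, 60.0], [0.1, 0.0], -2.0)
```

The repeated cross-validation mentioned in the abstract would evaluate how well such fitted probabilities rank observed complications, independently of the training data.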
Method for modeling post-mortem biometric 3D fingerprints
Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.
2016-05-01
Despite the advancements of fingerprint recognition in 2-D and 3-D domain, authenticating deformed/post-mortem fingerprints continue to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully operated by a medium to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as palm, foot, tongue etc. for security and administrative applications.
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
Full Text Available In the past, proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate the different aging properties and stochastic comparisons of the model.
A New Method for a Virtue-Based Responsible Conduct of Research Curriculum: Pilot Test Results.
Berling, Eric; McLeskey, Chet; O'Rourke, Michael; Pennock, Robert T
2018-02-03
Drawing on Pennock's theory of scientific virtues, we are developing an alternative curriculum for training scientists in the responsible conduct of research (RCR) that emphasizes internal values rather than externally imposed rules. This approach focuses on the virtuous characteristics of scientists that lead to responsible and exemplary behavior. We have been pilot-testing one element of such a virtue-based approach to RCR training by conducting dialogue sessions, modeled upon the approach developed by Toolbox Dialogue Initiative, that focus on a specific virtue, e.g., curiosity and objectivity. During these structured discussions, small groups of scientists explore the roles they think the focus virtue plays and should play in the practice of science. Preliminary results have shown that participants strongly prefer this virtue-based model over traditional methods of RCR training. While we cannot yet definitively say that participation in these RCR sessions contributes to responsible conduct, these pilot results are encouraging and warrant continued development of this virtue-based approach to RCR training.
Dynamic spatial panels : models, methods, and inferences
Elhorst, J. Paul
This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent
Methods of Medical Guidelines Modelling in GLIF.
Czech Academy of Sciences Publication Activity Database
Buchtela, David; Anger, Z.; Peleška, Jan (ed.); Tomečková, Marie; Veselý, Arnošt; Zvárová, Jana
2005-01-01
Roč. 11, - (2005), s. 1529-1532 ISSN 1727-1983. [EMBEC'05. European Medical and Biomedical Conference /3./. Prague, 20.11.2005-25.11.2005] Institutional research plan: CEZ:AV0Z10300504 Keywords : medical guidelines * knowledge modelling * GLIF model Subject RIV: BD - Theory of Information
DEFF Research Database (Denmark)
Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo
2016-01-01
… influence reports of cognitive decline. METHODS: The Subjective Memory Complaints Scale (SMC) and The Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently and raters were blinded … decline. Depression scores were significantly correlated to both scales measuring subjective decline. Linear regression models showed that age did not have a significant contribution to the variance in subjective memory beyond that of depressive symptoms. CONCLUSIONS: Measures for subjective cognitive … decline are not interchangeable when used in memory clinics and the application of different scales in previous studies is an important factor as to why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration …
Tapering of the CHESS-APS undulator: Results and modelling
International Nuclear Information System (INIS)
Lai, B.; Viccaro, P.J.; Dejus, R.; Gluskin, E.; Yun, W.B.; McNulty, I.; Henderson, C.; White, J.; Shen, Q.; Finkelstein, K.
1992-01-01
When the magnetic gap of an undulator is tapered along the beam direction, the slowly varying peak field B₀ introduces a spread in the value of the deflection parameter K. The result is a broad energy-band undulator that still maintains a high degree of spatial collimation. These properties are very useful for EXAFS and energy-dispersive techniques. We have characterized the CHESS-APS undulator (λu = 3.3 cm) in one tapered configuration (a 10% change of the magnetic gap from one end of the undulator to the other). Spatial distributions and energy spectra of the first three harmonics through a pinhole were measured. The on-axis first-harmonic width increased from 0.27 keV to 0.61 keV (FWHM) at the central energy E₁ = 6.6 keV (average K = 0.69). Broadening of the angular distribution due to tapering was minimal. These results will be compared with computer modelling which simulates the actual electron trajectory in the tapered case
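The numbers in the abstract are tied together by the standard planar-undulator relation for the on-axis first-harmonic energy. A sketch using the quoted λu = 3.3 cm and K = 0.69; the ~5.3 GeV electron energy is an assumption about the CESR ring, not stated in the abstract:

```python
def first_harmonic_kev(e_gev, lambda_u_cm, k):
    """On-axis first-harmonic photon energy of a planar undulator,
    E1 [keV] ~ 0.95 * E^2 [GeV^2] / (lambda_u [cm] * (1 + K^2/2)).
    Tapering the gap spreads K along the device, which spreads E1 and
    broadens the harmonic, as measured in the abstract."""
    return 0.95 * e_gev ** 2 / (lambda_u_cm * (1.0 + k ** 2 / 2.0))

# Quoted undulator parameters; 5.3 GeV is an assumed electron energy.
e1 = first_harmonic_kev(5.3, 3.3, 0.69)  # close to the quoted 6.6 keV
```

Because E₁ depends on K through the 1 + K²/2 term, a 10% gap taper maps directly into the measured 0.27 → 0.61 keV growth of the harmonic width.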
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
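Huffman scanning as described above assigns each symbol a binary code whose length tracks its probability under the language model, so each switch press resolves one bit and frequent letters cost fewer presses than row/column scanning. A minimal sketch of the code construction (a toy unigram model, not the authors' implementation):

```python
import heapq

def huffman_codes(probs):
    """Build binary Huffman codes from symbol probabilities. In Huffman
    scanning, each yes/no switch press follows one bit of the target
    symbol's code, so expected presses approach the model's entropy."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Toy unigram model: the frequent symbol gets the shortest code.
codes = huffman_codes({"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.10})
```

With a contextual statistical language model the probabilities, and hence the codes, would be recomputed after every selection, which is where the reported speedups over fixed row/column orders come from.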
Reflexion on linear regression trip production modelling method for ensuring good model quality
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The research results are as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that sample composition can significantly change the model. Hence a good R2 value does not, in fact, always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
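The confidence interval of the predicted value mentioned above can be computed in closed form for simple linear regression. A stdlib-only sketch follows; the trip-production data are invented, and the Student-t critical value is passed in by the caller (an assumption made to avoid a statistics library):

```python
import math

def fit_and_predict_ci(xs, ys, x0, t_crit):
    """Ordinary least squares y = a + b*x, plus the confidence
    interval of the predicted mean value at x0.

    t_crit is the two-sided Student-t critical value for n-2
    degrees of freedom at the chosen confidence level.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    # residual standard error
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))
    # standard error of the predicted mean widens away from the mean of x
    se = s * math.sqrt(1 / n + (x0 - mx) ** 2 / sxx)
    yhat = a + b * x0
    return yhat, yhat - t_crit * se, yhat + t_crit * se

# hypothetical survey: trips per day vs. household size, n = 8
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]
pred, lo, hi = fit_and_predict_ci(xs, ys, 4.5, t_crit=2.447)
```

Note that the interval widens for predictions far from the sample mean, which is exactly why a model with an excellent R2 fitted on a narrow or unrepresentative sample can still predict poorly.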
International Nuclear Information System (INIS)
Gaines, Irwin; Qian Sijin
2001-01-01
This is an update of the report on an Object Oriented (OO) track reconstruction model presented at AIHENP'99 in Crete, Greece. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It has been coded in the C++ programming language and successfully implemented in several different OO computing environments of the CMS and ATLAS experiments at the future Large Hadron Collider at CERN. We report: (1) more performance results; (2) the implementation of the OO model in the new OO software framework 'Athena' of the ATLAS experiment, and some upgrades of the OO model itself
Numerical Modelling of the Special Light Source with Novel R-FEM Method
Directory of Open Access Journals (Sweden)
Pavel Fiala
2008-01-01
This paper presents information about new directions in the modelling of lighting systems and an overview of methods for modelling them. The novel R-FEM method is described, which is a combination of the Radiosity method and the Finite Element Method (FEM). The paper contains modelling results and their verification by experimental measurements and by Matlab simulation for this R-FEM method.
Modelling of tracer-kinetic results using xylene isomerization as an example
International Nuclear Information System (INIS)
Bauer, F.J.; Dermietzel, J.; Roesseler, M.; Koch, H.
1976-01-01
The analysis of results from differential and/or integral reactor experiments often admits several interpretations of a chemical reaction. In addition, the use of mathematical methods for model selection and the planning of experiments is made more difficult by the wide confidence intervals of the estimated model parameters. The application of radioactively labelled molecules improves both the knowledge of reaction mechanisms and the assessment of the parameters obtained. This is shown using the modelling of xylene isomerization as an example. (author)
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
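Model-based localization of this kind can be sketched with a simple log-distance path-loss model and a grid search over candidate positions. MFAM itself uses an architectural model of the building, so the free-space-style propagation below and all coordinates, frequencies, and transmit powers are invented for illustration only:

```python
import math

def path_loss_dbm(d, f_mhz, tx_dbm=0.0, n=2.0):
    """Log-distance received power: free-space reference loss at
    1 m plus distance-dependent decay with path-loss exponent n."""
    fspl_1m = 20 * math.log10(f_mhz) - 27.55
    return tx_dbm - fspl_1m - 10 * n * math.log10(max(d, 0.1))

def locate(aps, readings, step=0.25, size=10.0):
    """Grid-search the position whose predicted multi-frequency
    RSS best matches the readings (least squares over all signals)."""
    best, best_err = None, float("inf")
    steps = int(size / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = 0.0
            for (ax, ay, f), rssi in zip(aps, readings):
                d = math.hypot(x - ax, y - ay)
                err += (path_loss_dbm(d, f) - rssi) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# three hypothetical transmitters: two 2400 MHz Wi-Fi, one 868 MHz HomeMatic
aps = [(0.0, 0.0, 2400.0), (10.0, 0.0, 2400.0), (5.0, 10.0, 868.0)]
true = (4.0, 3.0)
readings = [path_loss_dbm(math.hypot(true[0] - ax, true[1] - ay), f)
            for ax, ay, f in aps]
est = locate(aps, readings)
```

Mixing frequencies works in this framing because each signal contributes an independent residual term; in practice wall attenuation is frequency-dependent, which is what the architectural model captures.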
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method
Directory of Open Access Journals (Sweden)
Jure Tuta
2018-03-01
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
Demixing in a metal halide lamp, results from modelling
Beks, M.L.; Hartgers, A.; Mullen, van der J.J.A.M.
2006-01-01
Convection and diffusion in the discharge region of a metal halide lamp are studied using a computer model built with the plasma modelling package Plasimo. A model lamp containing mercury and sodium iodide is studied. The effects of the total lamp pressure on the degree of segregation of the light
A Duality Result for the Generalized Erlang Risk Model
Directory of Open Access Journals (Sweden)
Lanpeng Ji
2014-11-01
In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.
Combining static and dynamic modelling methods: a comparison of four methods
Wieringa, Roelf J.
1995-01-01
A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current
A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods
Directory of Open Access Journals (Sweden)
Michael Amberg
1996-11-01
The paper describes a pattern-oriented approach to evaluating modeling methods and comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template modelled on descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
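The two iteration families compared above differ only in a relaxation factor: SOR with omega = 1 reduces to Gauss-Seidel. A minimal stdlib sketch on a small Z-matrix (nonpositive off-diagonal entries, here also diagonally dominant so both iterations converge); the matrix and right-hand side are invented:

```python
def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """SOR iteration for Ax = b; omega = 1 gives Gauss-Seidel.
    Returns the solution and the number of sweeps used."""
    n = len(b)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            return x, k
    return x, max_iter

# a small Z-matrix example (exact solution is x = [1, 2, 3])
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x_gs, it_gs = sor_solve(A, b, omega=1.0)   # Gauss-Seidel
x_sor, it_sor = sor_solve(A, b, omega=1.1) # over-relaxed SOR
```

The paper's preconditioned variants multiply A and b by a preconditioner before running the same sweeps; the comparison theorems then order the spectral radii of the resulting iteration matrices.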
Modelling a gamma irradiation process using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
In a gamma irradiation service, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. Through such models, the prediction of the dose delivered to a specific product, irradiated in a specific position during a certain period of time, becomes possible, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
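The flavour of such a Monte Carlo calculation can be shown on the simplest possible case: photon transmission through a single homogeneous slab, where the simulated answer can be checked against the Beer-Lambert law. This is not the CDTN facility geometry; the attenuation coefficient below is an assumed value for ~1.25 MeV Co-60 photons in water:

```python
import math
import random

def transmitted_fraction(mu, thickness, n=100_000, seed=1):
    """Monte Carlo estimate of the fraction of photons crossing a
    slab without interacting. mu is the linear attenuation
    coefficient (1/cm), thickness in cm; analytic answer exp(-mu*t)."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        # free path length sampled from the exponential distribution
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            passed += 1
    return passed / n

mu_water = 0.0632  # 1/cm, assumed value for ~1.25 MeV photons in water
est = transmitted_fraction(mu_water, 10.0)
analytic = math.exp(-mu_water * 10.0)
```

A production code tracks scattered photons and scores energy deposition in voxels of the real product geometry, but the sampling step above is the same building block.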
Direct numerical methods of mathematical modeling in mechanical structural design
International Nuclear Information System (INIS)
Sahili, Jihad; Verchery, Georges; Ghaddar, Ahmad; Zoaeter, Mohamed
2002-01-01
Structural design and numerical methods are generally interactive, requiring optimization procedures as the structure is analyzed. This analysis leads to the definition of some mathematical terms, such as the stiffness matrix, which result from the modelling and are then used in numerical techniques during the dimensioning procedure. These techniques, and many others, involve the calculation of the generalized inverse of the stiffness matrix, also called the 'compliance matrix'. The aim of this paper is first to introduce some existing mathematical procedures used to calculate the compliance matrix from the stiffness matrix, then to apply direct numerical methods to solve the obtained system with the lowest computational time, and to compare the obtained results. The results show a large difference in computational time between the different procedures
Modelling a gamma irradiation process using the Monte Carlo method
International Nuclear Information System (INIS)
Soares, Gabriela A.; Pereira, Marcio T.
2011-01-01
In a gamma irradiation service, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. Through such models, the prediction of the dose delivered to a specific product, irradiated in a specific position during a certain period of time, becomes possible, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
Waste glass corrosion modeling: Comparison with experimental results
International Nuclear Information System (INIS)
Bourcier, W.L.
1994-01-01
Models for borosilicate glass dissolution must account for the processes of (1) kinetically controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory, embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models is necessary before the models can be used in repository performance assessment calculations
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
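The additive Holt-Winters smoother named above can be written in a few lines. The sketch below is a minimal version: the level/trend/season initialization uses the simplest first-cycle averages (an assumption of this sketch, not necessarily the paper's choice), and the test series is an invented linear trend with a period-4 cycle rather than orbital residuals:

```python
def holt_winters_additive(xs, period, alpha=0.3, beta=0.1, gamma=0.2):
    """Minimal additive Holt-Winters smoother; returns the
    one-step-ahead forecast after processing the series."""
    # crude initialization from the first two seasonal cycles
    level = sum(xs[:period]) / period
    trend = (sum(xs[period:2 * period]) - sum(xs[:period])) / period ** 2
    season = [x - level for x in xs[:period]]
    for i in range(len(xs)):
        s = season[i % period]
        prev_level = level
        level = alpha * (xs[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[i % period] = gamma * (xs[i] - level) + (1 - gamma) * s
    return level + trend + season[len(xs) % period]

# noiseless toy series: linear growth plus a period-4 seasonal cycle
series = [10 + 0.5 * t + [0, 2, -1, -1][t % 4] for t in range(24)]
forecast = holt_winters_additive(series, period=4)
```

In the hybrid propagator, the "series" is the periodic residual between the cheap analytical theory and the truth, so the smoother supplies exactly the missing dynamics.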
Soybean yield modeling using bootstrap methods for small samples
Energy Technology Data Exchange (ETDEWEB)
Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.
2016-11-01
One of the problems that occur when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a set of soybean yield data and physical and chemical soil properties, formed from a small number of samples, to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters, and identify the points that had great influence on the estimated parameters. (Author)
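The non-parametric bootstrap referred to above resamples the observed data with replacement and reads confidence limits off the empirical distribution of the statistic. A stdlib-only percentile-bootstrap sketch, with invented yield values standing in for the study's data:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=7):
    """Non-parametric percentile bootstrap confidence interval for
    an arbitrary statistic; no distributional assumption on data."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# hypothetical soybean yields (t/ha) from a small sample
yields = [2.9, 3.1, 3.4, 2.7, 3.8, 3.0, 3.3, 2.8, 3.6, 3.2]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(yields, mean)
```

The same resampling loop drives the paper's variable selection and influence diagnostics: refit the regression on each bootstrap sample and look at how often a coefficient is significant or how much a point shifts the estimates.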
A hierarchical network modeling method for railway tunnels safety assessment
Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin
2017-02-01
Using network theory to model risk-related knowledge of accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers both the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper presents a further development of accident causation network modelling methods and can provide guidance for specific maintenance measures.
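The Apriori step can be illustrated on toy data. Each "transaction" below is an invented list of defects observed in one tunnel section (the paper's improved variant adds pruning specific to the HN model; this is the textbook level-wise algorithm):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent-itemset mining: size-k candidates are built only
    from frequent (k-1)-itemsets, pruning the search level by level."""
    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        # join step, then prune candidates with an infrequent subset
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = [c for c in candidates
                 if all(frozenset(sub) in frequent
                        for sub in combinations(c, k - 1))
                 and support(c) >= min_support]
    return frequent

# toy defect records, one frozenset per inspected tunnel section
data = [frozenset(t) for t in
        [{"crack", "leakage"}, {"crack", "leakage", "spalling"},
         {"crack"}, {"leakage"}, {"crack", "leakage"}]]
freq = apriori(data, min_support=0.6)
```

Here {crack, leakage} survives with support 0.6, which is the kind of structure-defect correlation the RRT is built from.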
Argonne Fuel Cycle Facility ventilation system -- modeling and results
International Nuclear Information System (INIS)
Mohr, D.; Feldman, E.E.; Danielson, W.F.
1995-01-01
This paper describes an integrated study of the Argonne-West Fuel Cycle Facility (FCF) interconnected ventilation systems during various operations. Analyses and test results first cover a nominal condition reflecting balanced pressures and flows, followed by several infrequent and off-normal scenarios. This effort is the first study of the FCF ventilation systems as an integrated network in which the hydraulic effects of all major air systems have been analyzed and tested. The FCF building consists of many interconnected regions in which nuclear fuel is handled, transported and reprocessed. The ventilation systems comprise a large number of ducts, fans, dampers, and filters which together must provide clean, properly conditioned air to the worker-occupied spaces of the facility while preventing the spread of airborne radioactive materials to clean areas or the atmosphere. This objective is achieved by keeping the FCF building at a partial vacuum in which the contaminated areas are kept at lower pressures than the other worker-occupied spaces. The ventilation systems of FCF and the EBR-II reactor are analyzed as an integrated totality, as demonstrated. We then developed the network model shown in Fig. 2 for the TORAC code. The scope of this study was to assess the measured results from the acceptance/flow-balancing testing and to predict the effects of power failures, hatch and door openings, single-failure faulted conditions, EBR-II isolation, and other infrequent operations. The studies show that the FCF ventilation systems are very controllable and remain stable following off-normal events. In addition, the FCF ventilation system complex is essentially immune to reverse flows and the spread of contamination to clean areas during normal and off-normal operation
Final model independent result of DAMA/LIBRA-phase1
Energy Technology Data Exchange (ETDEWEB)
Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma 'Tor Vergata', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma 'Tor Vergata', Rome (Italy); Belli, P. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma 'La Sapienza', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Universita di Roma 'Tor Vergata', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)
2013-12-15
The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
Innovation ecosystem model for commercialization of research results
Directory of Open Access Journals (Sweden)
Vlăduţ Gabriel
2017-07-01
Innovation means creativity and added value recognised by the market. The first step in creating a sustainable mechanism for the commercialization of research results (technological transfer, TT) is, on the one hand, to define the "technology" which will be transferred and, on the other hand, to define the context in which the TT mechanism works: the ecosystem. The focus must be set on technology as an entity, not as a science or a study of the practical industrial arts, and certainly not any specific applied science. The transfer object, the technology, must rely on a subjectively determined but specifiable set of processes and products. Focusing on the product alone is not sufficient for the transfer and diffusion of technology: it is not merely the product that is transferred but also knowledge of its use and application. The innovation ecosystem model brings together new companies, experienced business leaders, researchers, government officials, established technology companies, and investors. This environment provides those new companies with a wealth of technical expertise, business experience, and access to capital that supports innovation in the early stages of growth.
Investigating the performance of directional boundary layer model through staged modeling method
Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee
2011-04-01
The feasibility of the directional boundary layer model (BLM) has been investigated in many papers [4][5][6]. Instead of fitting the parameters to the wafer critical dimensions (CD) directly, we tried to use the aerial image (AI) from a rigorous simulator with an electromagnetic field (EMF) solver. This kind of method is usually known as the staged modeling method. To see the advantages of this method we conducted several experiments and compared the results with those of fitting to the wafer CD directly. Through these tests we observed some remarkable results and confirmed that staged modeling performs better in many ways.
Optimization Method of Fusing Model Tree into Partial Least Squares
Directory of Open Access Journals (Sweden)
Yu Fang
2017-01-01
Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields because of its features: multiple independent variables, multiple dependent variables, and non-linearity. Model Tree (MT), however, being made up of many piecewise linear segments, adapts well to nonlinear functions. Based on this, a new method combining PLS and MT to analyse and predict data is proposed, which builds a Model Tree from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further Model Trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction used to treat asthma and cough, together with two sample sets from the UCI Machine Learning Repository, the experimental results show that both the explanatory and the predictive ability are improved by the new method.
Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo; Waldemar, Gunhild
2016-09-01
Cognitive complaints occur frequently in elderly people and may be a risk factor for dementia and cognitive decline. Results from studies on subjective cognitive decline are difficult to compare due to variability in assessment methods, and little is known about how different methods influence reports of cognitive decline. The Subjective Memory Complaints Scale (SMC) and the Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently, and raters were blinded to the results from the other scale. The scales were not used for diagnostic classification. Cognitive performance and depressive symptoms were also rated. We studied the association between the two measures and investigated the scales' relation to depressive symptoms, age, and cognitive status. SMC and MAC-Q were significantly associated (r = 0.44, N = 121, p = 0.015), and both scales had a wide range of scores. In this mixed cohort of patients, younger age was associated with higher SMC scores. There were no significant correlations between cognitive test performance and the scales measuring subjective decline. Depression scores were significantly correlated with both scales measuring subjective decline. Linear regression models showed that age did not make a significant contribution to the variance in subjective memory beyond that of depressive symptoms. Measures of subjective cognitive decline are not interchangeable when used in memory clinics, and the application of different scales in previous studies is an important factor in why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration should be given to which questions are relevant and valid when operationalizing subjective cognitive decline.
Multiscale modeling of porous ceramics using movable cellular automaton method
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method of modern computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of the small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to obtain the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behaviour of the model sample at the macroscale.
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information on the local geological structure of the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach, used to model the seismic wave propagation through the local sedimentary structure of the target site, make it possible to extend the modelling to the higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Statistical Method to Overcome Overfitting Issue in Rational Function Models
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models, extensively applied in geometric correction of satellite images and map production. In the case of terrain-dependent RFMs, overfitting is a common issue that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, this study proposes a fast and robust statistical approach and compares it to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to overfitting. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy; indeed, it shows an improvement of 50-80% over TR.
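The Tikhonov regularization baseline mentioned above can be illustrated on a toy over-parameterised fit. This is not the paper's RFM setup (the design matrix, noise level, and damping weight below are invented for illustration), and the paper's own remedy — retaining only statistically significant parameters — is not shown; the sketch only demonstrates how TR tames an ill-posed least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for a terrain-dependent RFM: many polynomial terms, few control points
x = rng.uniform(-1, 1, 30)
y = 1.0 + 0.5 * x - 0.3 * x**2 + rng.normal(0, 0.01, x.size)
A = np.vander(x, 20)                      # over-parameterised design matrix

# plain least squares: ill-conditioned, prone to overfitting
coef_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Tikhonov regularization: damp the normal equations with lam * I
lam = 1e-3
coef_tr = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
```

The regularized coefficient vector is strictly smaller in norm than the least-squares one, which is exactly the shrinkage effect TR relies on.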
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
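The core of projection-onto-convex-sets (POCS) restoration is alternating projections onto constraint sets. A minimal 1-D sketch, assuming two classical convex sets (band-limited signals, and signals consistent with the known samples) and an invented toy signal with a missing gap, is:

```python
import numpy as np

def pocs_restore(observed, known_mask, band_mask, n_iter=500):
    """Alternate projections: (1) enforce the band-limit in Fourier space,
    (2) re-impose the known samples in signal space."""
    x = observed.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                     # projection onto band-limited set
        x = np.fft.ifft(X).real
        x[known_mask] = observed[known_mask]    # projection onto data-consistent set
    return x

# toy example: band-limited signal with a gap of missing samples
n = 128
t = np.arange(n)
true = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
known = np.ones(n, bool); known[45:55] = False
band = np.zeros(n, bool); band[:8] = True; band[-7:] = True
obs = np.where(known, true, 0.0)
rec = pocs_restore(obs, known, band)
```

With only these two constraints the iteration reduces to Papoulis-Gerchberg extrapolation; the strength of POCS is that further a priori knowledge (positivity, amplitude bounds, support) can be added as extra projections.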
A Method to Identify Flight Obstacles on Digital Surface Model
Institute of Scientific and Technical Information of China (English)
ZHAO Min; LIN Xinggang; SUN Shouyu; WANG Youzhi
2005-01-01
In modern low-altitude terrain-following guidance, a method of constructing the digital surface model (DSM) is presented in this paper to reduce the threat that tall surface features pose to flying vehicles. The relationship between the size of an isolated obstacle and the vertical- and cross-section intervals in the DSM model is established. The definition and classification of isolated obstacles are proposed, and a method for determining such isolated obstacles in the DSM model is given. The simulation of a typical urban district shows that when the vertical- and cross-section DSM intervals are between 3 m and 25 m, the threat to low-altitude terrain-following flight is greatly reduced, and the amount of data required by the DSM model for monitoring a flying vehicle in real time is also smaller. Experiments show that the optimal results are obtained for an interval of 12.5 m in the vertical- and cross-sections, with a 1:10 000 DSM scale grade.
A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series
Directory of Open Access Journals (Sweden)
Fernando Luiz Cyrino Oliveira
2014-01-01
The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics, and as such can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. We then propose a new approach for setting both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The obtained results using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
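A flavour of the bootstrap idea — resampling residuals to obtain confidence intervals for autoregressive parameters — can be sketched for a single AR(1) series. This is a generic residual-bootstrap sketch with invented data, not the PBMOM method itself, which works with periodic (monthly) models and moment-based order selection.

```python
import numpy as np

def fit_ar1(x):
    """OLS estimate of phi in x[t] = phi * x[t-1] + e[t]."""
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

def bootstrap_ci(x, n_boot=500, seed=0):
    """Residual-bootstrap 95% confidence interval for the AR(1) coefficient."""
    rng = np.random.default_rng(seed)
    phi = fit_ar1(x)
    resid = x[1:] - phi * x[:-1]
    estimates = []
    for _ in range(n_boot):
        e = rng.choice(resid, size=resid.size, replace=True)
        xb = np.empty_like(x)
        xb[0] = x[0]
        for t in range(1, x.size):          # rebuild a synthetic series
            xb[t] = phi * xb[t - 1] + e[t - 1]
        estimates.append(fit_ar1(xb))
    return np.percentile(estimates, [2.5, 97.5])

# synthetic AR(1) series with true phi = 0.7
rng = np.random.default_rng(42)
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.7 * x[t - 1] + rng.normal()
lo, hi = bootstrap_ci(x)
```

The interval [lo, hi] quantifies the parameter uncertainty that the abstract argues should drive both order selection and scenario generation.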
Finite-element method modeling of hyper-frequency structures
International Nuclear Information System (INIS)
Zhang, Min
1990-01-01
The modeling of microwave propagation problems, including the eigenvalue problem and the scattering problem, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated in an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, non-physical solutions; a penalty function method has been introduced to reduce these spurious solutions. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, obtains the reflection coefficient more efficiently than the matrix method. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma exciter; it allows us to understand the role of the different physical parameters of the exciter in the coupling of microwave energy to the plasma mode and to the mode without plasma. (author) [fr
Results from the Savannah River Laboratory model validation workshop
International Nuclear Information System (INIS)
Pepper, D.W.
1981-01-01
To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorological data obtained at SRL from 1975 to 1977. Individual laboratories used their models to calculate concentrations over daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations
Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.
2015-01-01
The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for
Waste glass corrosion modeling: Comparison with experimental results
International Nuclear Information System (INIS)
Bourcier, W.L.
1993-11-01
A chemical model of glass corrosion will be used to predict the rates of release of radionuclides from borosilicate glass waste forms in high-level waste repositories. The model will be used both to calculate the rate of degradation of the glass and to predict the effects of chemical interactions between the glass and repository materials such as spent fuel, canister and container materials, backfill, cements, grouts, and others. Coupling between the degradation processes affecting all these materials is expected. Models for borosilicate glass dissolution must account for the processes of (1) kinetically controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory, embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models is necessary before the models can be used in repository performance assessment calculations
Regionalization of climate model results for the North Sea
Energy Technology Data Exchange (ETDEWEB)
Kauker, F.
1999-07-01
A dynamical downscaling is presented that allows an estimation of the potential effects of climate change on the North Sea. To this end, the ocean general circulation model OPYC is adapted for application on a shelf by adding a lateral boundary formulation and a tide model. In this set-up the model is forced, first, with data from the ECMWF reanalysis for model validation and the study of natural variability, and, second, with data from climate change experiments to estimate the effects of climate change on the North Sea. (orig.)
Accurate Electromagnetic Modeling Methods for Integrated Circuits
Sheng, Z.
2010-01-01
The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on
Reduced Order Modeling Methods for Turbomachinery Design
2009-03-01
and Materials Conference, May 2006. [45] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. New York, NY: Chapman & Hall... Macian-Juan, and R. Chawla, "A statistical methodology for quantification of uncertainty in best estimate code physical models," Annals of Nuclear En
Introduction to mathematical models and methods
Energy Technology Data Exchange (ETDEWEB)
Siddiqi, A. H.; Manchanda, P. [Gautam Budha University, Gautam Budh Nagar-201310 (India); Department of Mathematics, Guru Nanak Dev University, Amritsar (India)
2012-07-17
Some well known mathematical models in the form of partial differential equations representing real world systems are introduced along with fundamental concepts of Image Processing. Notions such as seismic texture, seismic attributes, core data, well logging, seismic tomography and reservoirs simulation are discussed.
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, making omissions or miscalculations very likely. This situation has fostered research on automated analysis methods to support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Directory of Open Access Journals (Sweden)
Salabura Piotr
2017-01-01
The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes in order to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.
Spinal cord stimulation: modeling results and clinical data
Struijk, Johannes J.; Struijk, J.J.; Holsheimer, J.; Barolat, Giancarlo; He, Jiping
1992-01-01
The potential distribution in volume conductor models of the spinal cord at cervical, mid-thoracic and low-thoracic levels, due to epidural stimulation, was calculated. Threshold stimuli of modeled myelinated dorsal column and dorsal root fibers were calculated and were compared with perception
Urban traffic noise assessment by combining measurement and model results
Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.
2013-01-01
A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,
Noise and dose modeling for pediatric CT optimization: preliminary results
International Nuclear Information System (INIS)
Miller Clemente, Rafael A.; Perez Diaz, Marlen; Mora Reyes, Yudel; Rodriguez Garlobo, Maikel; Castillo Salazar, Rafael
2008-01-01
Full text: A multiple linear regression model was developed to predict noise and dose in pediatric computed tomography imaging for head and abdominal examinations. Relative values of noise and of the volumetric computed tomography dose index were used to fit the noise and dose models, respectively. 54 images of physical phantoms were acquired. The independent variables considered included phantom diameter, tube current and kilovoltage, x-ray beam collimation, reconstruction diameter and the equipment's post-processing filters. Predicted values show good agreement with measurements, with the noise model (adjusted R2 = 0.953) performing better than the dose model (adjusted R2 = 0.744). Tube current, object diameter, beam collimation and reconstruction filter were identified as the most influential factors in the models. (author)
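The adjusted R² figures quoted above come from ordinary multiple linear regression. A generic sketch of such a fit — with invented predictor ranges and a synthetic linear response, not the paper's phantom data — is:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with intercept; returns coefficients and adjusted R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    n, p = A.shape
    r2 = 1.0 - resid.var() / y.var()
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)   # penalise model size
    return beta, r2_adj

# hypothetical predictors: phantom diameter (cm), tube current (mA), kilovoltage (kV)
rng = np.random.default_rng(3)
X = rng.uniform([10, 50, 80], [30, 300, 140], size=(54, 3))
# synthetic relative noise, linear in the predictors plus measurement scatter
y = 2.0 + 0.03 * X[:, 0] - 0.004 * X[:, 1] - 0.002 * X[:, 2] + rng.normal(0, 0.05, 54)
beta, r2_adj = fit_ols(X, y)
```

The adjusted R² penalises the number of predictors, which is why the paper reports it rather than the raw R² for models with six independent variables and 54 images.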
Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins
Tschirhart, Hugo; Platini, Thierry
2018-05-01
In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
Mathematical modeling of ignition of woodlands resulted from accident on the pipeline
Perminov, V. A.; Loboda, E. L.; Reyno, V. V.
2014-11-01
Accidents occurring at pipeline sites are accompanied by environmental damage, economic loss, and sometimes loss of life. In this paper we calculated the sizes of the possible ignition zones in emergency situations on pipelines located close to forest, accompanied by the appearance of fireballs. Using the method of mathematical modeling, we calculate the maximum size of the ignition zones of vegetation resulting from accidental releases of flammable substances. Within the context of a general mathematical model of forest fires, the paper gives a new mathematical setting and a method of numerical solution of the forest fire modeling problem. The boundary-value problem is solved numerically using the method of splitting according to physical processes. The dependences of the size of the ignited forest fuel on the amount of leaked flammable substance and on the moisture content of vegetation are obtained.
3D virtual human rapid modeling method based on top-down modeling mechanism
Directory of Open Access Journals (Sweden)
LI Taotao
2017-01-01
Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.
The Method for Assessing and Forecasting Value of Knowledge in SMEs – Research Results
Directory of Open Access Journals (Sweden)
Justyna Patalas-Maliszewska
2010-10-01
Decisions by SMEs regarding knowledge development are made at a strategic level (Haas-Edersheim, 2007). Related to knowledge management are approaches to "measure" knowledge, where the literature distinguishes between qualitative and quantitative methods of valuating intellectual capital. Although there is quite a range of such methods for building an intellectual capital reporting system, none of them is really widely recognized. This work presents a method for assessing the effectiveness of investing in human resources, taking existing methods into consideration. The presented method focuses on SMEs (given their importance for, especially, regional development). It consists of four parts: an SME reference model, an indicator matrix to assess investments in knowledge, innovation indicators, and the GMDH algorithm for decision making. The method is exemplified by a case study including 10 companies.
The anchors of steel wire ropes, testing methods and their results
Directory of Open Access Journals (Sweden)
J. Krešák
2012-10-01
The present paper introduces an application of the acoustic and thermographic methods in the defectoscopic testing of immobile steel wire ropes at the most critical point, the anchor. First measurements and results obtained with these new defectoscopic methods are shown. In defectoscopic tests at the anchor, the widely used magnetic method gives unreliable results and therefore presents a problem for steel wire defectoscopy. Application of the two new methods to steel wire defectoscopy at the anchor point will enable increased safety at the anchors of steel wire ropes in bridge, roof, tower and aerial cable lift constructions.
Directory of Open Access Journals (Sweden)
Viswanathan Arunachalam
2013-01-01
The classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
Meteorological Uncertainty of atmospheric Dispersion model results (MUD)
DEFF Research Database (Denmark)
Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario....
Verification of Simulation Results Using Scale Model Flight Test Trajectories
National Research Council Canada - National Science Library
Obermark, Jeff
2004-01-01
.... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...
Box photosynthesis modeling results for WRF/CMAQ LSM
U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the great number of metal stent accessories and electrical equipment mounted on the tunnel walls, make the laser point cloud data include many non-tunnel-section points (hereinafter referred to as non-points), thus affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are used further to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.
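The core of such a filter — fit an elliptic section, then discard points far from the fitted surface — can be sketched in 2-D. This is a deliberately simplified, centred, axis-aligned version with synthetic data, not the paper's full elliptic cylindrical algorithm with axis extraction and regional segmentation.

```python
import numpy as np

def fit_axis_aligned_ellipse(pts):
    """Least-squares fit of x^2/a^2 + y^2/b^2 = 1 (centred, axis-aligned)."""
    M = pts ** 2                            # linear in (1/a^2, 1/b^2)
    p, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    return 1.0 / np.sqrt(p)                 # semi-axes (a, b)

def filter_section(pts, tol=0.05, n_iter=3):
    """Iteratively fit the ellipse and drop points far from it (non-points)."""
    keep = np.ones(len(pts), bool)
    for _ in range(n_iter):
        a, b = fit_axis_aligned_ellipse(pts[keep])
        resid = np.abs((pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 - 1.0)
        keep = resid < tol
    return keep, (a, b)

# synthetic tunnel cross-section plus protruding "bolt" points inside the wall
theta = np.linspace(0, 2 * np.pi, 400)
wall = np.column_stack([3.0 * np.cos(theta), 2.7 * np.sin(theta)])
bolts = wall[::40] * 0.9
pts = np.vstack([wall, bolts])
keep, axes = filter_section(pts)
```

Because the wall points dominate the initial fit, the protruding points acquire large residuals and are removed, after which the refit recovers the true semi-axes.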
Some Econometric Results for the Blanchard-Watson Bubble Model
DEFF Research Database (Denmark)
Johansen, Soren; Lange, Theis
The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), i.i.d. with mean zero and finite variance. We take ρ > 1 so...
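A minimal simulation of this recursion is easy to write down; the parameter values below are invented for illustration (the point of the model is that with ρ > 1 the process grows explosively while s(t) = 1 and collapses back to pure noise when s(t) = 0).

```python
import numpy as np

def simulate_bubble(n=500, rho=1.05, p=0.9, sigma=1.0, seed=0):
    """Blanchard-Watson bubble: y[t] = s[t]*rho*y[t-1] + e[t],
    with s[t] ~ Bernoulli(p) and e[t] ~ N(0, sigma^2).
    With probability 1-p the bubble collapses and y restarts from the noise."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        s = rng.random() < p
        y[t] = s * rho * y[t - 1] + rng.normal(0.0, sigma)
    return y

y = simulate_bubble()
```

The simulated paths show the characteristic pattern of episodic explosive growth punctuated by crashes that motivates the econometric analysis.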
The 3D Reference Earth Model: Status and Preliminary Results
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength, 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publicly available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long period dataset and present a preliminary version of REM-3D in the upper mantle. In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the
Continual integration method in the polaron model
International Nuclear Information System (INIS)
Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.
1981-01-01
The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of path integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, have been considered. The thermodynamics of the polaron system has been investigated; namely, high-temperature expansions for the mean energy and the effective polaron mass have been studied [ru
The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method
Directory of Open Access Journals (Sweden)
Dewei Zhang
2014-01-01
A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and the position of the quadrotor. A fully custom quadrotor is developed to verify the correctness of the dynamic model and the control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving target tracking control.
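The flavour of PID control with feedback linearization can be shown on the altitude channel alone. The mass, gains, and time step below are invented, and this sketch ignores attitude coupling, actuator dynamics, and the backstepping design of the paper; it only illustrates how cancelling gravity in the thrust command reduces the altitude loop to a double integrator under PID.

```python
# Minimal altitude loop: feedback linearization (gravity cancellation) + PID.
m, g, dt = 1.2, 9.81, 0.01          # hypothetical mass (kg), gravity, step (s)
kp, kd, ki = 8.0, 4.0, 2.0          # hypothetical PID gains

z, vz, integ = 0.0, 0.0, 0.0        # altitude (m), vertical speed, error integral
z_ref = 1.0                         # hover setpoint (m)
for _ in range(2000):               # 20 s of simulated flight
    err = z_ref - z
    integ += err * dt
    u = kp * err - kd * vz + ki * integ     # PID acceleration command
    thrust = m * (g + u)            # feedback linearization: cancel gravity
    az = thrust / m - g             # resulting double-integrator dynamics
    vz += az * dt                   # explicit Euler integration
    z += vz * dt
```

After the transient the vehicle settles at the 1 m setpoint; in the paper, the same cancellation idea is applied to the full attitude/position dynamics.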
The animal model determines the results of Aeromonas virulence factors
Directory of Open Access Journals (Sweden)
Alejandro Romero
2016-10-01
The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The involvement of T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics previously observed only in invertebrate models. Infections using the mutant AH-1∆vapA, lacking the gene coding for the surface S-layer, suggested that this protein was not strictly necessary for the bacteria once inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS), and AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study
International Nuclear Information System (INIS)
Puncher, M.; Birchall, A.; Bull, R. K.
2012-01-01
Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20 %. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
Semi-Lagrangian methods in air pollution models
Directory of Open Access Journals (Sweden)
A. B. Hansen
2011-06-01
Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties listed by Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.
The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter, by Kaas and Nielsen (2010).
Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.
All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.
The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.
The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
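The classical semi-Lagrangian step discussed above can be sketched in one dimension: trace each grid node back along the wind to its departure point and interpolate the advected field there with cubic Lagrange weights. This is an illustrative sketch, not code from the cited schemes; the grid size, wind speed and Gaussian profile are arbitrary choices. Note that for a spatially uniform wind the same stencil weights apply at every node, so total mass is conserved exactly; it is deformational flow that breaks conservation and motivates the LMCSL modification.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One advection step: classical semi-Lagrangian cubic Lagrange
    interpolation on a periodic 1-D grid (constant wind u)."""
    n = q.size
    x_dep = np.arange(n) - u * dt / dx       # departure points (grid units)
    i0 = np.floor(x_dep).astype(int)         # left neighbour index
    a = x_dep - i0                           # fractional position in [0,1)
    # Cubic Lagrange weights for the 4-point stencil i0-1 .. i0+2.
    w_m1 = -a * (a - 1.0) * (a - 2.0) / 6.0
    w_0 = (a + 1.0) * (a - 1.0) * (a - 2.0) / 2.0
    w_p1 = -(a + 1.0) * a * (a - 2.0) / 2.0
    w_p2 = (a + 1.0) * a * (a - 1.0) / 6.0
    return (w_m1 * q[(i0 - 1) % n] + w_0 * q[i0 % n]
            + w_p1 * q[(i0 + 1) % n] + w_p2 * q[(i0 + 2) % n])

n, u, dx = 100, 1.0, 1.0
dt = 2.5                                     # CFL > 1 is allowed for SL schemes
x = np.arange(n) * dx
q = np.exp(-0.5 * ((x - 25.0) / 4.0) ** 2)   # smooth Gaussian profile
mass0 = q.sum()
for _ in range(40):                          # 40 steps * 2.5 = one revolution
    q = semi_lagrangian_step(q, u, dt, dx)
mass_drift = abs(q.sum() - mass0) / mass0    # exact here (uniform wind)
peak_error = abs(q.max() - 1.0)              # interpolation damping of the peak
```

After one full revolution the profile returns to its starting position with only slight, symmetric smoothing of the peak, while total mass is conserved to round-off for this uniform-wind case.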
Justification of the concept of mathematical methods and models in making decisions on taxation
KORKUNA NATALIA MIKHAYLOVNA
2017-01-01
The paper presents the concept of the application of mathematical methods and models in making decisions on taxation in Ukraine as a phased process. The result of the process is the selection of an effective decision based on regression and optimization models.
"Method, system and storage medium for generating virtual brick models"
DEFF Research Database (Denmark)
2009-01-01
An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...
Recent shell-model results for exotic nuclei
Directory of Open Access Journals (Sweden)
Utsuno Yusuke
2014-03-01
We report on our recent advancement in the shell model and its applications to exotic nuclei, focusing on shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM). First, we test the validity of the monopole-based universal interaction (VMU) as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and the antimony isotopes. Next, we give a brief overview of recent developments in the MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.
Laser filamentation mathematical methods and models
Lorin, Emmanuel; Moloney, Jerome
2016-01-01
This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and, in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5x10^9 V/cm, or intensity I = 3.5x10^16 W/cm^2). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...
Models and methods of emotional concordance.
Hollenstein, Tom; Lanteigne, Dianna
2014-04-01
Theories of emotion generally posit the synchronized, coordinated, and/or emergent combination of psychophysiological, cognitive, and behavioral components of the emotion system--emotional concordance--as a functional definition of emotion. However, the empirical support for this claim has been weak or inconsistent. As an introduction to this special issue on emotional concordance, we consider three domains of explanations as to why this theory-data gap might exist. First, theory may need to be revised to more accurately reflect past research. Second, there may be moderating factors such as emotion regulation, context, or individual differences that have obscured concordance. Finally, the methods typically used to test theory may be inadequate. In particular, we review a variety of potential issues: intensity of emotions elicited in the laboratory, nonlinearity, between- versus within-subject associations, the relative timing of components, bivariate versus multivariate approaches, and diversity of physiological processes. Copyright © 2013 Elsevier B.V. All rights reserved.
Theoretical methods and models for mechanical properties of soft biomaterials
Directory of Open Access Journals (Sweden)
Zhonggang Feng
2017-06-01
We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.
Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering
1996-05-01
Examination was made of the applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface wave phase velocity in a microtremor for the estimation of the underground structure. In this examination, microtremor data recorded in Morioka City, Iwate Prefecture, was used. In the SAC method, a spatial autocorrelation function with the frequency as a variable is determined from microtremor data observed by circular arrays. Then, the Bessel function is fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, for the determination of the phase velocity. The result of the AR model application in this study was compared with the results of the conventional BPF and FFT methods. It was found that the phase velocities obtained by the BPF and FFT methods were more dispersed than those obtained by the AR model. The dispersion in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
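The core of the SPAC fit described above, matching a Bessel J0 curve to the azimuthally averaged correlation coefficient to recover the phase velocity, can be sketched as follows. This is a toy reconstruction, not the study's processing chain: the array radius, frequency band, and noiseless synthetic coefficients are all assumed values, and J0 is evaluated from its integral representation to keep the sketch dependency-free.

```python
import numpy as np

def bessel_j0(x):
    # J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, midpoint rule.
    m = 2000
    t = (np.arange(m) + 0.5) * np.pi / m
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.cos(np.outer(x, np.sin(t))).mean(axis=1)

r = 30.0                              # assumed seismograph separation [m]
freqs = np.linspace(1.0, 8.0, 15)     # assumed frequency band [Hz]
c_true = 300.0                        # "unknown" phase velocity [m/s]
rho_obs = bessel_j0(2.0 * np.pi * freqs * r / c_true)   # synthetic SPAC coefficients

# Grid search for the phase velocity whose J0 curve best fits the coefficients.
candidates = np.arange(100.0, 601.0, 1.0)
misfit = [np.sum((rho_obs - bessel_j0(2.0 * np.pi * freqs * r / c)) ** 2)
          for c in candidates]
c_est = float(candidates[int(np.argmin(misfit))])
```

In practice the observed coefficients are noisy and frequency-dependent, so the fit is done per frequency band to obtain a dispersion curve rather than a single velocity.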
Energy Technology Data Exchange (ETDEWEB)
Sato, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center
1997-05-27
Discussed in this report is wavefield simulation in the 3-dimensional seismic survey. As exploration targets grow deeper and more complicated in structure, survey methods are increasingly 3-dimensional. There are several modelling methods for the numerical calculation of 3-dimensional wavefields, such as the difference method, the pseudospectral method, and the like, all of which demand an exorbitantly large memory and long calculation time, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. As compared with the difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs the result in fullwave just like the difference method, and does not cause numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The object domain is divided among the processors, and each processor takes care only of its share, so that the parallel computation as a whole realizes a very high-speed computation. By the use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
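The report's point that the pseudospectral method avoids numerical dispersion can be illustrated in one dimension: a Fourier (pseudospectral) derivative is exact for every wavenumber the grid resolves, whereas a finite-difference stencil systematically underestimates derivatives of the shorter waves. A minimal numpy sketch (the grid and test field are arbitrary choices, not from the report):

```python
import numpy as np

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
u = np.sin(3.0 * x)                            # band-limited test wavefield
h = L / n

# Pseudospectral derivative: differentiate in Fourier space.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)       # angular wavenumbers
du_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Second-order central finite difference for comparison.
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)

exact = 3.0 * np.cos(3.0 * x)
err_spec = np.max(np.abs(du_spec - exact))     # round-off level
err_fd = np.max(np.abs(du_fd - exact))         # truncation-error level
```

The spectral derivative is accurate to round-off, while the finite-difference error is set by the stencil's truncation error; in a time-stepped wave simulation that error accumulates as grid dispersion.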
International Nuclear Information System (INIS)
Wu Shengxing; Chen Xudong; Zhou Jikai
2012-01-01
Highlights: (1) Tensile strength of concrete increases with increasing strain rate. (2) The strain rate sensitivity of the tensile strength of concrete depends on the test method. (3) The high stressed volume method can correlate results from various test methods. Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and the Weibull effective volume method for the correlation of the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test: the splitting tensile strength of concrete is more sensitive to an increase in the strain rate than the flexural and direct tensile strengths. The high stressed volume method could be used to obtain a tensile strength value of concrete free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is an inadequate method for describing the failure of concrete specimens determined by different testing methods.
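The Weibull effective-volume idea referred to above follows weakest-link scaling: two test geometries loaded to equal failure probability have strengths related by sigma_1 / sigma_2 = (V_e2 / V_e1)^(1/m), where V_e is the effective (highly stressed) volume and m the Weibull modulus. A sketch of the conversion with purely illustrative numbers (the modulus, volumes and strength below are assumptions, not values from the paper):

```python
# Weakest-link (Weibull) size-effect conversion between two test geometries.
# All numeric values are illustrative assumptions.
m = 12.0                 # assumed Weibull modulus for concrete
V_e_flex = 40.0e3        # assumed effective volume, flexural specimen [mm^3]
V_e_direct = 900.0e3     # assumed effective volume, direct-tension specimen [mm^3]
f_flex = 4.8             # assumed measured flexural strength [MPa]

# Equal failure probability => sigma_1 * V_e1**(1/m) == sigma_2 * V_e2**(1/m)
f_direct_pred = f_flex * (V_e_flex / V_e_direct) ** (1.0 / m)
```

Because the direct-tension specimen stresses a much larger volume, the predicted direct tensile strength comes out below the flexural strength, which is the qualitative trend the size-effect correction is meant to capture.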
Non-Destructive Evaluation Method Based On Dynamic Invariant Stress Resultants
Directory of Open Access Journals (Sweden)
Zhang Junchi
2015-01-01
Most vibration-based damage detection methods rely on changes in frequencies, mode shapes, mode shape curvature, and flexibilities. These methods are limited and typically can only detect the presence and location of damage; they can seldom identify the exact severity of damage to structures. This paper presents research on the development of a new non-destructive evaluation method to identify the existence, location, and severity of damage in structural systems. The method utilizes the concept of invariant stress resultants (ISR). The basic concept of ISR is that, at any given cross section, the resultant internal force distribution in a structural member is not affected by the inflicted damage. The method utilizes dynamic analysis of the structure to simulate direct measurements of acceleration, velocity and displacement simultaneously. The proposed dynamic ISR method is developed and utilized to detect damage from the corresponding changes in mass, damping and stiffness. The objectives of this research are to develop the basic theory of the dynamic ISR method, apply it to specific types of structures, and verify the accuracy of the developed theory. Numerical results demonstrating the application of the method reflect its advanced sensitivity and accuracy in characterizing multiple damage locations.
A model for hot electron phenomena: Theory and general results
International Nuclear Information System (INIS)
Carrillo, J.L.; Rodriquez, M.A.
1988-10-01
We propose a model for the description of hot electron phenomena in semiconductors. Based on this model we are able to reproduce accurately the main characteristics observed in experiments on electric field transport, optical absorption, steady-state photoluminescence and relaxation processes. Our theory contains no free or adjustable parameters, is computationally very fast, and incorporates the main collision mechanisms, including screening and phonon heating effects. Our description is based on a set of nonlinear rate equations in which the interactions are represented by coupling coefficients or effective frequencies. We calculate these coefficients from the characteristic constants and the band structure of the material. (author). 22 refs, 5 figs, 1 tab
Results from Development of Model Specifications for Multifamily Energy Retrofits
Energy Technology Data Exchange (ETDEWEB)
Brozyna, K.
2012-08-01
Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in execution of energy efficiency retrofits measures across the multiple regions a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.
Using the QUAIT Model to Effectively Teach Research Methods Curriculum to Master's-Level Students
Hamilton, Nancy J.; Gitchel, Dent
2017-01-01
Purpose: To apply Slavin's model of effective instruction to teaching research methods to master's-level students. Methods: Barriers to the scientist-practitioner model (student research experience, confidence, and utility value pertaining to research methods as well as faculty research and pedagogical incompetencies) are discussed. Results: The…
Monitoring ambient ozone with a passive measurement technique method, field results and strategy
Scheeren, BA; Adema, EH
1996-01-01
A low-cost, accurate and sensitive passive measurement method for ozone has been developed and tested. The method is based on the reaction of ozone with indigo carmine which results in colourless reaction products which are detected spectrophotometrically after exposure. Coated glass filters are
Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis
1999-01-01
To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...
Evaluation of the effects of green taxes in the Nordic countries. Results and method question
International Nuclear Information System (INIS)
Skou Andersen, M.; Dengsoee, N.; Branth Pedersen, A.
2000-01-01
Green taxes have over the past 10 years become a significant part of environmental regulation in the Nordic countries. The present report is a literature study of the effects of green taxes with regard to CO2 and pesticides. The authors have identified 68 studies of CO2 taxes and 20 studies of pesticide taxes. The report presents a summary of the results from these studies and assesses the methodologies employed for examining the effects of the green taxes. The majority of the reviewed studies are ex-ante studies, which have been carried out in advance of the implementation of the taxes and which are often based on simplified economic models. Ex-post studies, which are based on the actual historical data for the adjustment to the taxes, are relatively few. 20 ex-post studies of CO2 taxes have been identified, while there are no ex-post studies of the pesticide taxes. With regard to the environmental effects of green taxes, the ex-post studies can be relied on to provide the most reliable data. The completed ex-post studies of CO2 taxes do not present unambiguous results, because their focus and methodology differ. Most studies are partial in their focus and relate to one or more sectors of the economy. Some studies were carried out a few years after the introduction of the taxes and do not present an updated assessment of the effects of the taxes. To the extent that it is possible to summarise the present knowledge about the effects of CO2 taxes, there seem to be indications of relatively marked effects in Denmark compared to the other Nordic countries, since Denmark is the only country whose taxed CO2 emissions have been reduced in absolute figures. With regard to Norway and Sweden, effects of the CO2 taxes can be identified in particular sectors in relation to business-as-usual scenarios. Finland's CO2 tax has not been comprehensively evaluated ex-post, but has reached a tax level which gives expectations of
A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains
Directory of Open Access Journals (Sweden)
Kemin Wang
2014-01-01
The model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model; our approach is to work with a truncated version of the infinite model. To obtain a truncation sufficient for model checking Continuous Stochastic Logic system properties, we propose a multistep extending advanced truncation method for model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.
Analytical results for the Sznajd model of opinion formation
Czech Academy of Sciences Publication Activity Database
Slanina, František; Lavička, H.
2003-01-01
Roč. 35, - (2003), s. 279-288 ISSN 1434-6028 R&D Projects: GA ČR GA202/01/1091 Institutional research plan: CEZ:AV0Z1010914 Keywords : agent models * sociophysics Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2003
Meteorological Uncertainty of atmospheric Dispersion model results (MUD)
DEFF Research Database (Denmark)
Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ di...
Some Results On The Modelling Of TSS Manufacturing Lines
Directory of Open Access Journals (Sweden)
Viorel MÎNZU
2000-12-01
This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as “bucket brigades” and also as “TSS lines”, after their first implementation at Toyota in the 1970s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called “Normative Model”, a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines could be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous in between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically, systems with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is enabled by modelling them as hybrid dynamic systems with discontinuous motions.
Some rigorous results on the Hopfield neural network model
International Nuclear Information System (INIS)
Koch, H.; Piasko, J.
1989-01-01
The authors analyze the thermal equilibrium distribution of 2^p mean field variables for the Hopfield model with p stored patterns, in the case where 2^p is small compared to the number of spins. In particular, they give a full description of the free energy density in the thermodynamic limit, and of the so-called symmetric solutions of the mean field equations.
Directory of Open Access Journals (Sweden)
Kaushikbhai C. Parmar
2017-04-01
Simulation gives different results when different methods are used for the same problem. Autodesk Moldflow simulation software provides two different facilities for creating the mold in an injection molding simulation: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in the simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time and coolant temperature, between these two methods.
International Nuclear Information System (INIS)
Taillade, Frédéric; Dumont, Eric; Belin, Etienne
2008-01-01
We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model assumes single scattering. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The backscattered luminance obtained with our analytical model is compared to simulations made using the Monte Carlo method based on multiple scattering processes. Excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the model-estimated backscattered luminance differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and the experimental measurements is smaller than the experimental uncertainty.
Project Deep Drilling KLX02 - Phase 2. Methods, scope of activities and results. Summary report
International Nuclear Information System (INIS)
Ekman, L.
2001-04-01
Geoscientific investigations performed by SKB, including those at the Aespoe Hard Rock Laboratory, have so far comprised the bedrock horizon down to about 1000 m. The primary purposes of the c. 1700 m deep, φ76 mm, subvertical core borehole KLX02, drilled during the autumn of 1992 at Laxemar, Oskarshamn, were to test core drilling technique at large depths and with a relatively large diameter, and to enable geoscientific investigations beyond 1000 m. Drilling of borehole KLX02 was completed very successfully. Results of the drilling commission and the borehole investigations conducted in conjunction with drilling have been reported earlier. The present report provides a summary of the investigations made during a five-year period after completion of drilling. The results, as well as the methods applied, are described. A variety of geoscientific investigations to depths exceeding 1600 m were successfully performed. However, the investigations were not entirely problem-free. For example, borehole equipment got stuck in the borehole on several occasions. Special investigations, among them a fracture study, were initiated in order to reveal the mechanisms behind this problem. Different explanations seem possible, e.g. breakouts from the borehole wall, which may be a specific problem related to the stress situation in deep boreholes. The investigation approach for borehole KLX02 followed, in general outline, the SKB model for site investigations, in which a number of key issues for site characterization are studied. For each of these, a number of geoscientific parameters are investigated and determined. One important aim is to erect a lithological-structural model of the site, which constitutes the basic requirement for modelling mechanical stability, thermal properties, groundwater flow, groundwater chemistry and transport of solutes. The investigations in borehole KLX02 resulted in a thorough lithological-structural characterization of the rock volume near the borehole. In order to
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a common procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate: a simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
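The thermodynamic (power-posterior) idea can be sketched on a conjugate normal-normal toy model whose marginal likelihood is analytic: run Metropolis chains targeting prior x likelihood^beta over a ladder of heating coefficients beta, then integrate the mean log-likelihood over beta. Everything below (data, prior, ladder, chain lengths) is an illustrative assumption, not the paper's groundwater setup.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([1.2, 0.8, 1.5, 0.9, 1.1])    # toy data
sigma, tau = 1.0, 2.0                      # known noise sd, prior sd

def log_lik(theta):
    return (-0.5 * np.sum((y - theta) ** 2) / sigma**2
            - 0.5 * y.size * np.log(2.0 * np.pi * sigma**2))

# Exact log marginal likelihood of the conjugate normal-normal model.
n, s = y.size, y.sum()
a = 1.0 / tau**2 + n / sigma**2
log_z_exact = (-0.5 * np.log(a * tau**2)
               - 0.5 * n * np.log(2.0 * np.pi * sigma**2)
               - 0.5 * np.sum(y**2) / sigma**2
               + 0.5 * s**2 / (a * sigma**4))

def mcmc_mean_loglik(beta, iters=4000, burn=1000):
    """Random-walk Metropolis on the power posterior prior * lik**beta."""
    theta, ll = 0.0, log_lik(0.0)
    out = []
    for i in range(iters):
        prop = theta + rng.normal(0.0, 1.0)
        ll_prop = log_lik(prop)
        log_ratio = (beta * (ll_prop - ll)
                     - 0.5 * (prop**2 - theta**2) / tau**2)
        if np.log(rng.uniform()) < log_ratio:
            theta, ll = prop, ll_prop
        if i >= burn:
            out.append(ll)
    return np.mean(out)

# Heating-coefficient ladder concentrated near beta = 0, then trapezoid rule:
betas = np.linspace(0.0, 1.0, 21) ** 4
means = np.array([mcmc_mean_loglik(b) for b in betas])
log_z_ti = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas)))
```

The identity being exploited is d(log Z)/d(beta) = E_beta[log likelihood], so integrating the chain-averaged log-likelihood from beta = 0 to 1 recovers the log marginal likelihood without the heavy-tailed averages that plague the geometric/harmonic-mean estimators.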
Modeling shallow water flows using the discontinuous Galerkin method
Khan, Abdul A
2014-01-01
Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Energy Technology Data Exchange (ETDEWEB)
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
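The E-step/M-step loop above can be sketched on a deliberately simplified stand-in. This is not the paper's machine model: we assume a scalar linear system (so a plain Kalman filter replaces the EKF), and the M-step reduces to a least-squares regression on the filtered states, so it is illustrative rather than exact EM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system standing in for the synchronous machine:
# x_{k+1} = a*x_k + w_k (process noise q), y_k = x_k + v_k (measurement noise r)
a_true, q, r, N = 0.9, 0.05, 0.2, 400
x = np.zeros(N)
for k in range(1, N):
    x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), N)

a = 0.5  # poor initial parameter guess
for it in range(20):
    # E-step: Kalman filter with the current parameter a
    xf = np.zeros(N)
    P, xf[0] = 1.0, y[0]
    for k in range(1, N):
        xp, Pp = a * xf[k - 1], a * a * P + q   # predict
        K = Pp / (Pp + r)                       # Kalman gain
        xf[k] = xp + K * (y[k] - xp)            # update
        P = (1 - K) * Pp
    # M-step: least-squares fit of the transition parameter to the state
    # estimates (filtered states stand in for the smoothed states of exact EM)
    a = np.sum(xf[1:] * xf[:-1]) / np.sum(xf[:-1] ** 2)
```

Iterating the two steps pulls the parameter from the poor initial guess toward the true value used to generate the data.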
Automatic Tree Data Removal Method for Topography Measurement Result Using Terrestrial Laser Scanner
Yokoyama, H.; Chikatsu, H.
2017-02-01
Recently, laser scanning has been receiving greater attention as a useful tool for real-time 3D data acquisition, and various applications such as city modelling, DTM generation and 3D modelling of cultural heritage sites have been proposed. Digital archiving of cultural heritage sites has likewise demanded advanced digital data processing. However, robust filtering methods for distinguishing on- and off-terrain points with a terrestrial laser scanner still face many issues. Past investigations have reported filtering methods for data from airborne laser scanners, but efficient methods for removing trees from the terrain points of cultural heritage sites have not been considered. In this paper, the authors describe a new robust filtering method for cultural heritage sites using a terrestrial laser scanner with "the echo digital processing technology", one of the latest data processing techniques for terrestrial laser scanners.
Regionalization of climate model results for the North Sea
Energy Technology Data Exchange (ETDEWEB)
Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2000-07-01
A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993 it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability in simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time-scale ranges are investigated: weekly to seasonal, and longer than seasonal. Shorter time scales, e.g. for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described, and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the North Sea shelf sea. The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2" the time-mean sea level increases by up to 25 cm in the German Bight in winter, of which 15 cm are due to the surface forcing and 10 cm to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found to be within the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and
Guiding center model to interpret neutral particle analyzer results
Englert, G. W.; Reinmann, J. J.; Lauver, M. R.
1974-01-01
The theoretical model, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma, is discussed. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge-exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about the mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the drift-to-mean-cyclotron-energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating ion density and E-field variation with plasma radius.
International Nuclear Information System (INIS)
BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.
2002-01-01
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.
2004-01-01
The method of characteristics is well known for its slow convergence; consequently, as is often done for SN methods, the Generalized Minimal Residual approach (GMRES) has been investigated for its practical implementation and its high reliability. GMRES is one of the most effective Krylov iterative methods for solving large linear systems. Moreover, the system has been 'left preconditioned' with the Algebraic Collapsing Acceleration (ACA), a variant of the Diffusion Synthetic Acceleration (DSA) based on I. Suslov's earlier work. This paper presents the first numerical results of these methods in 2D geometries with material discontinuities. Indeed, previous investigations have shown a degraded effectiveness of Diffusion Synthetic Acceleration with this kind of geometry. Results are presented for 9 x 9 Cartesian assemblies in terms of the speed of convergence of the inner (fixed-source) iterations of the method of characteristics. They show a significant improvement in the convergence rate. (authors)
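The structure of a left-preconditioned GMRES solve can be illustrated with SciPy on a stand-in system. The tridiagonal operator and incomplete-LU preconditioner below are illustrative substitutes for the transport system and the ACA preconditioner, not the actual code described above.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# A 1-D diffusion-like operator stands in for the transport iteration matrix
n = 200
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete LU factorization plays the role of the coarse (ACA/DSA-like)
# preconditioner applied to each Krylov iterate
ilu = spilu(A)
M = LinearOperator((n, n), ilu.solve)

# Restarted, preconditioned GMRES; info == 0 signals convergence
x, info = gmres(A, b, M=M, restart=30, maxiter=200)
```

Without the preconditioner the Krylov iteration still converges for this well-conditioned toy matrix; the point of ACA-style preconditioning is that it keeps the iteration count low for the badly conditioned systems that arise with strong material discontinuities.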
Energy Technology Data Exchange (ETDEWEB)
BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.
2002-06-03
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
Ill Posedness Results for Generalized Water Wave Models
Teyekpiti, Vincent Tetteh
2013-01-01
In the first part of the study, the weak asymptotic method is used to find singular solutions of the shallow water system in both one and two space dimensions. The singular solutions so constructed are allowed to contain Dirac-delta distributions (Espinosa & Omel'yanov, 2005). The idea is to construct complex-valued approximate solutions which become real-valued in the distributional limit. The approach, which extends the range of possible singular solutions, is used to construct solutions ...
Considerations on Modeling Strategies of the Financial Result
Directory of Open Access Journals (Sweden)
Lucian Cernuşca
2012-12-01
Full Text Available This study's objective is to highlight some of the strategies used to maximize or minimize the accounting result under the impulse of bad accounting. Although we witness manipulation of the accounting result, the procedure is carried out according to the letter of the law, being exploited by some entities aware of the gaps in enforcement and accounting regulation.
A variation method in the optimization problem of the minority game model
International Nuclear Information System (INIS)
Blazhyijevs'kij, L.; Yanyishevs'kij, V.
2009-01-01
This article contains the results of applying a variation method to the optimization problem in the minority game model. The suggested approach is shown to give relevant results on the phase transition in the model. Other methods pertinent to the problem have also been assessed.
International Nuclear Information System (INIS)
Geroyannis, V.S.
1990-01-01
In this paper, a numerical method, called the complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method, called the multiple partition technique, is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
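The classical, maximum-likelihood side of model selection mentioned above can be sketched by scoring a class of candidate models with an information criterion. The polynomial candidate class and the AIC below are our own illustrative choices, not the report's decision-theoretic method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from a quadratic 'system'; the candidate class is polynomials of order 0..6
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.2, x.size)

best_order, best_aic = None, np.inf
for k in range(7):
    coeffs = np.polyfit(x, y, k)                 # ML fit for Gaussian noise
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)                   # ML estimate of noise variance
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1.0)
    aic = 2 * (k + 2) - 2 * loglik               # k+1 coefficients plus the variance
    if aic < best_aic:
        best_order, best_aic = k, aic
```

This illustrates the report's caution as well: with little data, the likelihood-based score can prefer an over-parameterized candidate, which is one motivation for bringing model use into the selection criterion.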
The physical model of a terraced plot: first results
Perlotto, Chiara; D'Agostino, Vincenzo; Buzzanca, Giacomo
2017-04-01
Terrace building expanded in the 19th century because of increased demographic pressure and the need to crop additional areas on steeper slopes. Terraces are also important in regulating the hydrological behavior of the hillslope. Few studies are available in the literature on rainfall-runoff processes and flood-risk mitigation in terraced areas. Bench terraces, by reducing the terrain slope and the length of the overland flow, control the runoff flow velocity, facilitating drainage and thus reducing soil erosion. The study of the hydrologic-hydraulic function of terraced slopes is essential in order to evaluate their possible contribution to flood-risk mitigation while preserving the landscape value. This research aims to better characterize the times of the hydrological response of a hillslope plot bounded by a dry-stone wall, considering both the overland flow and the groundwater. A physical model at quasi-real scale has been built to reproduce the behavior of a 3% outward-sloped terrace under bare-soil conditions. The model consists of a steel box (1 m wide, 3.3 m long, 2 m high) containing the hillslope terrain. The terrain is equipped with two piezometers, 9 TDR sensors measuring the volumetric water content, a surface spillway at the head releasing the steady discharge under test, and a scale at the wall base to measure the outflowing discharge. The experiments deal with different initial moisture conditions (non-saturated and saturated) and discharges of 19.5, 12.0 and 5.0 l/min. Each experiment has been replicated, for a total of 12 tests. The volumetric water content measured by the 9 TDR sensors provided a quite satisfactory representation of the soil moisture during the runs. Then, different lag times at the outlet since the inflow initiation were measured both for runoff and groundwater. Moreover, the time of depletion and the piezometer
A result-driven minimum blocking method for PageRank parallel computing
Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan
2017-01-01
Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simple and the cost of I/O transmission is cut down. We conduct experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
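Any blocked implementation must reproduce the standard damped power iteration, which can be written as a baseline in a few lines; the four-node link graph below is invented for illustration.

```python
import numpy as np

# Tiny directed web graph (illustrative): node -> list of outlinks.
# Standard PageRank power iteration, not the paper's blocking scheme --
# a baseline any blocked version must reproduce.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85

# Column-stochastic link matrix: M[dst, src] = 1 / outdegree(src)
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)
for _ in range(500):
    r_new = (1 - d) / n + d * M @ r    # damped power-iteration step
    if np.abs(r_new - r).sum() < 1e-12:
        break
    r = r_new
```

Because every column of M sums to one (no dangling nodes in this toy graph), the rank vector stays normalized throughout the iteration.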
Modeling of NiTiHf using finite difference method
Farjam, Nazanin; Mehrabi, Reza; Karaca, Haluk; Mirzaeifar, Reza; Elahinia, Mohammad
2018-03-01
NiTiHf is a high-temperature, high-strength shape memory alloy with transformation temperatures above 100 °C. A constitutive model based on the Gibbs free energy is developed to predict the behavior of this material. Two different irrecoverable strains, transformation-induced plastic strain (TRIP) and viscoplastic strain (VP), are considered when using high temperature shape memory alloys (HTSMAs). The first occurs during transformation at high stress levels, and the second is related to creep, which is rate-dependent. The developed model is implemented for NiTiHf under uniaxial loading. A finite difference method is utilized to solve the proposed equations. The material parameters in the equations are calibrated from experimental data. Simulation results are used to investigate the superelastic behavior of NiTiHf. The extracted results are compared with experimental tests of isobaric heating and cooling at different stress levels and with superelastic tests at different temperatures. Further results are generated to investigate the capability of the proposed model in predicting the irrecoverable strain after full transformation in HTSMAs.
SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS
Directory of Open Access Journals (Sweden)
Tetiana D. Panchenko
2016-07-01
Full Text Available The article deals with the creation of methodical support for mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations may be nonlinear functions of the process. The projection-grid method is used as the main tool. Algorithms of iterative methods are described that take the approximate solution prior to the first iteration into account, and adaptive control of the computing process is proposed. An original method is offered for estimating the error of the computed solutions, together with an adaptive procedure for configuring the solver parameters to achieve a given error level. The proposed method can be used for distributed computing.
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
Bennun, Leonardo
2017-07-01
A new smoothing method for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighting coefficients in the smoothing algorithm. The smoothing method was conceived for atomic and nuclear spectroscopies, preferably for techniques where net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, does not distort the form or the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, far more efficiently than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in their accuracy. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net-area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In a general case, when this algorithm is applied to experimental results, it is also required that the sought characteristic functions, needed for the weighted smoothing, be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be applied carefully.
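A minimal sketch of the idea, under our own assumptions (a synthetic Poisson-noise spectrum and a Gaussian line shape as the weighting kernel), compares the shape-weighted smooth with a rectangular smooth of the same width.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spectrum: Gaussian peak on a flat background, Poisson counting noise
ch = np.arange(200)
truth = 50 + 400 * np.exp(-0.5 * ((ch - 100) / 4.0) ** 2)
spec = rng.poisson(truth).astype(float)

# Weighted smooth: coefficients follow the expected (Gaussian) line shape,
# normalised to unit sum so total counts are preserved in the interior
half = 6
w = np.exp(-0.5 * (np.arange(-half, half + 1) / 4.0) ** 2)
w /= w.sum()
smooth_w = np.convolve(spec, w, mode='same')

# Rectangular smooth of the same width, for comparison
box = np.full(2 * half + 1, 1.0 / (2 * half + 1))
smooth_r = np.convolve(spec, box, mode='same')
```

In the flat-background region the weighted smooth suppresses the Poisson noise strongly while the peak position survives the filtering; comparing `smooth_w` and `smooth_r` around the peak shows how the kernel shape controls the trade-off between noise reduction and peak distortion.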
Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.
Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K
2017-11-01
Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Energy Technology Data Exchange (ETDEWEB)
Prinn, Ronald [MIT; Webster, Mort [MIT
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
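One common way to span a joint parameter uncertainty space with a small number of scenarios is Latin hypercube sampling over the parameter PDFs. The two distributions below are illustrative assumptions of ours, not the report's fitted PDFs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def latin_hypercube(n, dims, rng):
    """One stratified uniform draw per probability interval, shuffled per dimension."""
    u = (np.arange(n)[None, :] + rng.random((dims, n))) / n
    return np.stack([row[rng.permutation(n)] for row in u], axis=1)

# Two uncertain parameters with illustrative PDFs:
# AEEI ~ Normal(1.0 %/yr, 0.4 %/yr); elasticity of substitution ~ Lognormal
n = 20
U = latin_hypercube(n, 2, rng)
aeei = stats.norm.ppf(U[:, 0], loc=0.010, scale=0.004)
elas = stats.lognorm.ppf(U[:, 1], s=0.3, scale=0.5)
scenarios = np.column_stack([aeei, elas])    # n scenarios spanning the joint space
```

The stratification guarantees exactly one draw per probability interval in each dimension, so even a small scenario set covers the tails of each PDF.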
Comparative analysis of various methods for modelling permanent magnet machines
Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.
2017-01-01
In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air
Dynamic airspace configuration method based on a weighted graph model
Directory of Open Access Journals (Sweden)
Chen Yangzhou
2014-08-01
Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and each air route are computed. Thus, by assigning the vertices and edges these aircraft counts, a weighted graph model comes into being. Accordingly, the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and minimum distance.
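The cut step can be sketched as a spectral bisection of a small weighted graph. The six-vertex airspace and its aircraft counts are invented for illustration, and the Fiedler-vector split stands in for the paper's combined cuts/load-balancing/heuristic algorithm.

```python
import numpy as np

# Toy weighted airspace graph: vertices = key points with aircraft counts,
# edges = air routes (both invented for illustration)
n = 6
counts = np.array([4, 3, 5, 4, 2, 4])            # vertex weights (aircraft per cell)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]

# Graph Laplacian; the eigenvector of its second-smallest eigenvalue
# (the Fiedler vector) yields a natural two-way cut
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)                   # eigenvalues in ascending order
fiedler = vecs[:, 1]

sector = fiedler > np.median(fiedler)            # split into two sectors
load = (counts[sector].sum(), counts[~sector].sum())
```

On this graph the cut falls on the single route joining the two clusters, and the resulting sector workloads are nearly balanced; the paper's load-balancing step would then shift weight across the cut if needed.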
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
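The low-order truncation at the heart of HDMR can be shown on a deterministic toy function of our own choosing: a first-order cut-HDMR around an anchor point reproduces the function up to its (deliberately weak) second-order coupling term.

```python
import numpy as np

# Test function with additive terms plus one weak pairwise coupling
def f(x):
    return x[0] ** 2 + np.sin(x[1]) + 0.1 * x[0] * x[2]

# First-order cut-HDMR around an anchor ('cut') point c:
# f(x) ~ f(c) + sum_i [ f(c with component i replaced by x_i) - f(c) ]
c = np.zeros(3)
f0 = f(c)

def hdmr1(x):
    total = f0
    for i in range(3):
        xi = c.copy()
        xi[i] = x[i]                 # vary one component, hold the rest at the cut
        total += f(xi) - f0
    return total

x = np.array([0.4, 0.8, -0.3])
exact, approx = f(x), hdmr1(x)
```

The residual `exact - approx` equals exactly the coupling term 0.1·x₀·x₂ that the first-order expansion cannot represent, which is the sense in which HDMR is accurate when higher-order correlations are weak.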
Analysis of inelastic neutron scattering results on model compounds ...
Indian Academy of Sciences (India)
Vibrational spectroscopy; nitrogenous bases; inelastic neutron scattering.
MCNP Modeling Results for Location of Buried TRU Waste Drums
International Nuclear Information System (INIS)
Steinman, D K; Schweitzer, J S
2006-01-01
In the 1960s, fifty-five-gallon drums of TRU waste were buried in shallow pits on remote U.S. Government facilities such as the Idaho National Engineering Laboratory (now split into the Idaho National Laboratory and the Idaho Completion Project [ICP]). Subsequently, it was decided to remove the drums and the material in them from the burial pits and send the material to the Waste Isolation Pilot Plant in New Mexico. Several technologies have been tried to locate the drums non-intrusively with enough precision to minimize the chance of material being spread into the environment. One of these technologies is the placement of steel probe holes in the pits into which wireline logging probes can be lowered to measure properties and concentrations of the material surrounding the probe holes for evidence of TRU material. There is also a concern that large quantities of volatile organic compounds (VOC) are present that would contaminate the environment during removal. In 2001, the Idaho National Engineering and Environmental Laboratory (INEEL) built two pulsed-neutron wireline logging tools to measure TRU and VOC around the probe holes: the Prompt Fission Neutron (PFN) and the Pulsed Neutron Gamma (PNG), respectively. They were tested experimentally in surrogate test holes in 2003. The work reported here estimates the performance of the tools using Monte Carlo modelling prior to field deployment. An MCNP model was constructed by INEEL personnel and modified by the authors to assess the ability of the tools to predict quantitatively the position and concentration of TRU and VOC materials disposed around the probe holes. The model was used to simulate the tools scanning the probe holes vertically in five-centimetre increments. A drum was included in the model that could be placed near the probe hole and at other locations out to forty-five centimetres from the probe hole in five-centimetre increments. Scans were performed with no chlorine in the
Solar activity variations of ionosonde measurements and modeling results
Czech Academy of Sciences Publication Activity Database
Altadill, D.; Arrazola, D.; Blanch, E.; Burešová, Dalia
2008-01-01
Roč. 42, č. 4 (2008), s. 610-616 ISSN 0273-1177 R&D Projects: GA AV ČR 1QS300120506 Grant - others:MCYT(ES) REN2003-08376-C02-02; CSIC(XE) 2004CZ0002; AGAUR(XE) 2006BE00112; AF Research Laboratory(XE) FA8718-L-0072 Institutional research plan: CEZ:AV0Z30420517 Keywords : mid-latitude ionosphere * bottomside modeling * ionospheric variability Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.860, year: 2008 http://www.sciencedirect.com/science/journal/02731177
NASA Air Force Cost Model (NAFCOM): Capabilities and Results
McAfee, Julie; Culver, George; Naderi, Mahmoud
2011-01-01
NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics, to predict new project costs, and it is based on historical NASA and Air Force space projects. It is intended for use in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a government version that is restricted and a contractor-releasable version.
The calculation of exchange forces: General results and specific models
International Nuclear Information System (INIS)
Scott, T.C.; Babb, J.F.; Dalgarno, A.; Morgan, J.D. III
1993-01-01
In order to clarify questions about the calculation of the exchange energy of a homonuclear molecular ion, an analysis is carried out of a model problem consisting of the one-dimensional limit of H2+. It is demonstrated that the use of the infinite polarization expansion for the localized wave function in the Holstein-Herring formula yields an approximate exchange energy which at large internuclear distances R has the correct leading behavior to O(e^-R) and is close to but not equal to the exact exchange energy. The extension to the n-dimensional double-well problem is presented
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Advanced methods of solid oxide fuel cell modeling
Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi
2011-01-01
Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
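Goodness-of-fit criteria of the kind referenced above (e.g. ASHRAE Guideline 14, BPI-2400) are usually expressed as CV(RMSE) and NMBE of model predictions against utility bills. A minimal sketch of these two figures of merit; the billing data below are synthetic and purely illustrative:

```python
import numpy as np

def goodness_of_fit(measured, modeled):
    """Return CV(RMSE) and NMBE in percent, the usual calibration fit metrics."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    n = measured.size
    mean = measured.mean()
    cvrmse = 100.0 * np.sqrt(np.sum((measured - modeled) ** 2) / n) / mean
    nmbe = 100.0 * np.sum(measured - modeled) / (n * mean)
    return cvrmse, nmbe

# Synthetic monthly utility bills (kWh) and model predictions -- illustrative only.
bills = [1200, 1100, 950, 800, 700, 650, 700, 750, 820, 950, 1050, 1180]
model = [1150, 1120, 930, 820, 690, 660, 710, 740, 850, 940, 1080, 1150]
cv, bias = goodness_of_fit(bills, model)
```

Monthly criteria are often quoted as CV(RMSE) within roughly 15% and NMBE within ±5%, though the exact thresholds depend on the guideline and billing interval.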
A method for data handling numerical results in parallel OpenFOAM simulations
International Nuclear Information System (INIS)
Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania))
2015-01-01
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
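The region-of-interest idea can be illustrated independently of OpenFOAM: keep only the values inside a user-configured region and discard the rest. The sketch below (plain NumPy, not the authors' traffic-replay method) shows the kind of space savings involved; the geometry and field are made up.

```python
import numpy as np

# Stand-in for a large simulation result: node coordinates and a scalar field.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(100_000, 3))
p = rng.normal(size=coords.shape[0])

# User-configured region of interest: a small sphere inside the unit-cube domain.
center, radius = np.array([0.5, 0.5, 0.5]), 0.1
mask = np.linalg.norm(coords - center, axis=1) <= radius

roi_coords, roi_p = coords[mask], p[mask]   # only these values are stored
savings = 1.0 - mask.mean()                 # fraction of the data discarded
```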
A method for data handling numerical results in parallel OpenFOAM simulations
Energy Technology Data Exchange (ETDEWEB)
Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)]; Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)]
2015-12-31
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Guiding center model to interpret neutral particle analyzer results
International Nuclear Information System (INIS)
Englert, G.W.; Reinmann, J.J.; Lauver, M.R.
1974-01-01
The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge-exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about the mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with a drift-to-mean-cyclotron-energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimates of ion density and E-field variation with plasma radius. (U.S.)
Directory of Open Access Journals (Sweden)
Andreea – Cristina PETRICĂ
2017-03-01
The aim of this study consists in examining the changes in the volatility of daily returns of the EUR/RON exchange rate using, on the one hand, symmetric GARCH models (ARCH and GARCH) and, on the other hand, asymmetric GARCH models (EGARCH, TARCH and PARCH), since the conditional variance is time-varying. The analysis takes into account daily quotations of the EUR/RON exchange rate over the period 4 January 1999 to 13 June 2016. Thus, we are modeling heteroscedasticity by applying different specifications of GARCH models, followed by looking for significant parameters and low information criteria (minimum Akaike Information Criterion). All models are estimated using the maximum likelihood method under the assumption of several distributions of the innovation terms, such as: Normal (Gaussian) distribution, Student's t distribution, Generalized Error distribution (GED), Student's t with fixed df distribution, and GED with fixed parameter distribution. The predominant models turned out to be the EGARCH and PARCH models, and the empirical results point out that the best model for estimating daily returns of the EUR/RON exchange rate is EGARCH(2,1) with asymmetric order 2 under the assumption of Student's t distributed innovation terms. This can be explained by the fact that in the case of the EGARCH model, the restriction regarding the positivity of the conditional variance is automatically satisfied.
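As a simpler cousin of the EGARCH fits described above, the sketch below estimates a plain GARCH(1,1) by Gaussian maximum likelihood on simulated returns; the parameters are illustrative, not EUR/RON estimates, and dedicated packages such as arch provide the full EGARCH/TARCH/PARCH family with alternative innovation distributions.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1): s2[t] = w + a*r[t-1]**2 + b*s2[t-1]."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                  # enforce positivity and covariance stationarity
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                # initialize at the sample variance
    for t in range(1, r.size):
        sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Simulate a GARCH(1,1) return series (true parameters chosen for illustration).
rng = np.random.default_rng(0)
w0, a0, b0, n = 0.05, 0.10, 0.85, 2000
r = np.empty(n)
s2 = w0 / (1 - a0 - b0)                # start at the unconditional variance
for t in range(n):
    r[t] = rng.normal(0.0, np.sqrt(s2))
    s2 = w0 + a0 * r[t] ** 2 + b0 * s2

fit = minimize(garch11_negloglik, x0=[0.1, 0.05, 0.8], args=(r,), method="Nelder-Mead")
w_hat, a_hat, b_hat = fit.x
```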
Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method
Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam
2017-10-01
Soil texture as a basic soil physical property provides basic information on the soil grain size distribution as well as the grain size fraction representation. Currently, there are several methods of particle dimension measurement available that are based on different physical principles. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered to be one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancement, optical methods such as laser diffraction can nowadays also be used for grain size distribution determination in the soil. According to the literature review of domestic as well as international sources related to this topic, it is obvious that the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia, from depths of 15-20 cm and 40-45 cm, respectively, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. Regressions with the three highest regression coefficients (R2) were further investigated. The fit with the highest tightness was observed for the polynomial regression. In view of the results obtained, we recommend using the estimate of the representation of the clay fraction when analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, usage of a small sample amount, applicability to various grain size fraction and soil type classification systems, and a wide range of determined fractions. Therefore, it is necessary to focus on this issue further to address the
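The regression comparison described above can be sketched as follows; the clay-fraction pairs are synthetic stand-ins (laser diffractometry typically underestimates clay relative to the pipette method), not the study's measured data.

```python
import numpy as np

# Illustrative (synthetic) clay-fraction pairs: pipette (%) vs laser diffraction (%).
pipette = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
laser   = np.array([ 4,  7, 10, 14, 19, 24, 30], dtype=float)  # typical underestimation

def r_squared(y, y_hat):
    """Coefficient of determination R^2 of a fit."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit linear and quadratic (polynomial) regressions and compare R^2 on the data.
lin = np.polyfit(laser, pipette, 1)
quad = np.polyfit(laser, pipette, 2)
r2_lin = r_squared(pipette, np.polyval(lin, laser))
r2_quad = r_squared(pipette, np.polyval(quad, laser))
```

On training data a higher-order polynomial can only match or improve R², which is one reason model choice should also weigh parsimony, not R² alone.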
International Nuclear Information System (INIS)
Hristova, R.; Kalchev, B.; Atanasov, D.
2005-01-01
We consider here two basic groups of methods for analysis and assessment of the human factor in the NPP area and give some results from performed analyses as well. The human factor is the human interaction with the design equipment and with the working environment, taking into account the human capabilities and limits. Within the qualitative methods for analysis of the human factor, concepts and structural methods for classifying the information connected with the human factor are considered. Emphasis is given to the HPES method for human factor analysis in NPP. Methods for quantitative assessment of human reliability are considered. These methods allow assigning probabilities to the elements of the already structured information about human performance. This part includes an overview of classical methods for human reliability assessment (HRA, THERP), and methods taking into account specific information about human capabilities and limits and about the man-machine interface (CHR, HEART, ATHEANA). Quantitative and qualitative results concerning the influence of the human factor on the occurrence of initiating events in the Kozloduy NPP are presented. (authors)
Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J
2014-01-07
An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases ranged from caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. False negative rates were also generally contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution were able to accurately quantify most compounds with an average bias of <10% for both matrixes. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.
Extending product modeling methods for integrated product development
DEFF Research Database (Denmark)
Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný
2013-01-01
Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products. Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix...
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
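The first of the three approaches, discretizing the state space and filtering as an HMM, can be sketched for a theta-logistic population model on the log scale; all parameter values below are illustrative, not those of the cited benchmark.

```python
import numpy as np

def normpdf(z, mu, sd):
    """Gaussian density, vectorized."""
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Simulate a theta-logistic population on the log scale (illustrative parameters).
rng = np.random.default_rng(2)
r0, K, theta, sq, so, T = 0.5, 100.0, 1.0, 0.05, 0.1, 100
x = np.empty(T)
x[0] = np.log(20.0)
for t in range(1, T):
    x[t] = x[t - 1] + r0 * (1 - (np.exp(x[t - 1]) / K) ** theta) + rng.normal(0, sq)
y = x + rng.normal(0, so, size=T)          # noisy log-abundance observations

# Discretize log-abundance into m grid states -> transition matrix of the HMM.
m = 200
grid = np.linspace(0.0, 6.0, m)
mu_next = grid + r0 * (1 - (np.exp(grid) / K) ** theta)
P = normpdf(grid[None, :], mu_next[:, None], sq)   # row i: density of next state
P /= P.sum(axis=1, keepdims=True)

# Forward (filtering) recursion over the grid.
alpha = normpdf(grid, y[0], so)
alpha /= alpha.sum()
est = [grid @ alpha]                        # posterior-mean state estimate
for t in range(1, T):
    alpha = (alpha @ P) * normpdf(grid, y[t], so)
    alpha /= alpha.sum()
    est.append(grid @ alpha)

rmse_filter = np.sqrt(np.mean((np.array(est) - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
```

The filtered estimate should track the latent state more closely than the raw observations, which is the point of the state-space treatment.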
A hierarchy of models for simulating experimental results from a 3D heterogeneous porous medium
Vogler, Daniel; Ostvar, Sassan; Paustian, Rebecca; Wood, Brian D.
2018-04-01
In this work we examine the dispersion of conservative tracers (bromide and fluorescein) in an experimentally-constructed three-dimensional dual-porosity porous medium. The medium is highly heterogeneous (σ²Y = 5.7), and consists of spherical, low-hydraulic-conductivity inclusions embedded in a high-hydraulic-conductivity matrix. The bimodal medium was saturated with tracers, and then flushed with tracer-free fluid while the effluent breakthrough curves were measured. The focus for this work is to examine a hierarchy of four models (in the absence of adjustable parameters) with decreasing complexity to assess their ability to accurately represent the measured breakthrough curves. The most information-rich model was (1) a direct numerical simulation of the system in which the geometry, boundary and initial conditions, and medium properties were fully independently characterized experimentally with high fidelity. The reduced-information models included: (2) a simplified numerical model identical to the fully-resolved direct numerical simulation (DNS) model, but using a domain that was one-tenth the size; (3) an upscaled mobile-immobile model that allowed for a time-dependent mass-transfer coefficient; and (4) an upscaled mobile-immobile model that assumed a space-time constant mass-transfer coefficient. The results illustrated that all four models provided accurate representations of the experimental breakthrough curves as measured by global RMS error. The primary component of error induced in the upscaled models appeared to arise from the neglect of convection within the inclusions. We discuss the necessity to assign value (via a utility function or other similar method) to outcomes if one is to further select from among model options. Interestingly, these results suggested that the conventional convection-dispersion equation, when applied in a way that resolves the heterogeneities, yields models with high fidelity without requiring the imposition of a more
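The simplest member of the hierarchy, model (4) with a space-time constant mass-transfer coefficient, reduces during flushing to a two-box system; a minimal explicit-Euler sketch (the coefficients are illustrative, not the paper's characterized values):

```python
import numpy as np

def mobile_immobile_flush(alpha, q, theta_m, theta_im, t_end, dt=0.01):
    """Flush a two-box mobile-immobile system initially saturated with tracer.

    Mobile water (porosity theta_m) is flushed at specific discharge q; a constant
    first-order coefficient alpha exchanges mass with the immobile water (theta_im).
    """
    n = int(t_end / dt)
    cm = cim = 1.0                     # both zones start at unit concentration
    c_out = np.empty(n)
    for i in range(n):
        dcm = (-q * cm - alpha * (cm - cim)) / theta_m
        dcim = alpha * (cm - cim) / theta_im
        cm += dt * dcm
        cim += dt * dcim
        c_out[i] = cm                  # effluent tracks the mobile concentration
    return c_out

c = mobile_immobile_flush(alpha=0.05, q=0.2, theta_m=0.3, theta_im=0.1, t_end=50.0)
```

The fast initial flush followed by a long exchange-limited tail is exactly the breakthrough-tailing behavior such mobile-immobile models are meant to capture.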
Directory of Open Access Journals (Sweden)
Mikulović Jovan Č.
2014-01-01
A methodology for calculation of overvoltages in transformer windings, based on a numerical method of inverse Laplace transform, is presented. The mathematical model of transformer windings is described by partial differential equations corresponding to distributed-parameter electrical circuits. The procedure for calculating overvoltages is applied to windings having either an isolated neutral point, a grounded neutral point, or a neutral point grounded through an impedance. A comparative analysis of the calculation results obtained by the proposed numerical method and by an analytical method of calculation of overvoltages in transformer windings is presented. The results computed by the proposed method and measured voltage distributions, when a voltage surge is applied to a three-phase 30 kVA power transformer, are compared. [Project of the Ministry of Science of the Republic of Serbia, No. TR-33037 and No. TR-33020]
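A standard choice for the numerical inverse Laplace transform step is the Gaver-Stehfest algorithm; the abstract does not specify which method the authors use, so this is one common possibility, checked here against the known pair L{e^(-t)} = 1/(s+1):

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j) *
                math.factorial(j - 1) * math.factorial(k - j) *
                math.factorial(2 * j - k))
        V.append(((-1) ** (k + N // 2)) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the Stehfest formula."""
    V = stehfest_coeffs(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Check against a known transform pair: L{e^(-t)} = 1/(s+1), so f(1) = e^(-1).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Stehfest works well for smooth, non-oscillatory time functions; heavily oscillatory winding responses may need other inversion schemes (e.g. Talbot-type contours).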
First Results of Modeling Radiation Belt Electron Dynamics with the SAMI3 Plasmasphere Model
Komar, C. M.; Glocer, A.; Huba, J.; Fok, M. C. H.; Kang, S. B.; Buzulukova, N.
2017-12-01
The radiation belts were one of the first discoveries of the Space Age some sixty years ago, and radiation belt models have been improving since their discovery. The plasmasphere is one region that has been critically important to determining the dynamics of radiation belt populations. This region of space plays a critical role in describing the distribution of chorus and magnetospheric hiss waves throughout the inner magnetosphere. Both of these waves have been shown to interact with energetic electrons in the radiation belts and can result in the energization or loss of radiation belt electrons. However, radiation belt models have historically been limited in describing the distribution of cold plasmaspheric plasma and have relied on empirically determined plasmasphere models. Some plasmasphere models use an azimuthally symmetric distribution of the plasmasphere, which can fail to capture important plasmaspheric dynamics such as the development of plasmaspheric drainage plumes. Previous work has coupled the kinetic bounce-averaged Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model used to model ring current and radiation belt populations with the Block-adaptive Tree Solar wind Roe-type Upwind Scheme (BATSRUS) global magnetohydrodynamic model to self-consistently obtain the magnetospheric magnetic field and ionospheric potential. The present work utilizes this previous coupling and additionally couples the SAMI3 plasmasphere model to better represent the dynamics of the plasmasphere and its role in determining the distribution of waves throughout the inner magnetosphere. First results on the relevance of chorus, hiss, and ultralow frequency waves to radiation belt electron dynamics will be discussed in the context of the June 1st, 2013 storm-time dropout event.
Architecture oriented modeling and simulation method for combat mission profile
Directory of Open Access Journals (Sweden)
CHEN Xia
2017-05-01
In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. It provides method guidance for combat mission profile design.
Experimental results and modeling of a dynamic hohlraum on SATURN
International Nuclear Information System (INIS)
Derzon, M.S.; Allshouse, G.O.; Deeney, C.; Leeper, R.J.; Nash, T.J.; Matuska, W.; Peterson, D.L.; MacFarlane, J.J.; Ryutov, D.D.
1998-06-01
Experiments were performed at SATURN, a high-current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200–280 eV energy band were consistent with a 110–135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early-time x-ray emission histories and temperature estimates agree well with modeled performance in the 200–280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations.
Phase separated membrane bioreactor - Results from model system studies
Petersen, G. R.; Seshan, P. K.; Dunlop, E. H.
1989-01-01
The operation and evaluation of a bioreactor designed for high intensity oxygen transfer in a microgravity environment is described. The reactor itself consists of a zero headspace liquid phase separated from the air supply by a long length of silicone rubber tubing through which the oxygen diffuses in and the carbon dioxide diffuses out. Mass transfer studies show that the oxygen is film diffusion controlled both externally and internally to the tubing and not by diffusion across the tube walls. Methods of upgrading the design to eliminate these resistances are proposed. Cell growth was obtained in the fermenter using Saccharomyces cerevisiae showing that this concept is capable of sustaining cell growth in the terrestrial simulation.
Phase separated membrane bioreactor: Results from model system studies
Petersen, G. R.; Seshan, P. K.; Dunlop, E. H.
The operation and evaluation of a bioreactor designed for high intensity oxygen transfer in a microgravity environment is described. The reactor itself consists of a zero headspace liquid phase separated from the air supply by a long length of silicone rubber tubing through which the oxygen diffuses in and the carbon dioxide diffuses out. Mass transfer studies show that the oxygen is film diffusion controlled both externally and internally to the tubing and not by diffusion across the tube walls. Methods of upgrading the design to eliminate these resistances are proposed. Cell growth was obtained in the fermenter using Saccharomyces cerevisiae showing that this concept is capable of sustaining cell growth in the terrestrial simulation.
Vatcheva, Ivayla; Bernard, Olivier; de Jong, Hidde; Gouze, Jean-Luc; Mars, Nicolaas; Nebel, B.
2001-01-01
Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of
Application of Statistical Methods to Activation Analytical Results near the Limit of Detection
DEFF Research Database (Denmark)
Heydorn, Kaj; Wanscher, B.
1978-01-01
Reporting actual numbers instead of upper limits for analytical results at or below the detection limit may produce reliable data when these numbers are subjected to appropriate statistical processing. Particularly in radiometric methods, such as activation analysis, where individual standard deviations of analytical results may be estimated, improved discrimination may be based on the Analysis of Precision. Actual experimental results from a study of the concentrations of arsenic in human skin demonstrate the power of this principle.
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
Monte Carlo methods and models in finance and insurance
Korn, Ralf; Kroisandt, Gerald
2010-01-01
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
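A minimal example of the standard simulation tools such a text builds on: plain Monte Carlo pricing of a European call under geometric Brownian motion. All market parameters below are illustrative; the multilevel and variance-reduced methods the book covers refine exactly this kind of estimator.

```python
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=42):
    """Plain Monte Carlo price of a European call under geometric Brownian motion."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)            # risk-free discount factor
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal stock price under the risk-neutral measure.
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)      # call payoff
    return disc * total / n_paths

price = mc_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n_paths=200_000)
```

For these parameters the Black-Scholes closed form gives about 10.45, so the Monte Carlo estimate should land close to that, within sampling error.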
Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods
Soroush, Masoud; Weinberger, Charles B.
2010-01-01
This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…
Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...
Solving the nuclear shell model with an algebraic method
International Nuclear Information System (INIS)
Feng, D.H.; Pan, X.W.; Guidry, M.
1997-01-01
We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
A copula method for modeling directional dependence of genes
Directory of Open Access Journals (Sweden)
Park Changyi
2008-05-01
Background: Genes interact with each other as basic building blocks of life, forming a complicated network. The relationship between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, study on gene networking is now possible. In recent years, there has been an increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time. This problem is due to the interactive feature of the method that requires a large search space. Since fitting a model by using copulas does not require iterations, elicitation of priors, or complicated calculations of posterior distributions, the need for reference to extensive search spaces can be eliminated, leading to manageable computational efforts. The Bayesian network approach produces a discrete expression of conditional probabilities. Discreteness of the characteristics is not required in the copula approach, which involves the use of a uniform representation of continuous random variables. Our method is able to overcome the limitation of the Bayesian network method for gene-gene interaction, i.e. information loss due to binary transformation. Results: We analyzed the gene interactions for two gene data sets (one group is eight histone genes and the other group is 19 genes which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation-sensitive genes, repair-related genes, a replication protein A encoding gene, a DNA replication initiation factor, a securin gene, a nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. We have compared
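A reduced sketch of the copula workflow: transform each margin to pseudo-observations via empirical CDF ranks, then fit a copula on that uniform scale. The example below fits a Gaussian copula to synthetic "expression" data; the paper's actual measure of directional dependence is more specific than this symmetric correlation.

```python
import numpy as np
from scipy import stats

# Synthetic expression levels for two genes with monotone dependence (illustrative).
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(scale=0.6, size=500)

# Pseudo-observations: map each margin into (0, 1) by its empirical CDF ranks.
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)

# Fit a Gaussian copula: correlation of the normal scores of the pseudo-observations.
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
```

Because only ranks enter, the fit is invariant to monotone transformations of each margin, which is what lets copula methods sidestep the binary discretization the abstract criticizes.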
Conversion Method of the Balance Test Results in Open Jet Tunnel on the Free Flow Conditions
Directory of Open Access Journals (Sweden)
V. T. Bui
2015-01-01
The paper considers the problem of sizing a model and converting balance test results in a low-speed open-jet wind tunnel to free-flow conditions. The ANSYS Fluent commercial code performs flow model calculations in the test section and in the free flow, and the ANSYS ICEM CFD module is used to provide grid generation. A structured grid is generated for the free flow and an unstructured one for the test section. The changes of the aerodynamic coefficients are determined at different values of the blockage factor for segmental-conical and hemisphere-cylinder-cone model shapes. The blockage factor values are found at which the interference between the test section and the model can be neglected. The paper presents a technique to convert the wind tunnel test results to free-flow conditions.
How the RNA isolation method can affect microRNA microarray results
DEFF Research Database (Denmark)
Podolska, Agnieszka; Kaczkowski, Bogumil; Litman, Thomas
2011-01-01
The quality of RNA is crucial in gene expression experiments. RNA degradation interferes with the measurement of gene expression, and in this context, microRNA quantification can lead to an incorrect estimation. In the present study, two different RNA isolation methods were used to perform microRNA microarray analysis on porcine brain tissue. One method is a phenol-guanidine isothiocyanate-based procedure that permits isolation of total RNA. The second method, miRVana™ microRNA isolation, is column based and recovers the small RNA fraction alone. We found that microarray analyses give different results that depend on the RNA fraction used, in particular because some microRNAs appear very sensitive to the RNA isolation method. We conclude that precautions need to be taken when comparing microarray studies based on RNA isolated with different methods.
Cazorla, Constantin; Morel, Thierry; Nazé, Yaël; Rauw, Gregor; Semaan, Thierry; Daflon, Simone; Oey, M. S.
2017-07-01
Aims: Recent observations have challenged our understanding of rotational mixing in massive stars by revealing a population of fast-rotating objects with apparently normal surface nitrogen abundances. However, several questions have arisen because of a number of issues, which have rendered a reinvestigation necessary; these issues include the presence of numerous upper limits for the nitrogen abundance, unknown multiplicity status, and a mix of stars with different physical properties, such as their mass and evolutionary state, which are known to control the amount of rotational mixing. Methods: We have carefully selected a large sample of bright, fast-rotating early-type stars of our Galaxy (40 objects with spectral types between B0.5 and O4). Their high-quality, high-resolution optical spectra were then analysed with the stellar atmosphere modelling codes DETAIL/SURFACE or CMFGEN, depending on the temperature of the target. Several internal and external checks were performed to validate our methods; notably, we compared our results with literature data for some well-known objects, studied the effect of gravity darkening, and compared the results provided by the two codes for stars amenable to both analyses. Furthermore, we studied the radial velocities of the stars to assess their binarity. Results: This first part of our study presents our methods and provides the derived stellar parameters, He and CNO abundances, and the multiplicity status of every star in the sample. It is the first time that He and CNO abundances of such a large number of Galactic massive fast rotators have been determined in a homogeneous way. Based on observations obtained with the Heidelberg Extended Range Optical Spectrograph (HEROS) at the Telescopio Internacional de Guanajuato (TIGRE), with the SOPHIE échelle spectrograph at the Haute-Provence Observatory (OHP; Institut Pytheas; CNRS, France), and with the Magellan Inamori Kyocera Echelle (MIKE) spectrograph at the Magellan II Clay telescope
A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model
Directory of Open Access Journals (Sweden)
WenBo Xie
2014-01-01
Full Text Available The recent decade has witnessed an increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from an overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found its application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method by taking the user-item collecting behavior as a two-step process: every user first becomes a member of one latent user-group at a certain probability, and each user-group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. The experimental results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits a competitive performance on precision, coverage, and diversity in comparison with the other four typical recommendation methods. Moreover, we present an approximate strategy to reduce the computing complexity of our method with a slight degradation of the performance.
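The two-step process described here can be illustrated directly as P(item | user) = Σ_g P(g | user) · P(item | g). In the paper these probabilities are approximated by Gibbs sampling; the dictionaries below are made-up placeholders (hypothetical user, items, and group probabilities) just to show the scoring step.

```python
# Placeholder distributions standing in for Gibbs-sampling estimates:
# a user belongs to latent groups with certain probabilities, and each
# group collects items with its own probabilities.
p_group_given_user = {"alice": {0: 0.7, 1: 0.3}}
p_item_given_group = {0: {"film_a": 0.6, "film_b": 0.4},
                      1: {"film_a": 0.1, "film_b": 0.9}}

def score(user, item):
    # Marginalize over latent user-groups: sum_g P(g|user) * P(item|g).
    return sum(pg * p_item_given_group[g].get(item, 0.0)
               for g, pg in p_group_given_user[user].items())

print(score("alice", "film_a"))  # 0.7*0.6 + 0.3*0.1 = 0.45
print(score("alice", "film_b"))  # 0.7*0.4 + 0.3*0.9 = 0.55
```

Ranking items by this marginal score is what the recommendation step reduces to once the latent probabilities have been estimated.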
International Nuclear Information System (INIS)
Bashir, T.
1996-01-01
The introduction of solid phase separation techniques is an important improvement in radioimmunoassays and immunoradiometric assays. The magnetic particle solid phase method has additional advantages over others, as the separation is rapid and centrifugation is not required. Three types of magnetic particles have been studied in T4 RIA and the results have been compared with commercial kits and other established methods. (author). 4 refs, 9 figs, 2 tabs
Sedukhin, V. V.; Anikeev, A. N.; Chumanov, I. V.
2017-11-01
This work examines a method for optimizing the hardening of the working layer of parts that operate under highly abrasive conditions: a blend of refractory WC and TiC particles (70/30 wt.%), prepared in advance, is applied to a polystyrene pattern in the casting mould. After the metal is poured into the mould and held for crystallization, the samples are studied. Examination of the macro- and microstructure of the resulting samples shows that the thickness and structure of the hardened layer depend on the duration of the interaction between the carbide blend and the liquid metal. Under identical conditions, dispersed particles of different types interact with the matrix metal in different ways. The abrasive wear resistance of the resulting materials was tested in the laboratory by the residual-mass method. The wear resistance tests showed that producing a hard coating from a blend of tungsten carbide and titanium carbide, applied to the surface of the foam polystyrene pattern before moulding, yields parts whose surface wear resistance is 2.5 times higher than that of uncoated parts made of the same steel, while the energy required to convert a unit mass of the hardened layer into powder is 2.06 times higher than for the uncoated material.
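The residual-mass comparison above reduces to a simple ratio: the reference specimen's mass loss divided by the coated specimen's mass loss. A minimal sketch, with illustrative mass-loss figures (not the paper's data):

```python
def relative_wear_resistance(mass_loss_reference_g, mass_loss_sample_g):
    # Residual-mass method: the specimen that loses less mass under the same
    # abrasive exposure is proportionally more wear resistant.
    return mass_loss_reference_g / mass_loss_sample_g

# Hypothetical masses chosen so the ratio matches the reported 2.5x figure.
print(relative_wear_resistance(0.50, 0.20))  # 2.5
```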
Results of modeling advanced BWR fuel designs using CASMO-4
International Nuclear Information System (INIS)
Knott, D.; Edenius, M.
1996-01-01
Advanced BWR fuel designs from General Electric, Siemens and ABB-Atom have been analyzed using CASMO-4 and compared against fission rate distributions and control rod worths from MCNP. Included in the analysis were fuel storage rack configurations and proposed mixed oxide (MOX) designs. Results are also presented from several cycles of SIMULATE-3 core follow analysis, using nodal data generated by CASMO-4, for cycles in transition from 8x8 designs to advanced fuel designs. (author)
Modeling granular phosphor screens by Monte Carlo methods
International Nuclear Information System (INIS)
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-01-01
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd2O2S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd2O2S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
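A Monte Carlo model of this kind can be caricatured in one dimension: photons take exponential free paths and are either scattered or absorbed at each interaction. The mean free path and single-scattering albedo below are illustrative stand-ins for the Mie-derived cross sections used in the study, chosen only to show the thickness dependence of light escape.

```python
import random

def escape_fraction(thickness_um, mfp_um=5.0, albedo=0.98,
                    n_photons=5000, seed=1):
    # 1-D random walk of optical photons created mid-screen; a photon either
    # reaches a surface (escapes) or is absorbed with probability 1 - albedo
    # at each scattering event. All parameter values are hypothetical.
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        z, direction = thickness_um / 2.0, 1
        while True:
            z += direction * rng.expovariate(1.0 / mfp_um)  # exponential free flight
            if z <= 0.0 or z >= thickness_um:
                escaped += 1                                 # left the screen
                break
            if rng.random() > albedo:                        # absorbed in a grain
                break
            direction = rng.choice((1, -1))                  # isotropic (1-D) scatter
    return escaped / n_photons

print(escape_fraction(100.0))  # thicker screen: more scatterings, more absorption
print(escape_fraction(30.0))   # thinner screen: higher escape fraction
```

Even this toy version reproduces the qualitative trade-off the study quantifies: thinner or less scattering-dominated screens emit more light, at the cost of x-ray absorption efficiency.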
First results from the International Urban Energy Balance Model Comparison: Model Complexity
Blackett, M.; Grimmond, S.; Best, M.
2009-04-01
A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but, to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes) which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
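The simplest complexity class mentioned, a slab scheme, closes the surface energy balance Q* + Q_F = Q_H + Q_E + ΔQ_S, with storage as the residual term. A minimal sketch with illustrative flux values (W m⁻², not data from the comparison):

```python
def storage_residual(q_star, q_f, q_h, q_e):
    # Net radiation (Q*) plus anthropogenic heat (Q_F) is partitioned into
    # sensible (Q_H), latent (Q_E), and storage (dQ_S) fluxes; in a slab
    # scheme the storage term is simply the residual.
    return q_star + q_f - q_h - q_e

# Hypothetical midday fluxes for an urban surface, all in W m^-2.
print(storage_residual(q_star=450.0, q_f=30.0, q_h=220.0, q_e=90.0))  # 170.0
```

More complex schemes in the comparison resolve this same budget facet by facet or layer by layer rather than for a single slab.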