International Nuclear Information System (INIS)
Barton, C.C.; Larsen, E.; Page, W.R.; Howard, T.M.
1993-01-01
Fractures have been characterized for fluid-flow, geomechanical, and paleostress modeling at three localities in the vicinity of drill hole USW G-4 at Yucca Mountain in southwestern Nevada. A method for fracture characterization is introduced that integrates mapping fracture-trace networks and quantifying eight fracture parameters: trace length, orientation, connectivity, aperture, roughness, shear offset, trace-length density, and mineralization. A complex network of fractures was exposed on three 214- to 260-m² pavements cleared of debris in the upper lithophysal unit of the Tiva Canyon Member of the Miocene Paintbrush Tuff. The pavements are two-dimensional sections through the three-dimensional network of strata-bound fractures. All fractures with trace lengths greater than 0.2 m were mapped and studied.
Energy Technology Data Exchange (ETDEWEB)
Sausse, J.
1998-10-15
The space-time evolution of crack permeability in the Brezouard granite during fluid-rock interactions is modelled. The two permeability models used (geometrical and statistical) remain strongly dependent on how the characteristic aperture of a fracture or fissure is defined. Real fractures in a rock mass have walls that are neither parallel nor flat and are thus locally in contact. The study of these natural fracture surfaces at the micro- and macroscopic scales is complemented by a theoretical model of their hydro-mechanical behaviour. This work shows the influence of surface roughness on fluid flow as well as on the propagation of alteration. These fractures formed and transmitted fluids under a particular tectonic regime that controlled their orientation. Numerous quartz veins in the Soultz granite were opened and sealed during the Oligocene extension. The fluid pressures characteristic of these opening-sealing stages are quantified by fluid inclusion studies; the inclusions are hosted in the secondary quartz that seals the veins. A new method of paleostress quantification is proposed, based on knowledge of this fluid pressure. It takes into account (i) the geometrical distribution of the vein poles, (ii) empirical rupture criteria, and (iii) the fluid pressures. (author)
Full paleostress tensor reconstruction: case study of the Panasqueira Mine, Portugal.
Pascal, C.; Jaques Ribeiro, L. M.
2017-12-01
Paleostress tensor restoration methods are traditionally limited to reconstructing geometrical parameters and are unable to resolve stress magnitudes. Based on previous studies we further developed a methodology to restore full paleostress tensors. We concentrated on inversion of Mode I fractures and acquired data in Panasqueira Mine, Portugal, where optimal 3D exposures of mineralised quartz veins can be found. To carry out full paleostress restoration we needed to determine (1) pore (paleo)pressure and (2) vein attitudes. To these aims we conducted an extensive fluid inclusion study to derive fluid isochores from the quartz of the studied veins. To further constrain P-T conditions, we combined these isochores with crystallisation temperatures derived from geochemical analyses of coeval arsenopyrite. We also applied the sphalerite geobarometer and considered two other independent pressure indicators. Our results point to pore pressures of ∼300 MPa and formation depths of ∼10 km. As a second step, we measured 600 subhorizontal quartz veins in all the levels of the mine. The inversion of the vein attitudes allowed us to reconstruct the orientations of the principal stress axes, the unscaled Mohr circle and the relative pore pressure. After merging these results with the previously obtained absolute pore pressure, we reconstructed the six parameters of the paleostress tensor.
Jaques, Luís; Pascal, Christophe
2017-09-01
Paleostress tensor restoration methods are traditionally limited to reconstructing geometrical parameters and are unable to resolve stress magnitudes. Based on previous studies we further developed a methodology to restore full paleostress tensors. We concentrated on inversion of Mode I fractures and acquired data in Panasqueira Mine, Portugal, where optimal exposures of mineralized quartz veins can be found. To carry out full paleostress restoration we needed to determine (1) pore (paleo)pressure and (2) vein attitudes. The present contribution focuses specifically on the determination of pore pressure. To these aims we conducted an extensive fluid inclusion study to derive fluid isochores from the quartz of the studied veins. To constrain P-T conditions, we combined these isochores with crystallisation temperatures derived from geochemical analyses of coeval arsenopyrite. We also applied the sphalerite geobarometer and considered two other independent pressure indicators. Our results point to pore pressures of ∼300 MPa and formation depths of ∼10 km. Such formation depths are in good agreement with the regional geological evolution. The obtained pore pressure will be merged with vein inversion results, in order to achieve full paleostress tensor restoration, in a forthcoming companion paper.
Paleostress Analysis with Reflection Seismic Data: Example from the Songliao Basin, Northeast China
Liu, G.; Persaud, P.; Zhang, Y.
2017-12-01
Paleostress inversion using fault-slip data is currently a well-established approach. However, the deformation history contained in folds has not yet been fully utilized to determine the paleostress field. By applying a 2D FFT-based algorithm to a structure or isopach map derived from reflection seismic data, we find a new way of exploiting the information preserved in folds to determine the paleostress. Our method requires that the strata have a large areal extent and are well preserved. After pre-processing the maps, we find that in the frequency-wavenumber (F-K) domain, folds with similar strikes are grouped into spectrum belts. Each belt parallels the short axis of the fold group and can therefore indicate the direction of the associated maximum horizontal stress. Some information on the relative chronology of stresses can be deduced by comparing the structure and isopach spectrum maps, e.g., if the structure spectrum map has one more spectrum belt than that of the isopach map (an approximate paleo-structure map of the corresponding stratum), we can conclude that the indicated stress postdated the deposition of the stratum. We selected three Late Cretaceous strata from a 3D seismic survey located in the intracontinental Songliao Basin, northeast China. The Songliao Basin has experienced four episodes of deformation: mantle upwelling, rifting, postrift thermal subsidence and structural inversion (Feng et al., 2009). The selected strata were deposited during the third stage. Three structure and two isopach maps were decomposed in the F-K domain. Spectral analysis of the lower isopach map shows eight paleostress directions. We also identify a ninth paleostress in addition to the previous eight from the structure maps and the upper isopach map. The eight stress directions that exist in both the isopach and structure maps may have been active throughout the time period spanned by the strata. We interpret the ninth paleostress as being active after the deposition of the
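The F-K step described above can be sketched numerically. The following is a minimal illustration, not the authors' code: the grid size, the coordinate convention (rows = north, columns = east) and the synthetic fold field are all assumptions. It moves a gridded structure map into the frequency-wavenumber domain with a 2D FFT and reads off the azimuth of the dominant spectral peak, which parallels the short axis of the fold group:

```python
import numpy as np

def dominant_fold_direction(grid, dx=1.0):
    """Azimuth (degrees, mod 180) of the dominant wavenumber of a
    gridded structure map: the direction across the fold axes, i.e.
    the short axis of the fold group."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(grid - grid.mean())))
    ny, nx = grid.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    KX, KY = np.meshgrid(kx, ky)
    spec[(KX == 0) & (KY == 0)] = 0.0        # suppress the DC term
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    # azimuth measured from +y (north) toward +x (east)
    return np.degrees(np.arctan2(KX[iy, ix], KY[iy, ix])) % 180.0

# Synthetic map: folds striking E-W, so the spectral belt -- and the
# inferred maximum horizontal stress -- trends N-S (azimuth 0).
y = np.arange(128)[:, None] * np.ones((1, 128))
folds = np.sin(2 * np.pi * y / 16.0)
print(dominant_fold_direction(folds))  # -> 0.0
```

A real map would of course contain several superimposed belts, so in practice the whole shifted spectrum would be inspected rather than only its single strongest peak.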
Traforti, Anna; Zampieri, Dario; Massironi, Matteo; Viola, Giulio; Alvarado, Patricia; Di Toro, Giulio
2016-04-01
The Eastern Sierras Pampeanas of central Argentina are composed of a series of basement-cored ranges, located in the Andean foreland c. 600 km east of the Andean Cordillera. Although uplift of the ranges is partly attributed to the regional Neogene evolution (Ramos et al. 2002), many questions remain as to the timing and style of deformation. In fact, the Eastern Sierras Pampeanas show compelling evidence of a long-lasting brittle history (spanning the Early Carboniferous to Present time), characterised by several deformation events reflecting different tectonic regimes. Each deformation phase resulted in further strain increments accommodated by reactivation of inherited structures and rheological anisotropies (Martino 2003). In the framework of such a polyphase brittle tectonic evolution affecting highly anisotropic basement rocks, the application of paleostress inversion methods, though powerful, suffers from some shortcomings, such as the likely heterogeneous character of fault-slip datasets and the possible reactivation of even highly misoriented structures, and thus requires careful analysis. The challenge is to gather sufficient fault-slip data to develop a proper understanding of the regional evolution. This is done by identifying internally consistent fault and fracture subsets (associated with distinct stress states on the basis of their geometric and kinematic compatibility) in order to generate a chronologically constrained evolutionary conceptual model. Based on large fault-slip datasets collected in the Sierras de Cordoba (Eastern Sierras Pampeanas), reduced stress tensors have been generated and interpreted as part of an evolutionary model by considering the obtained results against: (i) existing K-Ar illite ages of fault gouges in the study area (Bense et al. 2013), (ii) the nature and orientation of pre-existing anisotropies and (iii) the present-day stress field due to the convergence of the Nazca and South America plates (main shortening
On the Paleostress Analysis Using Kinematic Indicators Found on an Oriented Core
Czech Academy of Sciences Publication Activity Database
Nováková, Lucie; Brož, Milan
2014-01-01
Roč. 2, č. 2 (2014), s. 76-83 ISSN 2331-9593 R&D Projects: GA MPO(CZ) FR-TI1/367 Institutional support: RVO:67985891 Keywords: paleostress analysis * borehole core * kinematic indicators * bias sampling * recent stress Subject RIV: DC - Seismology, Volcanology, Earth Structure http://www.hrpub.org/download/20140105/UJG6-13901884.pdf
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
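A simple estimation equation of the kind described above can be made concrete. The sketch below fits annual recharge as a linear function of precipitation above a threshold; the data pairs and coefficients are invented for illustration and are not taken from the chapter:

```python
import numpy as np

# Hypothetical annual precipitation P and recharge R pairs (mm/yr).
P = np.array([400.0, 550.0, 700.0, 850.0, 1000.0])
R = np.array([10.0, 55.0, 100.0, 145.0, 190.0])

# Fit R = a * (P - P0): recharge is zero below a precipitation
# threshold P0 and grows linearly above it.
a, b = np.polyfit(P, R, 1)   # least-squares line R ~ a*P + b
P0 = -b / a                  # threshold where the fitted line crosses zero

def annual_recharge(precip_mm):
    """Empirical recharge estimate (mm/yr); zero below the threshold."""
    return max(0.0, a * (precip_mm - P0))

print(round(float(a), 2), round(float(P0)))   # -> 0.3 367
print(round(annual_recharge(800.0)))          # -> 130
```

Such a fitted equation is only as transferable as the data behind it, which is why the chapter contrasts these empirical forms with physics-based water-budget models.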
Evolution of paleostress fields and brittle deformation in Hronov-Poříčí Fault Zone, Bohemian Massif
Czech Academy of Sciences Publication Activity Database
Nováková, Lucie
2014-01-01
Roč. 58, č. 2 (2014), s. 269-288 ISSN 0039-3169 R&D Projects: GA ČR GA205/09/1244 Institutional support: RVO:67985891 Keywords: paleostress analysis * tectonic history * faulting * active tectonics Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 0.806, year: 2014
Czech Academy of Sciences Publication Activity Database
Coubal, Miroslav; Málek, Jiří; Adamovič, Jiří; Štěpančíková, Petra
2015-01-01
Roč. 87, July 1 (2015), s. 26-49 ISSN 0264-3707 R&D Projects: GA ČR GAP210/12/0573 Institutional support: RVO:67985831 ; RVO:67985891 Keywords : paleostress * fault kinematics * Lusatian Fault Belt * Elbe fault system * Bohemian Massif * Alpine foreland Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.926, year: 2015
Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie
2015-04-01
Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user in defining the best stress tensor solution, among others. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution that explains the maximum number of twinned planes and the whole set of untwinned planes is reached. This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence on the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning as well as to evaluate the
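The scoring step at the heart of such an inversion can be sketched as follows. This is a simplified stand-in, not Etchecopar's actual implementation: the 10 MPa critical resolved shear stress is a value commonly assumed for calcite e-twinning, and the example tensor and planes are invented. A candidate tensor is judged by whether the shear stress resolved along each twinning direction reaches the CRSS on twinned planes and stays below it on untwinned planes:

```python
import numpy as np

CRSS = 10.0  # MPa -- commonly assumed critical resolved shear stress for calcite

def resolved_tau(sigma, n, g):
    """Shear stress resolved on a twin plane with unit normal n along
    the unit twinning direction g (g lies in the plane, g . n = 0)."""
    return g @ sigma @ n

def score(sigma, twinned, untwinned):
    """Fraction of planes a candidate tensor explains: twinned planes
    should reach the CRSS, untwinned planes should stay below it.
    (A crude stand-in for Etchecopar's penalization function.)"""
    ok_t = sum(resolved_tau(sigma, n, g) >= CRSS for n, g in twinned)
    ok_u = sum(resolved_tau(sigma, n, g) < CRSS for n, g in untwinned)
    return (ok_t + ok_u) / (len(twinned) + len(untwinned))

# Deviatoric sketch tensor: uniaxial compression of 60 MPa along x.
sigma = np.diag([60.0, 0.0, 0.0])
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # plane at 45 degrees to sigma1
g = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # in-plane twinning direction
n2 = np.array([1.0, 0.0, 0.0])               # plane normal to sigma1: no shear
g2 = np.array([0.0, 1.0, 0.0])

print(round(resolved_tau(sigma, n, g), 6))   # -> 30.0 (half the differential stress)
print(score(sigma, [(n, g)], [(n2, g2)]))    # -> 1.0
```

The exploration of the 4-parameter hypersphere described in the abstract amounts to evaluating such a score over many candidate orientations and stress ratios and keeping the clusters of best-scoring tensors.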
Sasvári, Ágoston; Baharev, Ali
2014-05-01
The aim of this work was to create an open-source, cross-platform application to process brittle structural geological data with seven paleostress inversion algorithms published by different authors and formerly not available within a single desktop application. The tool facilitates separate processing and plotting of different localities, data types and user-defined groups, using the same single input file. Simplified data input is supported, requiring as small an amount of data as possible. Data rotation to correct for bedding tilt, rotation by the paleomagnetic declination, and k-means clustering are available. Calculation and visualization of the RUP and ANG stress estimators, display of resolved shear directions, and Mohr-circle stress visualization are available. RGB-colored vector graphical outputs are automatically generated in Encapsulated PostScript and Portable Document Format. Stereographic displays with great-circle or pole-point plots, equal-area or equal-angle nets, and upper- or lower-hemisphere projections are implemented. Rose plots displaying dip direction or strike, with the dip-angle distribution of the input data set, are available. This tool is ideal for preliminary data interpretation in the field (quick processing and visualization in seconds); the implemented methods can be regularly used in daily academic and industrial work as well. The authors' goal was to create an open-source and self-contained desktop application that does not require any additional third-party framework (such as .NET) or the Java Virtual Machine. The software has a clear and highly modular structure enabling good code portability, easy maintainability, reusability and extensibility. A Windows installer is publicly available and the program is also fully functional on Linux. The Mac OS X port should be feasible with minimal effort. The install file with test and demo data sets, detailed manual, and links to the GitHub repositories are available on the regularly updated website www.sg2ps.eu.
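One of the listed operations, rotation to correct for bedding tilt, can be sketched with a Rodrigues rotation. This is an illustration of the idea, not SG2PS source code; the coordinate frame and the 30-degree example are assumptions:

```python
import numpy as np

def rotate_about(axis, angle_deg, v):
    """Rodrigues rotation of vector v about a unit axis by angle_deg."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    t = np.radians(angle_deg)
    return (v * np.cos(t) + np.cross(a, v) * np.sin(t)
            + a * (a @ v) * (1.0 - np.cos(t)))

# Assumed frame: x = east, y = north, z = up. Bedding strikes N-S and
# dips 30 degrees east, so its pole leans 30 degrees toward the east.
# Untilting rotates every measured orientation by -30 degrees about
# the strike axis, restoring the bedding pole to vertical.
pole = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
restored = rotate_about([0.0, 1.0, 0.0], -30.0, pole)
print(np.round(restored, 6))  # -> [0. 0. 1.]
```

Applying the same rotation to every fault plane and slickenline in a dataset restores the pre-tilting geometry before inversion.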
Minor, Scott A.; Hudson, Mark R.; Caine, Jonathan S.; Thompson, Ren A.
2013-01-01
The structural geometry of transfer and accommodation zones that relay strain between extensional domains in rifted crust has been addressed in many studies over the past 30 years. However, details of the kinematics of deformation and related stress changes within these zones have received relatively little attention. In this study we conduct the first-ever systematic, multi-basin fault-slip measurement campaign within the late Cenozoic Rio Grande rift of northern New Mexico to address the mechanisms and causes of extensional strain transfer associated with a broad accommodation zone. Numerous (562) kinematic measurements were collected at fault exposures within and adjacent to the NE-trending Santo Domingo Basin accommodation zone, or relay, which structurally links the N-trending, right-stepping en echelon Albuquerque and Española rift basins. The following observations are made based on these fault measurements and paleostresses computed from them. (1) Compared to the typical northerly striking normal to normal-oblique faults in the rift basins to the north and south, normal-oblique faults are broadly distributed within two merging, NE-trending zones on the northwest and southeast sides of the Santo Domingo Basin. (2) Faults in these zones have greater dispersion of rake values and fault strikes, greater dextral strike-slip components over a wide northerly strike range, and small to moderate clockwise deflections of their tips. (3) Relative-age relations among fault surfaces and slickenlines used to compute reduced stress tensors suggest that far-field, ~E-W–trending σ3 stress trajectories were perturbed 45° to 90° clockwise into NW to N trends within the Santo Domingo zones. (4) Fault-stratigraphic age relations constrain the stress perturbations to the later stages of rifting, possibly as late as 2.7–1.1 Ma. Our fault observations and previous paleomagnetic evidence of post–2.7 Ma counterclockwise vertical-axis rotations are consistent with increased
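Fault-slip measurements are related to a reduced stress tensor through the Wallace-Bott hypothesis: slip is assumed parallel to the shear traction resolved on the fault plane. The sketch below is a generic illustration with an assumed coordinate frame and an invented example tensor, not the authors' inversion code; an inversion would minimize such misfit angles over all measured faults:

```python
import numpy as np

def predicted_slip(sigma, n):
    """Wallace-Bott: predicted slip parallels the shear component of
    the traction resolved on the fault plane with unit normal n."""
    t = sigma @ n                    # traction vector on the plane
    shear = t - (t @ n) * n          # subtract the normal component
    return shear / np.linalg.norm(shear)

def misfit_deg(sigma, n, slick):
    """Angle between predicted slip and an observed slickenline; lines
    are treated as bidirectional (abs) to sidestep sense-of-slip
    sign conventions."""
    c = np.clip(abs(predicted_slip(sigma, n) @ slick), 0.0, 1.0)
    return np.degrees(np.arccos(c))

# Sketch of ~E-W extension: x = east, y = north, z = up (assumed frame);
# sigma1 = 30 MPa vertical, sigma3 = 0 trending E-W.
sigma = np.diag([0.0, 15.0, 30.0])
# Normal fault dipping 60 degrees east; pole points up and to the east.
n = np.array([np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))])
slick = np.array([np.cos(np.radians(60)), 0.0, -np.sin(np.radians(60))])  # down-dip striae

print(round(misfit_deg(sigma, n, slick), 3))  # -> 0.0
```

A down-dip slickenline on an east-dipping normal fault is perfectly consistent with this E-W extensional tensor, hence the zero misfit; oblique striae of the kind reported within the Santo Domingo zones would yield non-zero misfits under the far-field tensor and smaller ones under a locally rotated tensor.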
Model Correction Factor Method
DEFF Research Database (Denmark)
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...
International Nuclear Information System (INIS)
Mahaffy, J.H.; Liles, D.R.; Bott, T.F.
1981-01-01
The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents.
Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc
2018-01-01
The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.
Explorative methods in linear models
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2004-01-01
The author has developed the H-method of mathematical modeling that builds up the model in parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
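As a minimal sketch of the Ridge Regression mentioned above (the synthetic data and the closed-form solver are illustrative assumptions, not the H-method itself), the penalized coefficients follow from a regularized normal equation:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form Ridge Regression: beta = (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Synthetic regression problem with known coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=100)

beta_ols = ridge(X, y, 0.0)      # lam = 0 reduces to ordinary least squares
beta_reg = ridge(X, y, 100.0)    # heavier penalty shrinks the coefficients
print(np.linalg.norm(beta_reg) < np.linalg.norm(beta_ols))  # -> True
```

Plotting the coefficient paths as the penalty grows is exactly the kind of graphic diagnostic the abstract refers to.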
Models and methods in thermoluminescence
International Nuclear Information System (INIS)
Furetta, C.
2005-01-01
This work is based on a conference lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins, Garlick-Gibson, Adirovitch, May-Partridge and Braunlich-Scharman models, mixed first- and second-order kinetics, and methods for evaluating the kinetic parameters such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on analysis of the glow curve shape. (Author)
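The first of these models can be made concrete. The sketch below numerically evaluates the Randall-Wilkins first-order glow curve, I(T) = n0 s exp(-E/kT) exp(-(s/beta) ∫ exp(-E/kT') dT'); the trap depth, frequency factor and heating rate are illustrative assumptions, not values from the lecture:

```python
import numpy as np

def randall_wilkins(T, E=1.0, s=1e12, beta=1.0, n0=1.0):
    """First-order (Randall-Wilkins) glow curve I(T) for trap depth E (eV),
    frequency factor s (1/s), linear heating rate beta (K/s) and n0
    initially trapped charges (arbitrary units)."""
    k = 8.617e-5  # Boltzmann constant, eV/K
    boltz = np.exp(-E / (k * T))
    # cumulative trapezoidal integral of exp(-E/kT') dT' from T[0] to T
    dT = np.diff(T)
    integ = np.concatenate(([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * dT)))
    return n0 * s * boltz * np.exp(-(s / beta) * integ)

T = np.linspace(300.0, 500.0, 2001)  # heating from 300 K to 500 K
I = randall_wilkins(T)
print(round(float(T[np.argmax(I)]), 1))  # glow-peak temperature in K
```

Varying beta and tracking the shift of the peak temperature is the basis of the various heating rates method mentioned in the abstract.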
Models and methods in thermoluminescence
Energy Technology Data Exchange (ETDEWEB)
Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)
2005-07-01
This work is based on a conference lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins, Garlick-Gibson, Adirovitch, May-Partridge and Braunlich-Scharman models, mixed first- and second-order kinetics, and methods for evaluating the kinetic parameters such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on analysis of the glow curve shape. (Author)
Multivariate analysis: models and method
International Nuclear Information System (INIS)
Sanz Perucha, J.
1990-01-01
Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. A final goal is decision making. The paper describes the models and methods of multivariate analysis.
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
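A toy version of the critical-point determination can be sketched in plain Python. This is an illustration of the idea, not the patented apparatus; treating articulation points (vertices whose removal disconnects the graph) as critical points of failure is one simple interpretation of the evaluation described above:

```python
from collections import defaultdict

def critical_points(edges):
    """Vertices whose removal disconnects the graph (articulation
    points) -- one way to flag critical points of failure."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    disc, low, crit, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for w in graph[u]:
            if w == parent:
                continue
            if w in disc:
                low[u] = min(low[u], disc[w])
            else:
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= disc[u]:
                    crit.add(u)
        if parent is None and children > 1:
            crit.add(u)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return crit

# Two leaf substations joined to the rest only through "hub":
# removing hub splits the network, so hub is a critical point.
grid = [("a", "hub"), ("b", "hub"), ("hub", "c"), ("c", "d"), ("d", "hub")]
print(sorted(critical_points(grid)))  # -> ['hub']
```

Note that nodes c and d are not flagged: the c-d-hub loop provides redundancy, which is exactly the distinction a vulnerability evaluation is after. The patent's non-terminating vertices (vulnerabilities along an edge) could be modeled by splitting an edge with an extra vertex.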
ADOxx Modelling Method Conceptualization Environment
Directory of Open Access Journals (Sweden)
Nesat Efendioglu
2017-04-01
The importance of Modelling Methods Engineering is rising along with the importance of domain-specific languages (DSL) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often underestimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated within use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.
Diverse methods for integrable models
Fehér, G.
2017-01-01
This thesis is centered on three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.
Iterative method for Amado's model
International Nuclear Information System (INIS)
Tomio, L.
1980-01-01
A recently proposed iterative method for solving scattering integral equations is applied to the spin doublet and spin quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained with the conventional Padé technique. (Author) [pt
Variational methods in molecular modeling
2017-01-01
This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems, including inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1993-01-01
This report documents progress to date under a three-year contract for developing ''Methods for Testing Transport Models.'' The work described includes (1) choice of best methods for producing ''code emulators'' for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1991-01-01
Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.
Network modelling methods for FMRI.
Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W
2011-01-15
There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
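The partial-correlation approach the study finds effective can be sketched on a toy three-node network (the network, noise levels and sample size here are invented for illustration): partial correlations are read off the inverse covariance (precision) matrix, and they suppress the indirect 1→3 link that full correlation reports:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000                                  # number of time points (illustrative)
x1 = rng.normal(size=T)
x2 = 0.8 * x1 + rng.normal(size=T)        # edge 1 -> 2
x3 = 0.8 * x2 + rng.normal(size=T)        # edge 2 -> 3; no direct 1 -> 3 edge
X = np.column_stack([x1, x2, x3])

P = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)                  # partial correlation matrix
np.fill_diagonal(pcorr, 1.0)

full = np.corrcoef(X, rowvar=False)
# Full correlation sees an (indirect) 1-3 link; partial correlation does not.
print(f"full corr(1,3) = {full[0, 2]:.2f}, partial corr(1,3) = {pcorr[0, 2]:.2f}")
```

The direct edges (1-2 and 2-3) survive in both measures; only the indirect association vanishes after conditioning on the intermediate node.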
Analytical methods used at model facility
International Nuclear Information System (INIS)
Wing, N.S.
1984-01-01
A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy
Energy models: methods and trends
Energy Technology Data Exchange (ETDEWEB)
Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria); Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany); Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)
1997-12-31
Energy, environmental, and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between three reasons for new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning. 2 figs., 19 refs.
Energy models: methods and trends
International Nuclear Information System (INIS)
Reuter, A.; Kuehner, R.; Wohlgemuth, N.
1996-01-01
Energy, environmental, and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between three reasons for new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.
Structural modeling techniques by finite element method
International Nuclear Information System (INIS)
Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong
1991-01-01
This book covers structural modeling techniques by the finite element method. Contents: Chapter 1, Finite Element Idealization — introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and antisymmetry; modeling guidelines; local analysis; example; references. Chapter 2, Static Analysis — structural geometry; finite element models; analysis procedure; modeling guidelines; references. Chapter 3, Dynamic Analysis — models for dynamic analysis; dynamic analysis procedures; modeling guidelines.
Computer-Aided Modelling Methods and Tools
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...
Mechatronic Systems Design Methods, Models, Concepts
Janschek, Klaus
2012-01-01
In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...
Coherence method of identifying signal noise model
International Nuclear Information System (INIS)
Vavrin, J.
1981-01-01
The noise analysis method is discussed in identifying perturbance models and their parameters by a stochastic analysis of the noise model of variables measured on a reactor. The analysis of correlations is made in the frequency region using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized in a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance system model is based on estimates of related spectral densities which are determined from the spectral density matrix of the measured variables. Partial and multiple coherence, partial transfers, the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The possibilities of applying the coherence identification methods were tested on a simple case of a simulated stochastic system. Good agreement was found of the initial analytic frequency filters and the transfers identified. (B.S.)
Model Uncertainty Quantification Methods In Data Assimilation
Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.
2017-12-01
Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging; from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
A Method for Model Checking Feature Interactions
DEFF Research Database (Denmark)
Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter
2015-01-01
This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.
Structural equation modeling methods and applications
Wang, Jichuan
2012-01-01
A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a
Numerical methods and modelling for engineering
Khoury, Richard
2016-01-01
This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...
International Nuclear Information System (INIS)
Shin, Seung Ki; Seong, Poong Hyun
2008-01-01
Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between components of a system. Various techniques such as dynamic fault tree, dynamic Bayesian networks, and dynamic reliability block diagrams have been proposed for modeling dynamic systems based on improvement of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive modeling method to model dynamic systems. For a quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for generation of probability tables
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
Cache memory modelling method and system
Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis
2011-01-01
The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...
A survey of real face modeling methods
Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng
2017-09-01
The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher accuracy and is easier to drive.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
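The within/between/total variance segregation described above follows the law of total variance. A minimal sketch with invented posterior probabilities and per-model predictions (not values from the study):

```python
import numpy as np

# Hypothetical BMA inputs: three candidate models for one uncertain component.
p = np.array([0.5, 0.3, 0.2])      # posterior model probabilities (sum to 1)
mu = np.array([10.0, 12.0, 9.0])   # each model's mean prediction
var = np.array([1.0, 2.0, 1.5])    # each model's within-model variance

mean = np.sum(p * mu)                      # BMA (averaged) prediction
within = np.sum(p * var)                   # expected within-model variance
between = np.sum(p * (mu - mean) ** 2)     # variance of the model means
total = within + between                   # law of total variance

print(f"prediction={mean}, within={within}, between={between}, total={total}")
```

In the hierarchical (HBMA) setting this decomposition is applied recursively down a tree of uncertain components, so the between-model term can be attributed level by level.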
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
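As a minimal illustration of why an ensemble can outperform its members (a generic majority vote with hand-built predictions, not the GMC model itself):

```python
import numpy as np

# Hand-built toy example: three imperfect classifiers that err on
# *different* samples, so a majority vote corrects every mistake.
y_true = np.array([0, 0, 0, 1, 1, 1])
preds = np.array([
    [0, 0, 1, 1, 1, 1],   # classifier A: 5/6 correct
    [0, 1, 0, 1, 1, 1],   # classifier B: 5/6 correct
    [1, 0, 0, 0, 1, 1],   # classifier C: 4/6 correct
])

majority = (preds.sum(axis=0) >= 2).astype(int)   # vote of the three
acc = float((majority == y_true).mean())
print(f"ensemble accuracy = {acc:.2f}")            # beats every member
```

The gain depends entirely on the members making uncorrelated errors; when all classifiers fail on the same samples, voting cannot help, which is why ensemble construction (as in GMC) matters.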
Global Optimization Ensemble Model for Classification Methods
Directory of Open Access Journals (Sweden)
Hina Anwar
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.
Modelling methods for milk intake measurements
International Nuclear Information System (INIS)
Coward, W.A.
1999-01-01
One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
Diffuse interface methods for multiphase flow modeling
International Nuclear Information System (INIS)
Jamet, D.
2004-01-01
Full text of publication follows: Nuclear reactor safety programs need a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are among the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to give new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is arising, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables undergo possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes it easier to code in 3D and to parallelize compared to more classical methods. The first method presented is dedicated to liquid-vapor flows with phase change. It is based on van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one
Model-Based Method for Sensor Validation
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, and their predictions are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
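A toy sketch of the analytical-redundancy idea (hypothetical sensors, relations and thresholds, not the actual algorithm developed in this work): relations among sensors that should agree are evaluated, and the pattern of violated relations logically isolates the faulty sensor:

```python
def validate(readings, tol=1.0):
    """Evaluate redundancy relations among three sensors that should agree,
    and logically infer the faulty one (if any) from which relations fire."""
    t1, t2, t3 = readings
    fired = {
        "r12": abs(t1 - t2) > tol,   # relation excluding sensor 3
        "r13": abs(t1 - t3) > tol,   # relation excluding sensor 2
        "r23": abs(t2 - t3) > tol,   # relation excluding sensor 1
    }
    # A sensor is implicated if every relation involving it fires
    # while the relation excluding it does not.
    if fired["r12"] and fired["r13"] and not fired["r23"]:
        return "T1 faulty"
    if fired["r12"] and fired["r23"] and not fired["r13"]:
        return "T2 faulty"
    if fired["r13"] and fired["r23"] and not fired["r12"]:
        return "T3 faulty"
    return "no fault isolated" if not any(fired.values()) else "inconsistent"

ok = validate((20.1, 20.0, 19.9))
bad = validate((25.0, 20.0, 20.1))
print(ok, "/", bad)
```

Note that no failure probabilities appear anywhere: the conclusion is a logical inference from the model (the relations) and the observations, which is the contrast with Bayesian-network approaches drawn in the abstract.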
Developing a TQM quality management method model
Zhang, Zhihai
1997-01-01
From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This
Acceleration methods and models in Sn calculations
International Nuclear Information System (INIS)
Sbaffoni, M.M.; Abbate, M.J.
1984-01-01
In some neutron transport problems solved by the discrete-ordinates method, it is relatively common to observe certain peculiarities, for example the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method for performing the upscattering scaling to solve the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author) [es
Alternative methods of modeling wind generation using production costing models
International Nuclear Information System (INIS)
Milligan, M.R.; Pang, C.K.
1996-08-01
This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models
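The observation that annual totals agree can be made concrete: with a purely per-hour, separable dispatch cost, the LDC ordering is just a permutation of the chronological hours, so the totals match exactly (the toy system and numbers below are invented for illustration; in practice, unit commitment and ramping constraints are what make the chronological result differ):

```python
import numpy as np

# Illustrative week of hourly load and wind generation (MW).
rng = np.random.default_rng(2)
hours = 168
load = 500 + 150 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 20, hours)
wind = np.clip(rng.normal(80, 40, hours), 0, None)

def dispatch_cost(net):
    """Two-unit merit-order dispatch: 400 MW base unit at $20/MWh,
    peaker at $50/MWh, with no commitment or ramping constraints."""
    base = np.minimum(net, 400.0)
    peak = net - base
    return float(np.sum(20 * base + 50 * peak))

net = np.clip(load - wind, 0, None)          # wind treated as negative load
chron = dispatch_cost(net)                   # chronological: hour by hour
ldc = dispatch_cost(np.sort(net)[::-1])      # LDC: same hours in duration order

print(f"chronological ${chron:,.0f} vs LDC ${ldc:,.0f}")
```

Because the cost function is separable per hour, the two totals coincide to the last cent here; adding minimum up/down times or start-up costs to the dispatch would break this equivalence, which is the realism advantage of chronological models noted in the abstract.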
Mathematical methods and models in composites
Mantic, Vladislav
2014-01-01
This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover
Intelligent structural optimization: Concept, Model and Methods
International Nuclear Information System (INIS)
Lu, Dagang; Wang, Guangyuan; Peng, Zhang
2002-01-01
Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented
Electromagnetic modeling method for eddy current signal analysis
International Nuclear Information System (INIS)
Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.
2004-10-01
An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM), and the Volume Integral Method (VIM). Each modeling method has its merits and demerits, so a suitable method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs
Mathematical Models and Methods for Living Systems
Chaplain, Mark; Pugliese, Andrea
2016-01-01
The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.
New method dynamically models hydrocarbon fractionation
Energy Technology Data Exchange (ETDEWEB)
Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)
1995-10-01
A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C{sub 3} splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component-aggregation and tray-aggregation to contract the equations without compromising accuracy. This contraction increases the N-R calculations' stability. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation--directly or via multivariable controllers--based on a rigorous, phenomenological column model.
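The implicit-step structure, superimposing the time-dependent variation at each step and solving the resulting finite-difference equation with Newton-Raphson, can be sketched on a scalar toy ODE rather than a full column model (all values below are illustrative, not from the paper):

```python
def newton(f, df, x0, tol=1e-10, maxit=50):
    """Newton-Raphson root finder for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(maxit):
        dx = f(x) / df(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Backward-Euler (implicit) integration of dy/dt = -k*y^2:
# each step requires solving the nonlinear residual
#   g(y_new) = y_new - y_old + dt*k*y_new^2 = 0
k, dt = 2.0, 0.1
y = 1.0
for _ in range(10):                          # ten implicit steps to t = 1.0
    y_old = y
    y = newton(lambda v: v - y_old + dt * k * v * v,
               lambda v: 1 + 2 * dt * k * v,
               y_old)                        # warm-start from previous value

print(f"y(1.0) ~ {y:.3f} (exact solution 1/(1+2t) gives 1/3)")
```

The same pattern scales up: in the column model, the scalar residual becomes the coupled material, equilibrium, and enthalpy balances, and N-R solves them simultaneously at each time step, warm-started from the steady-state solution.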
Method of generating a computer readable model
DEFF Research Database (Denmark)
2008-01-01
A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
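A rough sketch of the data structures this patent-style abstract describes; all names and the proximity rule below are hypothetical illustrations, not taken from the original method.

```python
# Each construction element is encoded as a data structure listing its
# connection elements; each connection element carries a predetermined
# connection type and a position used for the proximity test.
from dataclasses import dataclass

@dataclass
class Connector:
    kind: str      # predetermined connection type, e.g. "stud" / "tube"
    x: float
    y: float

@dataclass
class Element:
    name: str
    connectors: list

def close_pairs(a, b, proximity=1.0):
    """Return connector-type pairs of two elements within the proximity."""
    pairs = []
    for ca in a.connectors:
        for cb in b.connectors:
            if (ca.x - cb.x) ** 2 + (ca.y - cb.y) ** 2 <= proximity ** 2:
                pairs.append((ca.kind, cb.kind))
    return pairs

brick1 = Element("brick1", [Connector("stud", 0.0, 0.0)])
brick2 = Element("brick2", [Connector("tube", 0.2, 0.1)])
print(close_pairs(brick1, brick2))  # -> [('stud', 'tube')]
```

The retrieved pairs would then be looked up in a connectivity table keyed by connection type, as the abstract's final (truncated) sentence suggests.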
Engineering design of systems models and methods
Buede, Dennis M
2009-01-01
The ideal introduction to the engineering design of systems, now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures. This book is designed to be an introductory reference ...
Railway Track Allocation: Models and Methods
DEFF Research Database (Denmark)
Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias
2011-01-01
Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level.
ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING
Directory of Open Access Journals (Sweden)
Brînduşa-Antonela SBÎRCEA
2011-01-01
Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.
Boundary element method for modelling creep behaviour
International Nuclear Information System (INIS)
Zarina Masood; Shah Nor Basri; Abdel Majid Hamouda; Prithvi Raj Arora
2002-01-01
A two-dimensional initial-strain direct boundary element method is proposed to numerically model creep behaviour. The boundary of the body is discretized into quadratic elements and the domain into quadratic quadrilaterals. The variables are also assumed to have a quadratic variation over the elements. The boundary integral equation is solved for each boundary node and assembled into a matrix. This matrix is solved by Gauss elimination with partial pivoting to obtain the variables on the boundary and in the interior. Due to the time-dependent nature of creep, the solution has to be derived over increments of time. An automatic time-incrementation technique and the backward Euler method for updating the variables are implemented to assure stability and accuracy of the results. A flowchart of the solution strategy is also presented. (Author)
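The backward Euler update mentioned in this abstract can be illustrated on a toy decay equation; this is a stand-in for the time-dependent creep variables, not the paper's boundary-element formulation.

```python
# Backward (implicit) Euler time stepping: for dy/dt = -k*y the update is
# y_{n+1} = y_n / (1 + k*dt), which decays monotonically for any dt > 0.
# This unconditional stability is why implicit updates suit creep problems,
# where automatic time incrementation may take large steps.

def backward_euler(y0, rate_const, dt, steps):
    y = y0
    history = [y]
    for _ in range(steps):
        y = y / (1.0 + rate_const * dt)  # implicit update, solved exactly here
        history.append(y)
    return history

# Even with a large time step the solution decays without oscillation.
trace = backward_euler(y0=1.0, rate_const=5.0, dt=1.0, steps=4)
print(round(trace[1], 4))  # -> 0.1667
```

An explicit (forward Euler) update with the same dt would give 1 - 5 = -4 on the first step and blow up, which is the stability contrast the implicit choice avoids.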
Surface physics theoretical models and experimental methods
Mamonova, Marina V; Prudnikova, I A
2016-01-01
The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...
Experimental modeling methods in Industrial Engineering
Directory of Open Access Journals (Sweden)
Peter Trebuňa
2009-03-01
Full Text Available Dynamic approaches to management in present-day industrial practice force businesses to address the continuous improvement of production and non-production processes in-house. Experience has repeatedly demonstrated the need for a system approach not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore describes modeling in industrial practice using a system approach, in order to avoid erroneous decisions at the implementation phase and thus prevent prolonged "trial and error" application of methods.
Mechanics, Models and Methods in Civil Engineering
Maceri, Franco
2012-01-01
"Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European Research Network active for many years. This book will be of major interest to readers aware of modern Civil Engineering.
The forward tracking, an optical model method
Benayoun, M
2002-01-01
This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, the tracks are found in the ST1-3 chambers after the magnet. The main ingredient to the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.
Statistical Models and Methods for Lifetime Data
Lawless, Jerald F
2011-01-01
Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,
International Nuclear Information System (INIS)
Park, Inseok; Grandhi, Ramana V.
2014-01-01
Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
Effect of defuzzification method of fuzzy modeling
Lapohos, Tibor; Buchal, Ralph O.
1994-10-01
Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of 'center of area' (COA) or 'center of gravity' (COG). In this paper, we show that this algorithm not only maps the near limit values of the variables improperly but also introduces errors for middle domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets the defuzzification errors do not get bigger as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the less the average defuzzification error becomes. In case of triangle shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
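The two defuzzification schemes being compared can be sketched in a few lines. The triangular reference set below is a hypothetical example; consistent with the paper's observation, a symmetric triangle lying inside the universe of discourse defuzzifies without error under COA, while WACC needs only the cluster centers.

```python
# Discretized centre of area (COA) versus the weighted average of
# cluster centres (WACC).

def coa(xs, mu):
    """Centre of area over a sampled membership function."""
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)

def wacc(centers, weights):
    """Weighted average of cluster centres: no integration over the
    membership functions is needed, hence the speed advantage."""
    return sum(c * w for c, w in zip(centers, weights)) / sum(weights)

xs = [i / 10 for i in range(11)]                      # universe [0, 1]
mu = [max(0.0, 1 - abs(x - 0.3) / 0.3) for x in xs]   # triangle peaked at 0.3
print(round(coa(xs, mu), 3))               # -> 0.3, no error for the triangle
print(wacc([0.3, 0.7], [1.0, 0.0]))        # -> 0.3, maps the centre exactly
```

The COA errors the paper discusses appear once a membership function is clipped at a boundary of the universe of discourse, which shifts its area centroid away from its peak.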
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Mathematical models and methods for planet Earth
Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta
2014-01-01
In 2013 several scientific activities have been devoted to mathematical researches for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Roma (Italy), in May 2013. The fields of interest span from impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to monitoring geological events, from the study of tumor growth to sociological problems. In all these fields the mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: just to mention some, stochastic processes, PDE, normal forms, chaos theory.
Gait variability: methods, modeling and meaning
Directory of Open Access Journals (Sweden)
Hausdorff Jeffrey M
2005-07-01
Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The Current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
FDTD method and models in optical education
Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe
2017-08-01
In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. Meanwhile, FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives and then used to simulate the response of the interaction between an electromagnetic pulse and an ideal conductor or semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband simulation results can be obtained easily. Thus, promoting the FDTD algorithm in optical education is feasible and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles and so on) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems in optical education. Additionally, the GUI of FDTD Solutions is friendly enough that learners can master it quickly.
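The leapfrog update at the heart of the FDTD algorithm can be sketched in one dimension. The normalized constants (Courant number 0.5) and the Gaussian soft source below are illustrative choices, not drawn from the paper.

```python
# 1D FDTD (Yee leapfrog scheme): H is updated from the spatial difference
# of E, then E from the spatial difference of H, alternating each step.
import math

def fdtd_1d(nx=200, steps=200, src=100):
    ez = [0.0] * nx   # electric field samples
    hy = [0.0] * nx   # magnetic field samples (staggered half a cell)
    for t in range(steps):
        for i in range(nx - 1):                # update H from curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):                 # update E from curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez

field = fdtd_1d()
print(round(max(abs(e) for e in field), 3))
```

The factor 0.5 keeps the scheme below the Courant stability limit; doubling it to 1.0 sits exactly on the limit, and anything larger makes the simulation diverge, which is itself a useful classroom demonstration.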
Free wake models for vortex methods
Energy Technology Data Exchange (ETDEWEB)
Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)
1997-08-01
The blade element method works fast and well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)
Model reduction methods for vector autoregressive processes
Brüggemann, Ralf
2004-01-01
1. 1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advo cated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompo sitions, have been developed over the years. The econometrics of VAR models and related quantities i...
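One of the tools this abstract mentions, impulse response analysis, reduces for a VAR(1) model y_t = A y_{t-1} + e_t to iterating the coefficient matrix on a shock vector. The coefficient matrix below is made up for illustration, not estimated from data.

```python
# Impulse responses of a bivariate VAR(1): the response h periods after a
# unit shock is simply A applied h times to the shock vector.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def impulse_response(A, shock, horizon):
    """Responses of each variable for h = 0 .. horizon after `shock`."""
    responses, v = [], shock[:]
    for _ in range(horizon + 1):
        responses.append(v)
        v = matvec(A, v)
    return responses

A = [[0.5, 0.1],
     [0.2, 0.4]]   # stable coefficients: responses die out over time
irf = impulse_response(A, shock=[1.0, 0.0], horizon=3)
print(irf[1])  # -> [0.5, 0.2]
```

The decay of these responses toward zero reflects the stability of A; forecast error variance decompositions, the other tool mentioned, are built from the same iterated-matrix terms combined with the shock covariances.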
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Dynamic spatial panels : models, methods, and inferences
Elhorst, J. Paul
This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent
Methods of Medical Guidelines Modelling in GLIF.
Czech Academy of Sciences Publication Activity Database
Buchtela, David; Anger, Z.; Peleška, Jan (ed.); Tomečková, Marie; Veselý, Arnošt; Zvárová, Jana
2005-01-01
Vol. 11 (2005), pp. 1529-1532. ISSN 1727-1983. [EMBEC'05. European Medical and Biomedical Conference /3./. Prague, 20.11.2005-25.11.2005] Institutional research plan: CEZ:AV0Z10300504. Keywords: medical guidelines * knowledge modelling * GLIF model. Subject RIV: BD - Theory of Information
Fluid Methods for Modeling Large, Heterogeneous Networks
National Research Council Canada - National Science Library
Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal
2005-01-01
.... The resulting fluid models were used to develop novel active queue management mechanisms resulting in more stable TCP performance and novel rate controllers for the purpose of providing minimum rate...
Combining static and dynamic modelling methods: a comparison of four methods
Wieringa, Roelf J.
1995-01-01
A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current
A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods
Directory of Open Access Journals (Sweden)
Michael Amberg
1996-11-01
Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template according to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.
Accurate Electromagnetic Modeling Methods for Integrated Circuits
Sheng, Z.
2010-01-01
The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on
Reduced Order Modeling Methods for Turbomachinery Design
2009-03-01
and Materials Conference, May 2006. [45] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. New York, NY: Chapman & Hall... Macian-Juan, and R. Chawla, "A statistical methodology for quantification of uncertainty in best estimate code physical models," Annals of Nuclear En
Introduction to mathematical models and methods
Energy Technology Data Exchange (ETDEWEB)
Siddiqi, A. H.; Manchanda, P. [Gautam Buddha University, Gautam Budh Nagar-201310 (India); Department of Mathematics, Guru Nanak Dev University, Amritsar (India)
2012-07-17
Some well known mathematical models in the form of partial differential equations representing real world systems are introduced along with fundamental concepts of Image Processing. Notions such as seismic texture, seismic attributes, core data, well logging, seismic tomography and reservoirs simulation are discussed.
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills and, due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; thus, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
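As a toy illustration of an automated analysis method run over an enterprise model: the minimal metamodel and element names below are hypothetical, not taken from the paper's standardized modeling language.

```python
# An enterprise model stored as a simple graph of typed elements plus
# "uses" dependencies; the automated analysis flags application
# components that no business process uses.

model = {
    "elements": {
        "billing":   {"type": "process"},
        "crm_app":   {"type": "application"},
        "legacy_db": {"type": "application"},
    },
    "uses": [("billing", "crm_app")],   # (process, application) pairs
}

def unused_applications(model):
    used = {app for _, app in model["uses"]}
    return sorted(name for name, e in model["elements"].items()
                  if e["type"] == "application" and name not in used)

print(unused_applications(model))  # -> ['legacy_db']
```

Automating such checks is exactly what removes the omission risk the abstract describes: a script traverses the whole model, while a human analyst may skip elements in a large diagram.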
Modeling Storm Surges Using Discontinuous Galerkin Methods
2016-06-01
layer non-reflecting boundary condition (NRBC) on the right wall of the model. An NRBC is created when an artificial boundary, B, is introduced, which truncates the... applications," Journal of Computational Physics, 2004. [30] P. L. Butzer and R. Weis, "On the Lax equivalence theorem equipped with orders," Journal of... closer to the shoreline. In our simulation, we also learned of the effects spurious waves can have on the results. Due to boundary conditions, a
A Versatile Nonlinear Method for Predictive Modeling
Liou, Meng-Sing; Yao, Weigang
2015-01-01
As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can it be utilized to its potential in the future? Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which have the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, like in fluid dynamics. We propose a method to avoid these Hessian matrices, while maintaining the accuracy. Results based on the method are presented to confirm its validity.
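The first-order version of the surrogate described above, local Taylor expansions blended by nonlinear weighting functions, can be sketched as follows. The inverse-distance weights and the sine test function are assumptions for illustration, not the authors' exact formulation.

```python
# Blend first-order Taylor predictions from several sampling points using
# nonlinear (inverse-distance) weighting functions. f and f' stand in for
# an expensive CFD evaluation and its sensitivity.
import math

def taylor_blend(samples, x, p=4):
    """samples: list of (x0, f(x0), f'(x0)) collected along a trajectory."""
    num = den = 0.0
    for x0, f0, df0 in samples:
        d = abs(x - x0)
        if d < 1e-12:
            return f0                       # exact hit on a sampling point
        w = 1.0 / d ** p                    # nonlinear weighting function
        num += w * (f0 + df0 * (x - x0))    # first-order Taylor prediction
        den += w
    return num / den

f, df = math.sin, math.cos
samples = [(x0, f(x0), df(x0)) for x0 in (0.0, 0.8, 1.6, 2.4)]
approx = taylor_blend(samples, 1.2)
print(round(approx, 3), round(math.sin(1.2), 3))
```

With these deliberately coarse samples the first-order blend is accurate only to a few percent; the second-order extension the abstract describes adds curvature terms at each sampling point to shrink exactly this error.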
Diffusion in condensed matter methods, materials, models
Kärger, Jörg
2005-01-01
Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.
Continual integration method in the polaron model
International Nuclear Information System (INIS)
Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.
1981-01-01
The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. A problem of the bound state of two polarons exchanging quanta of a scalar field, as well as a problem of polaron scattering by an external field in the Born approximation, have been considered. Thermodynamics of the polaron system has been investigated; namely, high-temperature expansions for the mean energy and effective polaron mass have been studied.
Modeling conflict : research methods, quantitative modeling, and lessons learned.
Energy Technology Data Exchange (ETDEWEB)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
Femenias, O.; Diot, H.; Berza, T.; Gauffriau, A.; Demaiffe, D.
2003-04-01
The fabric of crystals in a dyke is representative of the flow of magma, considered as a Newtonian fluid. The AMS of the rocks (= magnetic mineralogy subfabric) gives a good representation of the shape preferred orientation related to the total fabric, which in turn is a marker of the magmatic flow acquired during emplacement of the fluid within the dyke width. Generally, a symmetrical distribution of the fabric in terms of foliation and lineation across the dyke is in agreement with a model involving symmetrical differential displacements of the flow of the fluid within a channel. In this case, the flow direction is related to the imbrication of the symmetric foliations. In this study, we present cases of both symmetrical and asymmetrical dyke fabrics, recording and involving different processes of emplacement during a regional deformation. From a regional survey of a large Pan-African calc-alkaline dyke swarm (of basaltic-andesitic-dacitic-rhyolitic composition) of the Alpine Danubian window in the South Carpathians of Romania, two populations of dykes have been described: thick (1 to 30 m) N-S-trending dykes and thin (less than 1 m) E-W dykes. These two populations crosscut the country rocks without simple chronological relations between them. The thick dykes display an asymmetrical fabric that implies a relatively long history of emplacement and an important distance of flow. They record the regional sinistral movement of the walls. By contrast, the thin dykes are symmetrical and frequently display an arteritic morphology that limits the dyke length, with no cartographic extension. The mean orientations of the two types of dykes can be related to the same regional stress field, and a continuum of emplacement is proposed for the two types of dykes during the regional deformation.
"Method, system and storage medium for generating virtual brick models"
DEFF Research Database (Denmark)
2009-01-01
An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...
A Systematic Identification Method for Thermodynamic Property Modelling
DEFF Research Database (Denmark)
Perederic, Olivia Ana; Cunico, Larissa; Sarup, Bent
2017-01-01
In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...... model is used. Using the proposed method for estimating the interaction parameters using only VLE data, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model performance...
Laser filamentation mathematical methods and models
Lorin, Emmanuel; Moloney, Jerome
2016-01-01
This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5×10⁹ V/cm, or intensity I = 3.5×10¹⁶ W/cm²). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...
Models and methods of emotional concordance.
Hollenstein, Tom; Lanteigne, Dianna
2014-04-01
Theories of emotion generally posit the synchronized, coordinated, and/or emergent combination of psychophysiological, cognitive, and behavioral components of the emotion system--emotional concordance--as a functional definition of emotion. However, the empirical support for this claim has been weak or inconsistent. As an introduction to this special issue on emotional concordance, we consider three domains of explanations as to why this theory-data gap might exist. First, theory may need to be revised to more accurately reflect past research. Second, there may be moderating factors such as emotion regulation, context, or individual differences that have obscured concordance. Finally, the methods typically used to test theory may be inadequate. In particular, we review a variety of potential issues: intensity of emotions elicited in the laboratory, nonlinearity, between- versus within-subject associations, the relative timing of components, bivariate versus multivariate approaches, and diversity of physiological processes. Copyright © 2013 Elsevier B.V. All rights reserved.
Theoretical methods and models for mechanical properties of soft biomaterials
Directory of Open Access Journals (Sweden)
Zhonggang Feng
2017-06-01
We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.
METHODICAL MODEL FOR TEACHING BASIC SKI TURN
Directory of Open Access Journals (Sweden)
Danijela Kuna
2013-07-01
With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina, and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn, the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, a chi-square test was used to assess the differences between frequencies of chosen operators, differences between values of the most important operators, and differences between experts due to their nationality. Statistically significant differences were noticed between frequencies of chosen operators (χ² = 24.61; p = 0.01), while differences between values of the most important operators were not evident (χ² = 1.94; p = 0.91). Meanwhile, the differences between experts concerning their nationality were only noticeable in the expert evaluation of the ski poles on neck operator (χ² = 7.83; p = 0.02). Results of the current research provide useful information about methodological principles for organizing basic ski turn learning in ski schools.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that runs multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
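The heating-coefficient idea can be illustrated on a toy conjugate-Gaussian model, where the marginal likelihood is known analytically. The Metropolis sampler, the β grid, and the data below are all illustrative assumptions, not the study's groundwater setup:

```python
import math, random

def log_like(theta, data):
    """Log likelihood: data y_i ~ N(theta, 1)."""
    return sum(-0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi) for y in data)

def log_prior(theta):
    """Prior: theta ~ N(0, 1)."""
    return -0.5 * theta ** 2 - 0.5 * math.log(2 * math.pi)

def power_posterior_mean_loglike(beta, data, n_iter=20000, step=1.0, seed=0):
    """Metropolis sampling from prior * likelihood^beta; returns E_beta[log L].
    beta = 0 walks the prior; beta = 1 is the conventional posterior run."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_prior(theta) + beta * log_like(theta, data)
    total = 0.0
    for _ in range(n_iter):
        prop = theta + rng.gauss(0, step)
        lpp = log_prior(prop) + beta * log_like(prop, data)
        if math.log(rng.random()) < lpp - lp:
            theta, lp = prop, lpp
        total += log_like(theta, data)
    return total / n_iter

def thermodynamic_log_evidence(data, betas):
    """Thermodynamic integration: log Z = integral over beta of E_beta[log L]."""
    means = [power_posterior_mean_loglike(b, data, seed=int(b * 997)) for b in betas]
    return sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(len(betas) - 1))  # trapezoidal rule

data = [0.5, 1.0, 1.5]
betas = [i / 10 for i in range(11)]
log_z = thermodynamic_log_evidence(data, betas)
```

For this conjugate model the exact log marginal likelihood is about -4.075, so the quality of the trapezoidal estimate can be checked directly.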
Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling
Wilson, William; Atkinson, Gary
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
Modeling shallow water flows using the discontinuous Galerkin method
Khan, Abdul A
2014-01-01
Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...
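To give a flavor of the underlying hyperbolic conservation law, here is a much simpler first-order finite-volume dam-break sketch using a Rusanov flux. This is deliberately not the DG method the book develops, and the grid, times, and initial condition are made up:

```python
import math

g = 9.81  # gravitational acceleration

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations U = (h, hu)."""
    u = hu / h
    return hu, hu * u + 0.5 * g * h * h

def rusanov(hl, hul, hr, hur):
    """Rusanov (local Lax-Friedrichs) numerical flux at an interface."""
    fl, fr = flux(hl, hul), flux(hr, hur)
    s = max(abs(hul / hl) + math.sqrt(g * hl),
            abs(hur / hr) + math.sqrt(g * hr))  # max wave speed |u| + sqrt(gh)
    return (0.5 * (fl[0] + fr[0]) - 0.5 * s * (hr - hl),
            0.5 * (fl[1] + fr[1]) - 0.5 * s * (hur - hul))

def dam_break(nx=200, t_end=0.05):
    """Dam at x = 0.5 on [0, 1]: h = 2 on the left, h = 1 on the right."""
    dx = 1.0 / nx
    h = [2.0 if (i + 0.5) * dx < 0.5 else 1.0 for i in range(nx)]
    hu = [0.0] * nx
    t = 0.0
    while t < t_end - 1e-12:
        smax = max(abs(hu[i] / h[i]) + math.sqrt(g * h[i]) for i in range(nx))
        dt = min(0.4 * dx / smax, t_end - t)  # CFL-limited time step
        fh, fhu = [0.0] * (nx + 1), [0.0] * (nx + 1)
        for i in range(nx + 1):               # copy (outflow) boundaries
            l, r = max(i - 1, 0), min(i, nx - 1)
            fh[i], fhu[i] = rusanov(h[l], hu[l], h[r], hu[r])
        h = [h[i] - dt / dx * (fh[i + 1] - fh[i]) for i in range(nx)]
        hu = [hu[i] - dt / dx * (fhu[i + 1] - fhu[i]) for i in range(nx)]
        t += dt
    return h

h = dam_break()
```

Because the update is conservative and the waves have not reached the boundaries at this time, total mass is preserved to machine precision, and the intermediate "star" state forms between the rarefaction and the shock.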
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Energy Technology Data Exchange (ETDEWEB)
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
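The alternate-between-state-estimation-and-parameter-update loop can be sketched on a scalar linear system. This simplified version uses a plain (not extended) Kalman filter for the E-step and a least-squares transition-parameter update for the M-step; smoother cross-covariance terms are neglected, and all parameter values are invented:

```python
import math, random

def simulate(a, q, r, n, seed=1):
    """Simulate x_k = a*x_{k-1} + w_k (var q), y_k = x_k + v_k (var r)."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0, math.sqrt(q))
        ys.append(x + rng.gauss(0, math.sqrt(r)))
    return ys

def kalman_filter(ys, a, q, r):
    """Scalar Kalman filter (the E-step); returns filtered state means."""
    x, p, means = 0.0, 1.0, []
    for y in ys:
        x, p = a * x, a * a * p + q           # predict
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p   # update
        means.append(x)
    return means

def em_calibrate(ys, a0, q, r, n_iter=30):
    """Iterate E-step (filtering) and M-step (least-squares update of the
    transition parameter a from the filtered means)."""
    a = a0
    for _ in range(n_iter):
        m = kalman_filter(ys, a, q, r)
        num = sum(m[k] * m[k - 1] for k in range(1, len(m)))
        den = sum(m[k - 1] ** 2 for k in range(1, len(m)))
        a = num / den
    return a

ys = simulate(a=0.8, q=0.1, r=0.05, n=500)
a_hat = em_calibrate(ys, a0=0.3, q=0.1, r=0.05)
```

Starting from a deliberately wrong initial guess (0.3), the iteration converges near the true transition parameter (0.8).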
On Angular Sampling Methods for 3-D Spatial Channel Models
DEFF Research Database (Denmark)
Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum
2015-01-01
This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods are proposed and investigated in detail....... The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster is with a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail...... and the other two methods should be considered....
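The random pairing method can be sketched as follows for a sum-of-sinusoids ray-based channel. The Gaussian angle distributions, ray count, and Doppler parameters are illustrative assumptions, not the paper's channel model:

```python
import cmath, math, random

def random_pairing_rays(n_rays=20, az_mean=0.0, az_spread=0.6,
                        el_mean=0.1, el_spread=0.05, seed=7):
    """Sample azimuth and elevation angles independently from assumed
    Gaussian power angular spectra, then pair them at random."""
    rng = random.Random(seed)
    az = [rng.gauss(az_mean, az_spread) for _ in range(n_rays)]
    el = [rng.gauss(el_mean, el_spread) for _ in range(n_rays)]
    rng.shuffle(el)  # the random pairing step
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_rays)]
    return list(zip(az, el, phases))

def channel_gain(t, rays, f_doppler_max):
    """Ray-based channel coefficient; mobile assumed moving along the x-axis,
    so the per-ray Doppler is f_max * cos(az) * cos(el)."""
    n = len(rays)
    h = sum(cmath.exp(1j * (2 * math.pi * f_doppler_max
                            * math.cos(az) * math.cos(el) * t + ph))
            for az, el, ph in rays)
    return h / math.sqrt(n)  # equal ray powers, unit mean power

rays = random_pairing_rays()
samples = [channel_gain(k * 1e-3, rays, f_doppler_max=50.0) for k in range(8000)]
avg_power = sum(abs(h) ** 2 for h in samples) / len(samples)
```

With only twenty sinusoids, the time-averaged power stays close to the unit target, consistent with the abstract's observation that few rays can suffice for small elevation spreads.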
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
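On the classical-likelihood side, penalized-likelihood selection of a polynomial order can be sketched as follows. The data are synthetic and BIC is one standard criterion, used here only for illustration, not as the report's decision-theoretic method:

```python
import math, random

def polyfit_ls(xs, ys, order):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    m = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for i in reversed(range(m)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    return coef

def bic(xs, ys, order):
    """Bayesian information criterion for a Gaussian-error polynomial model."""
    coef = polyfit_ls(xs, ys, order)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    return n * math.log(rss / n) + (order + 1) * math.log(n)

rng = random.Random(3)
xs = [i / 49 for i in range(50)]
ys = [1 + 2 * x - 3 * x ** 2 + rng.gauss(0, 0.05) for x in xs]
best_order = min(range(5), key=lambda k: bic(xs, ys, k))
```

The penalty term discourages overfitting: underfitting orders are rejected decisively, while the quadratic truth is preferred over needlessly higher orders.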
SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS
Directory of Open Access Journals (Sweden)
Tetiana D. Panchenko
2016-07-01
The article deals with the creation of methodical support for mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations can be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms are described that take an approximate solution into account prior to the first iteration, and adaptive control of the computing process is proposed. An original method for estimating the error of the computed solutions is offered, as well as a technique for configuring the adaptive solution method's parameters for a given error level. The proposed method can be used for distributed computing.
Comparative analysis of various methods for modelling permanent magnet machines
Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.
2017-01-01
In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air
Advanced methods of solid oxide fuel cell modeling
Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi
2011-01-01
Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural networks (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods
Extending product modeling methods for integrated product development
DEFF Research Database (Denmark)
Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný
2013-01-01
Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...... and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....... Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM...
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
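The first approach, partitioning the state space and formulating the problem as an HMM, can be sketched for a linear-Gaussian case where the exact Kalman answer is available for comparison. The model, grid, and parameters here are invented for illustration:

```python
import math, random

def norm_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def simulate(n, q, r, seed=5):
    """Random-walk state x with process var q; observations y with noise var r."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, math.sqrt(q))
        ys.append(x + rng.gauss(0, math.sqrt(r)))
    return ys

def grid_filter(ys, q, r, lo=-8.0, hi=8.0, m=201):
    """Partition the state space into m points and run the HMM forward pass."""
    grid = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
    trans = [[norm_pdf(gj, gi, q) for gj in grid] for gi in grid]
    alpha = [norm_pdf(g, 0.0, q) for g in grid]   # prior on x_1
    means = []
    for t, y in enumerate(ys):
        if t > 0:                                 # HMM prediction step
            alpha = [sum(a * trans[i][j] for i, a in enumerate(alpha))
                     for j in range(len(grid))]
        alpha = [a * norm_pdf(y, g, r) for a, g in zip(alpha, grid)]
        s = sum(alpha)
        alpha = [a / s for a in alpha]            # normalize
        means.append(sum(a * g for a, g in zip(alpha, grid)))
    return means

def kalman_filter(ys, q, r):
    """Exact filtered means for the same linear-Gaussian model."""
    x, p, means = 0.0, 0.0, []
    for y in ys:
        p += q                                    # predict (random-walk state)
        k = p / (p + r)
        x, p = x + k * (y - x), (1 - k) * p
        means.append(x)
    return means

ys = simulate(30, q=0.2, r=0.5)
gm = grid_filter(ys, 0.2, 0.5)
km = kalman_filter(ys, 0.2, 0.5)
max_diff = max(abs(a - b) for a, b in zip(gm, km))
```

For a fine enough grid, the HMM approximation reproduces the exact filter closely; its real value is that the same forward pass works unchanged for nonlinear models such as the theta-logistic, where no exact filter exists.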
Architecture oriented modeling and simulation method for combat mission profile
Directory of Open Access Journals (Sweden)
CHEN Xia
2017-05-01
In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable model of the mission profile. Finally, taking an air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. The method provides guidance for combat mission profile design.
Modelling a coal subcrop using the impedance method
Energy Technology Data Exchange (ETDEWEB)
Wilson, G.A.; Thiel, D.V.; O'Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering
2000-07-01
An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
Monte Carlo methods and models in finance and insurance
Korn, Ralf; Kroisandt, Gerald
2010-01-01
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
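A minimal example of the plain (single-level) Monte Carlo workflow the book builds on: pricing a European call under geometric Brownian motion and checking it against the analytic Black-Scholes benchmark. The parameters are illustrative:

```python
import math, random

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes European call price (analytic benchmark)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def mc_call(s0, k, r, sigma, t, n_paths=100_000, seed=42):
    """Plain Monte Carlo: sample the terminal price under GBM, average
    the discounted payoff."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0, 1))
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

analytic = bs_call(100, 100, 0.05, 0.2, 1.0)
estimate = mc_call(100, 100, 0.05, 0.2, 1.0)
```

The gap between the two prices shrinks as O(1/sqrt(N)); variance-reduction and multilevel techniques of the kind the book covers attack exactly this cost.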
Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods
Soroush, Masoud; Weinberger, Charles B.
2010-01-01
This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…
An automatic and effective parameter optimization method for model tuning
Directory of Open Access Journals (Sweden)
T. Zhang
2015-11-01
simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...
Solving the nuclear shell model with an algebraic method
International Nuclear Information System (INIS)
Feng, D.H.; Pan, X.W.; Guidry, M.
1997-01-01
We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
Design of nuclear power generation plants adopting model engineering method
International Nuclear Information System (INIS)
Waki, Masato
1983-01-01
The utilization of model engineering as a design method began about ten years ago in nuclear power generation plants. By this method, the result of design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet various design needs promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required for nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering to arrange an enormous quantity of components rationally in a limited period. As methods of model engineering, there are the use of check models and of design models; recently, the latter has mainly been adopted. The procedure of manufacturing models and engineering with them is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)
Method of modeling the cognitive radio using Opnet Modeler
Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.
2012-01-01
This article is a review of the first wireless standard based on cognitive radio networks and of the need for wireless networks based on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment. Plots check the performance of the HTTP and FTP protocols in the CR network. Simulation results justify the use of the IEEE 802.22 standard in wireless networks.
A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD
Hof, John G.; Loomis, John B.
1983-01-01
A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...
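The allocation logic described in this abstract — choose which sites to develop so that gross recreation benefits minus travel, investment, and management costs are maximized under a budget — can be sketched as a small exhaustive search. All site figures below are hypothetical, not from the paper:

```python
from itertools import combinations

# Hypothetical candidate sites: gross recreation benefit, recurring
# (travel + management) cost, and one-time investment cost, all in $k.
sites = {
    "A": {"benefit": 120.0, "cost": 70.0, "invest": 40.0},
    "B": {"benefit": 90.0, "cost": 30.0, "invest": 25.0},
    "C": {"benefit": 60.0, "cost": 45.0, "invest": 30.0},
}

def best_plan(sites, budget):
    """Exhaustively pick the subset of sites with maximal net benefit
    whose total investment stays within the budget."""
    best, best_net = (), 0.0
    for r in range(1, len(sites) + 1):
        for plan in combinations(sites, r):
            if sum(sites[s]["invest"] for s in plan) > budget:
                continue
            net = sum(sites[s]["benefit"] - sites[s]["cost"] for s in plan)
            if net > best_net:
                best, best_net = plan, net
    return best, best_net

plan, net = best_plan(sites, budget=70.0)  # → ("A", "B"), net 110.0
```

A real application would replace the enumeration with the paper's operations research formulation (e.g., an integer program) once the number of sites and development levels grows.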
Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence
Hutter, Kolumban
2004-01-01
The book unifies classical continuum mechanics and turbulence modeling: the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure, and these are complemented with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure and to derive or invent their own models. Examples are mostly taken from environmental physics and geophysics.
Numerical methods for modeling photonic-crystal VCSELs
DEFF Research Database (Denmark)
Dems, Maciej; Chung, Il-Sug; Nyakas, Peter
2010-01-01
We show comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models.
A Model-Driven Development Method for Management Information Systems
Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki
Traditionally, a Management Information System (MIS) has been developed without using formal methods. With these informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems, applying the model-driven development method to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.
Extension of local front reconstruction method with controlled coalescence model
Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.
2018-02-01
The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM is better at predicting the droplet collisions, especially at high velocity, than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.
Akgün, Levent
2015-01-01
The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…
A Comparison of Surface Acoustic Wave Modeling Methods
Wilson, W. c.; Atkinson, G. M.
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, and extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices.
Object Oriented Modeling : A method for combining model and software development
Van Lelyveld, W.
2010-01-01
When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to
Method for modeling social care processes for national information exchange.
Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit
2012-01-01
Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.
[A new method of fabricating photoelastic model by rapid prototyping].
Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei
2011-10-01
To explore a novel method of fabricating the photoelastic model using rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction, then the photoelastic model with teeth was fabricated by traditional impression duplicating and mould casting. The photoelastic model of mandible with teeth, which was fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model was of high optical sensibility and met the experimental requirements. Photoelastic model of mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.
Stencil method: a Markov model for transport in porous media
Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.
2016-12-01
In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion on both structured and unstructured networks.
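The core idea — treating the particle velocity as a discrete-state Markov process and propagating an ensemble to obtain dispersion statistics — can be sketched as follows. The velocity states and transition matrix are illustrative placeholders, not calibrated stencil probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three discrete velocity states and a row-stochastic transition matrix.
velocities = np.array([-1.0, 0.5, 2.0])
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def simulate(n_particles=2000, n_steps=200, dt=0.1):
    """Advance an ensemble of particles whose velocity is a Markov chain."""
    state = rng.integers(0, len(velocities), size=n_particles)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += velocities[state] * dt
        # sample each particle's next velocity state from its row of P
        cdf = np.cumsum(P[state], axis=1)
        state = (rng.random(n_particles)[:, None] < cdf).argmax(axis=1)
    return x

x = simulate()
spread = x.std()  # dispersion of the particle cloud after 200 steps
```

Because successive velocities are correlated through P, the cloud spreads faster than an uncorrelated random walk with the same one-step velocity distribution would.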
SmartShadow models and methods for pervasive computing
Wu, Zhaohui
2013-01-01
SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented, and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea
A numerical method for a transient two-fluid model
International Nuclear Information System (INIS)
Le Coq, G.; Libmann, M.
1978-01-01
The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model, with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problems is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or an LMFBR
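The shooting idea the abstract relies on — recasting a two-point boundary-value problem as an initial-value problem and iterating on the unknown initial condition until the far boundary condition is met — can be illustrated on a toy problem. This is a generic sketch, not the code's two-fluid equations: here y'' = 6x with y(0) = 0, y(1) = 1, whose exact solution is y = x³ (so the sought initial slope is 0).

```python
def shoot(slope, n=50_000):
    """Forward-Euler march of y'' = 6x from x = 0 with y(0) = 0,
    y'(0) = slope; returns the resulting y(1)."""
    h = 1.0 / n
    x, y, yp = 0.0, 0.0, slope
    for _ in range(n):
        y += h * yp
        yp += h * 6.0 * x
        x += h
    return y

def solve(target=1.0):
    """Secant iteration on the initial slope until y(1) hits the target."""
    s0, s1 = 0.0, 1.0
    f0, f1 = shoot(s0) - target, shoot(s1) - target
    for _ in range(20):
        if abs(f1) < 1e-10:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = shoot(s1) - target
    return s1

slope = solve()  # converges to approximately 0, the exact initial slope
```

Because this toy ODE is linear, y(1) depends linearly on the slope and the secant iteration converges essentially in one step; for the nonlinear two-fluid equations the same outer iteration simply takes more steps.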
Physical Model Method for Seismic Study of Concrete Dams
Directory of Open Access Journals (Sweden)
Bogdan Roşca
2008-01-01
The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. Firstly, a study model must be designed by a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to fulfil than those for a failure model, but the results obtained provide only narrow information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks, and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of large- or medium-scale physical models, together with their instrumentation, creates great advantages, but this operation involves a large amount of financial, logistic, and time resources.
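The similitude relations such design rests on follow from dimensional analysis: once a geometric scale factor is chosen, the scales of the remaining quantities are fixed. As a hedged illustration only — generic Froude-type scaling for gravity-driven dynamics, not the paper's specific design relations:

```python
import math

def froude_scales(length_ratio, density_ratio=1.0):
    """Scale factors (model/prototype) implied by Froude similitude,
    where gravity is the same in model and prototype: v ~ sqrt(g*L)."""
    L = length_ratio
    return {
        "length": L,
        "velocity": math.sqrt(L),        # Froude number v/sqrt(g*L) preserved
        "time": math.sqrt(L),            # t ~ L / v
        "acceleration": 1.0,             # g is not scaled
        "force": density_ratio * L**3,   # F ~ rho * g * L^3
    }

s = froude_scales(1.0 / 100.0)  # e.g., a 1:100 scale dam model
```

For a 1:100 model this gives velocity and time scales of 1/10 and a force scale of 10⁻⁶, which is why instrumentation sensitivity becomes a design constraint for small models.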
A simple flow-concentration modelling method for integrating water ...
African Journals Online (AJOL)
A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...
Comparison of surrogate models with different methods in ...
Indian Academy of Sciences (India)
In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging ..... 10 kriging models with different parameters were also obtained. ..... shapes using stochastic optimization methods and com-.
Method and apparatus for modeling, visualization and analysis of materials
Aboulhassan, Amal; Hadwiger, Markus
2016-01-01
processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
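The hierarchical idea behind such regional coefficients — regional estimates partially pooled toward a common mean, with weights set by the relative variances — reduces to a normal-normal sketch in the simplest case. The numbers are hypothetical and this is not the SPARROW model itself:

```python
import numpy as np

# Per-region coefficient estimates and their sampling variances
# (hypothetical), plus an assumed between-region variance tau2.
regional_est = np.array([2.1, 1.4, 3.0, 2.6])
s2 = np.array([0.40, 0.10, 0.90, 0.20])   # within-region variance
tau2 = 0.25                                # between-region variance
mu = regional_est.mean()                   # crude national-level mean

# Posterior mean of each regional coefficient: a precision-weighted
# average of the regional estimate and the national mean. Noisier
# regions (larger s2) are shrunk harder toward mu.
w = tau2 / (tau2 + s2)
post_mean = w * regional_est + (1.0 - w) * mu
```

This shrinkage is the mechanism by which hierarchical models borrow strength across regions: precise regional estimates are left nearly alone, while poorly constrained ones are pulled toward the pooled mean.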
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Directory of Open Access Journals (Sweden)
Oluwaseun Egbelowo
2017-05-01
We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
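For the simplest one-compartment piece of such a PK model, first-order elimination dC/dt = -kC, the NSFD construction replaces the step size h in the forward-Euler denominator with a denominator function φ(h); choosing φ(h) = (1 - e^(-kh))/k makes the scheme exact at the grid points and positive for any step size. This is a generic NSFD illustration, not the paper's full PK-PD system:

```python
import math

def nsfd_decay(c0, k, h, n_steps):
    """Nonstandard finite difference for dC/dt = -k*C:
    (C[n+1] - C[n]) / phi(h) = -k * C[n] with phi(h) = (1 - exp(-k*h)) / k.
    The update collapses to C[n+1] = C[n] * exp(-k*h), i.e. exact at nodes."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(n_steps):
        c -= phi * k * c
    return c

def euler_decay(c0, k, h, n_steps):
    """Standard forward Euler for comparison; for k*h > 1 the update
    factor (1 - k*h) is negative, so concentrations go negative."""
    c = c0
    for _ in range(n_steps):
        c -= h * k * c
    return c

c_nsfd = nsfd_decay(100.0, k=0.5, h=3.0, n_steps=4)    # positive and exact
c_euler = euler_decay(100.0, k=0.5, h=3.0, n_steps=4)  # update factor is -0.5
```

This is what "dynamically consistent" means in the abstract: the discrete scheme preserves the positivity and monotone decay of the continuous model for every step size, which standard Euler does not.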
3D Face modeling using the multi-deformable method.
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-09-25
In this paper, we focus on the problem of the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.
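The seamless-cloning step is gradient-domain blending: inside the pasted region the result keeps the source's gradients while matching the target at the region boundary, which reduces to a discrete Poisson (Laplace) solve. A minimal 1-D sketch with made-up signals (the paper does this in 2-D on face textures):

```python
import numpy as np

target = np.linspace(10.0, 20.0, 12)                 # background signal
source = 5.0 * np.sin(np.linspace(0.0, np.pi, 12))   # patch to clone in
lo, hi = 3, 9                                        # paste region: indices lo..hi-1

n = hi - lo
# Tridiagonal discrete-Laplacian system A f = b over the interior,
# with Dirichlet boundary values taken from the target.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
lap_src = source[lo + 1:hi + 1] - 2.0 * source[lo:hi] + source[lo - 1:hi - 1]
b = lap_src.copy()
b[0] -= target[lo - 1]   # move the known boundary values to the RHS
b[-1] -= target[hi]

blend = target.copy()
blend[lo:hi] = np.linalg.solve(A, b)   # source gradients, target boundary
```

The blended signal agrees with the target outside the region and reproduces the source's second differences inside it, so no seam appears at the boundary.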
Thermal Efficiency Degradation Diagnosis Method Using Regression Model
International Nuclear Information System (INIS)
Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol
2011-01-01
This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can then be inverted to infer the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs that include various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix with the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant
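The pair of steps can be sketched with a linear model: the what-if step yields a sensitivity matrix mapping intrinsic degradation states to measured (superficial) states, and the inverse what-if step recovers the intrinsic state from plant measurements by least squares. Dimensions and numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# "What-if" step: a linear map fitted to cycle simulations,
# y ≈ A @ x (6 superficial measurements, 3 degradation modes).
A = rng.normal(size=(6, 3))

# A plant observation generated from an unknown degradation state.
x_true = np.array([0.20, -0.10, 0.05])
y_obs = A @ x_true + 1e-4 * rng.normal(size=6)   # small measurement noise

# "Inverse what-if" step: infer the intrinsic state from y_obs.
x_hat, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
```

With more measurements than degradation modes, the pseudo-inverse solution is overdetermined and averages out measurement noise, which is the practical reason for step 1 generating a rich set of simulated conditions.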
Dynamic model based on Bayesian method for energy security assessment
International Nuclear Information System (INIS)
Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga
2015-01-01
Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of dynamic indicator model for energy system development scenarios. • Expert judgement involvement using Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicators' variation taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of the changes to the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model with the influence of new factors on indicators using the Bayesian method
Two updating methods for dissipative models with non symmetric matrices
International Nuclear Information System (INIS)
Billet, L.; Moine, P.; Aubry, D.
1997-01-01
In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is that they use non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse sensitivity method is concerned, an error function based on the difference between right-hand calculated and measured eigenmode shapes and calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author)
A sediment graph model based on SCS-CN method
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
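The SCS-CN component rests on one standard relation: potential maximum retention S from the curve number, initial abstraction Ia = 0.2 S, and direct runoff depth Q = (P - Ia)² / (P - Ia + S) for P > Ia. A direct transcription in metric units (the storm and curve-number values are illustrative, not the Nagwan data):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from storm rainfall P (mm) via SCS-CN."""
    s = 25400.0 / cn - 254.0    # potential maximum retention (mm)
    ia = ia_ratio * s           # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0              # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(p_mm=100.0, cn=75)   # roughly 41 mm of runoff
```

The runoff depth feeds the IUSG convolution to produce the temporal sediment flow rate distribution; note how strongly Q responds to CN, which is one reason the power-law exponent β still dominates the sensitivity ranking in the paper.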
Automated Model Fit Method for Diesel Engine Control Development
Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.
2014-01-01
This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is
Fuzzy Clustering Methods and their Application to Fuzzy Modeling
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1999-01-01
Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow to predict system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate....... An illustrative synthetic example is analyzed, and prediction accuracy measures are compared between the different variants...
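A minimal fuzzy c-means iteration — the kind of clustering such rule-base identification typically builds on — can be shown on synthetic 1-D data. All data and initial centers below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 1-D groups, two clusters, fuzzifier m = 2.
x = np.concatenate([rng.normal(0.0, 0.3, 50),
                    rng.normal(5.0, 0.3, 50)])[:, None]
m = 2.0
centers = np.array([[1.0], [4.0]])      # initial guesses

for _ in range(30):
    # squared distances (n_points, n_clusters); tiny floor avoids 0-division
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
    inv = d2 ** (-1.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)     # membership degrees in [0, 1]
    um = u ** m
    centers = (um.T @ x) / um.sum(axis=0)[:, None]  # membership-weighted means

found = sorted(c for c, in centers)   # converged cluster centers
```

Each cluster center then anchors one fuzzy rule, and the membership functions u provide the rule firing strengths — which is why small rule-bases fall out of the clustering directly.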
Attitude Research in Science Education: Contemporary Models and Methods.
Crawley, Frank E.; Kobala, Thomas R., Jr.
1994-01-01
Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…
Approximating methods for intractable probabilistic models: Applications in neuroscience
DEFF Research Database (Denmark)
Højen-Sørensen, Pedro
2002-01-01
This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
Methods for teaching geometric modelling and computer graphics
Energy Technology Data Exchange (ETDEWEB)
Rotkov, S.I.; Faitel'son, Yu. Ts.
1992-05-01
This paper considers approaches to teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers, and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.
Vortex Tube Modeling Using the System Identification Method
Energy Technology Data Exchange (ETDEWEB)
Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)
2017-05-15
In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method, and the model utilized in this work to design the vortex tube is of ARX type (Auto-Regressive with eXogenous inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments obtained by changing the angle of the low-temperature-side throttle valve, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
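The ARX structure named in the abstract can be identified by ordinary least squares on lagged inputs and outputs. A self-contained sketch with made-up coefficients standing in for the valve-angle-to-temperature dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a known second-order ARX system:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + e[k]
a1, a2, b1 = 1.2, -0.36, 0.5          # stable: poles at 0.6, 0.6
N = 500
u = rng.normal(size=N)                 # excitation input (e.g., valve angle)
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + 1e-3 * rng.normal()

# Identification: stack lagged regressors and solve least squares.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # rows are k = 2..N-1
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)  # recovers (a1, a2, b1)
```

The stability check in the abstract corresponds to verifying that the roots of z² - a1·z - a2 lie inside the unit circle for the estimated coefficients.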
Large-signal modeling method for power FETs and diodes
Energy Technology Data Exchange (ETDEWEB)
Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi' an 710071 (China)
2009-06-01
Under a large-signal drive level, a frequency-domain black-box model of the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao
2017-01-01
Gently falling or fluttering leaves are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...
Optimization Models and Methods Developed at the Energy Systems Institute
N.I. Voropai; V.I. Zorkaltsev
2013-01-01
The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to optimization models of energy development in Russia, a software package intended for analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...
Modelling of Airship Flight Mechanics by the Projection Equivalent Method
Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte
2015-01-01
This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...
A discontinuous Galerkin method on kinetic flocking models
Tan, Changhui
2014-01-01
We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\delta$-singularity, and construct a high-order positivity-preserving scheme to solve kinetic flocking systems.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
A method for model identification and parameter estimation
International Nuclear Information System (INIS)
Bambach, M; Heinkenschloss, M; Herty, M
2013-01-01
We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
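The two-stage idea, identify the model first and then estimate its parameters, can be illustrated in miniature. The sketch below is a simplified stand-in, not the authors' experiment-design criterion: both candidate models and the noise-free observations are hypothetical, and model selection is reduced to comparing best-fit residuals.

```python
import numpy as np

# Two hypothetical one-parameter candidate models y = f_i(x; p).
def model1(x, p):
    return p * x        # linear constitutive law
def model2(x, p):
    return p * x**2     # quadratic constitutive law

candidates = [model1, model2]
inputs = np.linspace(0.1, 2.0, 20)  # prescribed system inputs

def best_fit_residual(model, x, y_obs):
    # Stage 2: least-squares parameter estimate for a model linear in p,
    # i.e. model(x, p) = p * g(x) with g(x) = model(x, 1).
    g = model(x, 1.0)
    p = np.sum(g * y_obs) / np.sum(g * g)
    return np.sum((model(x, p) - y_obs) ** 2)

# The "true" system is model2 with p = 1.5; observations are noise-free here.
y_obs = model2(inputs, 1.5)
residuals = [best_fit_residual(m, inputs, y_obs) for m in candidates]
best = int(np.argmin(residuals))
print(best)  # index of the identified model
```

The paper's contribution is stronger than this sketch: it chooses experiments that maximally separate model pairs and gives conditions under which the identification is provably correct despite measurement error.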
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Modelling viscoacoustic wave propagation with the lattice Boltzmann method.
Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen
2017-08-31
In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. LBM is a kind of microscopic method that models waves by tracking the evolution states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectrum ratio method, we can reveal the relationship between the quality factor Q and the parameter τ in LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by LBM coincide with those obtained by FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with traditional lattice methods. In addition, through a large number of experiments, we find that the relaxation time of LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.
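The spectrum ratio method mentioned above can be sketched independently of the LBM itself. Under the constant-Q assumption, amplitudes decay as A(f, t) = A0(f) exp(-πft/Q), so ln(A2/A1) = -πfΔt/Q and Q follows from the slope of the log spectral ratio versus frequency. All values below (source spectrum, Q, travel-time difference) are assumed synthetic data.

```python
import numpy as np

Q_true, dt = 50.0, 0.4             # quality factor and travel-time difference (s)
f = np.linspace(5.0, 60.0, 100)    # frequency band (Hz)
A0 = np.exp(-(f - 30.0)**2 / 400)  # arbitrary source amplitude spectrum
A1 = A0                            # spectrum at the first receiver
A2 = A0 * np.exp(-np.pi * f * dt / Q_true)  # attenuated spectrum at the second

# Fit ln(A2/A1) vs f with a line; the slope is -pi*dt/Q.
slope = np.polyfit(f, np.log(A2 / A1), 1)[0]
Q_est = -np.pi * dt / slope
print(Q_est)  # ≈ 50
```

In the paper's workflow the two spectra come from LBM wavefields computed with a given τ, and repeating this estimate over many τ values maps out the Q–τ relationship.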
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Generalized framework for context-specific metabolic model extraction methods
Directory of Open Access Journals (Sweden)
Semidán Robaina Estévez
2014-09-01
Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes, such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence indicates that only a subset of these reactions is active in a given context, including developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella, provides the means to better understand their functioning, highlights similarities and differences, and helps users in selecting the most suitable method for a given application.
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Method and apparatus for modeling, visualization and analysis of materials
Aboulhassan, Amal
2016-08-25
A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.
Model based methods and tools for process systems engineering
DEFF Research Database (Denmark)
Gani, Rafiqul
Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work... The use of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
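A minimal sketch of such a model-based estimate follows. The characteristic curves are hypothetical, not the authors' pilot pumps, and the method shown is the generic QP-curve approach: scale the nominal-speed curves with the standard affinity laws (Q ∝ n, H ∝ n², P ∝ n³) and invert the power curve at the estimated speed and shaft power.

```python
n0 = 1450.0  # nominal speed (rpm) of the hypothetical pump

def P_curve(Q):  # shaft power (kW) vs flow rate (l/s) at nominal speed
    return 2.0 + 0.12 * Q - 0.0008 * Q**2

def H_curve(Q):  # head (m) vs flow rate (l/s) at nominal speed
    return 30.0 - 0.002 * Q**2

def bisect(g, lo, hi, tol=1e-9):
    # simple root bracketing; assumes g(lo) and g(hi) differ in sign
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def estimate_operating_point(n, P_shaft):
    """Infer flow rate and head from the frequency converter's estimates of
    rotational speed n (rpm) and shaft power P_shaft (kW), via affinity laws."""
    g = lambda Q: P_curve(Q * n0 / n) * (n / n0)**3 - P_shaft
    Q = bisect(g, 1e-6, 70.0)                 # solve the scaled power curve for Q
    H = H_curve(Q * n0 / n) * (n / n0)**2     # head from the scaled QH curve
    return Q, H

Q, H = estimate_operating_point(n=1200.0, P_shaft=3.0)
print(Q, H)
```

No external flow or pressure sensor appears anywhere in this estimate, which is exactly the point of the model-based approach described in the abstract.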
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Applied systems ecology: models, data, and statistical methods
Energy Technology Data Exchange (ETDEWEB)
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Methods improvements incorporated into the SAPHIRE ASP models
International Nuclear Information System (INIS)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.
1995-01-01
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements
Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models
Marquette, Michele L.; Sognier, Marguerite A.
2013-01-01
An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
A Pansharpening Method Based on HCT and Joint Sparse Model
Directory of Open Access Journals (Sweden)
XU Ning
2016-04-01
A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed for further decreasing the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high quality results. Finally, the fused multispectral image is obtained by inverse transforms of the wavelet and HCT on the new lower frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 imagery indicate that the proposed method achieves remarkable results.
Continuum-Kinetic Models and Numerical Methods for Multiphase Applications
Nault, Isaac Michael
This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or mesoscopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.
Statistical learning modeling method for space debris photometric measurement
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
2016-03-01
Photometric measurement is an important way to identify space debris, but existing photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed that exploits the global consistency of the star image and uses the statistical information of star images to suppress measurement noise. First, the known stars in the star image are divided into training stars and testing stars. The training stars are then used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to assess the accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
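The train/test split with a least-squares photometric model can be sketched as follows. This is a hypothetical linear zero-point model on synthetic star magnitudes, not the paper's exact model: the magnitudes, noise level, and split are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical known stars: instrumental vs catalog magnitudes with 0.05 mag noise.
m_instr = rng.uniform(8.0, 14.0, 40)
m_cat = m_instr + 2.5 + rng.normal(0.0, 0.05, 40)  # assumed zero point of 2.5 mag

train, test_idx = np.arange(30), np.arange(30, 40)  # split the known stars

# Least-squares fit of the model m_cat = c0 * m_instr + c1 on the training stars.
A = np.column_stack([m_instr[train], np.ones(train.size)])
coef, *_ = np.linalg.lstsq(A, m_cat[train], rcond=None)

# Evaluate measurement accuracy on the testing stars.
pred = coef[0] * m_instr[test_idx] + coef[1]
rms = np.sqrt(np.mean((pred - m_cat[test_idx]) ** 2))
print(rms)
```

The held-out RMS plays the role of the paper's reported measurement accuracy; with 0.05 mag noise it lands near the noise floor.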
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
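Local linear regression, the function approximator both algorithms rely on, can be sketched in one dimension. The data, neighborhood size, and target function below are hypothetical; the actor-critic machinery itself is omitted.

```python
import numpy as np

def llr_predict(x_query, X, Y, k=10):
    """Local linear regression: fit an affine model to the k nearest samples
    of the query point and evaluate it at the query."""
    idx = np.argsort(np.abs(X - x_query))[:k]     # k nearest neighbors
    A = np.column_stack([X[idx], np.ones(k)])
    beta, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return beta[0] * x_query + beta[1]

rng = np.random.default_rng(2)
X = rng.uniform(-np.pi, np.pi, 200)               # sampled states
Y = np.sin(X) + rng.normal(0.0, 0.02, 200)        # noisy function values
pred = llr_predict(0.5, X, Y)
print(pred)  # close to sin(0.5) ≈ 0.479
```

Because each prediction is a small least-squares solve over a memory of samples, the approximator adapts as new data arrive, which is what makes it attractive for the online policy updates described in the abstract.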
Methods of mathematical modelling continuous systems and differential equations
Witelski, Thomas
2015-01-01
This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the R² value. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the R² value. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Curve fitting methods for solar radiation data modeling
International Nuclear Information System (INIS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-01-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the R² value. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
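The goodness-of-fit statistics used in these records (RMSE and R²) can be illustrated with a simple curve fit. The data below are synthetic, radiation-shaped values, and a polynomial fit stands in for the paper's Gaussian and sine fits; all numbers are assumed.

```python
import numpy as np

t = np.linspace(7.0, 19.0, 25)               # daylight hours
obs = 900.0 * np.exp(-(t - 13.0)**2 / 8.0)   # synthetic irradiance (W/m^2)
obs = obs + np.random.default_rng(3).normal(0.0, 20.0, t.size)  # measurement noise

coef = np.polyfit(t, obs, 4)                 # quartic curve fit
fit = np.polyval(coef, t)

# Goodness-of-fit statistics reported in the abstract.
rmse = np.sqrt(np.mean((obs - fit)**2))
ss_res = np.sum((obs - fit)**2)
ss_tot = np.sum((obs - obs.mean())**2)
r2 = 1.0 - ss_res / ss_tot
print(rmse, r2)
```

Comparing rmse and r2 across candidate fits (Gaussian, sine, polynomial, ...) is exactly the selection procedure the abstract describes.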
Discrete gradient methods for solving variational image regularisation models
International Nuclear Information System (INIS)
Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B
2017-01-01
Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
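The dissipation property described here can be checked on a toy convex energy. The sketch below uses the Gonzalez mean-value discrete gradient (one common choice, assumed here for a quadratic functional, not the paper's imaging energies) and shows the monotonic energy decrease even for a very large step size:

```python
import numpy as np

# Discrete-gradient step for the gradient flow x' = -grad V with the quadratic
# energy V(x) = 0.5 x^T A x + b^T x. The Gonzalez mean-value discrete gradient
# is exact here: dV(x, y) = A (x + y)/2 + b, so the implicit step
# x+ = x - tau * dV(x, x+) is a linear solve, and V decreases monotonically
# for ANY tau > 0 -- the dissipation property the paper exploits.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M @ M.T + np.eye(20)              # symmetric positive definite
b = rng.normal(size=20)
V = lambda x: 0.5 * x @ A @ x + b @ x

tau = 10.0                             # deliberately huge step size
x = rng.normal(size=20)
I = np.eye(20)
energies = [V(x)]
for _ in range(50):
    # (I + tau A / 2) x+ = (I - tau A / 2) x - tau b
    x = np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ x - tau * b)
    energies.append(V(x))

print(f"V: {energies[0]:.2f} -> {energies[-1]:.2f}")
```

An explicit gradient step with the same `tau` would diverge; the guaranteed decrease regardless of step size is exactly the "more work never gives a less optimal solution" point of the abstract.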
A meshless method for modeling convective heat transfer
Energy Technology Data Exchange (ETDEWEB)
Carrington, David B [Los Alamos National Laboratory
2010-01-01
A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
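A minimal sketch of the meshless idea, using RBF collocation for a 1D Poisson problem with multiquadric basis functions. The shape parameter and node count are illustrative choices, and this toy is not the Los Alamos projection solver itself:

```python
import numpy as np

# Kansa-style RBF collocation for u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# using multiquadric basis functions phi(x, c) = sqrt(1 + (eps (x - c))^2).
# No mesh is needed: only a scattered set of centers.
eps = 5.0
phi   = lambda x, c: np.sqrt(1.0 + (eps * (x - c)) ** 2)
phixx = lambda x, c: eps ** 2 / (1.0 + (eps * (x - c)) ** 2) ** 1.5

centers = np.linspace(0.0, 1.0, 20)
interior = centers[1:-1]
f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)     # exact solution u = sin(pi x)

# Collocation: enforce the PDE at interior nodes, Dirichlet BCs at the ends.
A = np.vstack([
    phixx(interior[:, None], centers[None, :]),
    phi(np.array([[0.0], [1.0]]), centers[None, :]),
])
rhs = np.concatenate([f(interior), [0.0, 0.0]])
w = np.linalg.lstsq(A, rhs, rcond=None)[0]        # lstsq guards against ill-conditioning

xs = np.linspace(0.0, 1.0, 101)
u = phi(xs[:, None], centers[None, :]) @ w
err = np.max(np.abs(u - np.sin(np.pi * xs)))
print(f"max error vs exact solution: {err:.2e}")
```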
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research
Evaluation of radiological processes in the Ternopil region by the box model method
Directory of Open Access Journals (Sweden)
І.В. Матвєєва
2006-02-01
Flows of the radionuclide Sr-90 in the ecosystem of Kotsubinchiky village, Ternopil oblast, were analyzed. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This made it possible to evaluate how internal-irradiation dose loadings form for various population groups (working adults, retirees, and children) and to predict the dynamics of these loadings over the years following the Chernobyl accident.
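The box-model idea can be sketched with a hypothetical two-compartment Sr-90 system. The half-life is physical; the transfer rates are invented placeholders, not the Kotsubinchiky ecosystem parameters:

```python
import numpy as np

# Two-box sketch (soil -> vegetation) of a radionuclide flow model for Sr-90.
# k_sv and k_loss are hypothetical transfer rates chosen for illustration.
half_life = 28.8                        # Sr-90 half-life, years
lam = np.log(2) / half_life             # radioactive decay constant, 1/y
k_sv, k_loss = 0.05, 0.5                # assumed transfer rates, 1/y

soil, veg = 100.0, 0.0                  # initial inventories, arbitrary units
dt = 0.01                               # time step, years
for _ in range(int(30 / dt)):           # 30 years after deposition
    d_soil = -(lam + k_sv) * soil
    d_veg = k_sv * soil - (lam + k_loss) * veg
    soil, veg = soil + dt * d_soil, veg + dt * d_veg

print(f"after 30 y: soil = {soil:.1f}, vegetation = {veg:.2f}")
```

Dose loadings for a population group would then follow from the compartment inventories via intake and dose-conversion factors, one linear stage further down the block scheme.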
The Langevin method and Hubbard-like models
International Nuclear Information System (INIS)
Gross, M.; Hamber, H.
1989-01-01
The authors reexamine the difficulties associated with application of the Langevin method to numerical simulation of models with non-positive definite statistical weights, including the Hubbard model. They show how to avoid the violent crossing of the zeroes of the weight and how to move those nodes away from the real axis. However, it still appears necessary to keep track of the sign (or phase) of the weight
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
An alternative method for centrifugal compressor loading factor modelling
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by quality. The application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models.
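The proposed normalisation of approximation error by the original model's own stochastic variability can be sketched with stand-in models: a Reed-Frost chain-binomial epidemic as the "original" individual-level model and its deterministic mean-field map as the "approximation" (both assumptions here, not the vector-borne models of the paper):

```python
import numpy as np

# Test statistic: the approximation's bias, in units of the stochastic
# model's own standard deviation at each time step.
rng = np.random.default_rng(2)
N, I0, p, T, reps = 200, 5, 0.01, 25, 1000

# "Original" model: stochastic Reed-Frost epidemic, many replicates.
traj = np.zeros((reps, T))
for r in range(reps):
    S, I = N - I0, I0
    for t in range(T):
        traj[r, t] = I
        new_inf = rng.binomial(S, 1.0 - (1.0 - p) ** I)
        S, I = S - new_inf, new_inf

# "Approximation": the deterministic mean-field version of the same recursion.
s, i = float(N - I0), float(I0)
det = np.zeros(T)
for t in range(T):
    det[t] = i
    new = s * (1.0 - (1.0 - p) ** i)
    s, i = s - new, new

mean, std = traj.mean(axis=0), traj.std(axis=0)
mask = std > 0.5                       # ignore times with (almost) no variability
z = np.abs(det - mean)[mask] / std[mask]
print(f"max |bias| / std over time: {z.max():.2f}")
```

A normalised error well below 1 means the approximation's bias is hidden inside the original model's noise; values much above 1 would invalidate it in the sense of the paper.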
Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to modeling that requires multi-person collaboration, separation of roles, and even real-time remote synchronization.
Annular dispersed flow analysis model by Lagrangian method and liquid film cell method
International Nuclear Information System (INIS)
Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.
2003-01-01
A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior are analyzed simultaneously. Droplet behavior in turbulent flow is analyzed by the Lagrangian method with a refined stochastic model, while liquid film behavior is simulated by the boundary condition of a moving rough wall and a liquid film cell model, which is used to estimate the liquid film flow rate. The height of the moving rough wall is estimated by a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate is calculated by considering droplet deposition and entrainment flow rates; the deposition flow rate is calculated by the Lagrangian method and the entrainment flow rate by an entrainment correlation. For verification of the moving rough wall model, turbulent flow analysis results under annular flow conditions were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental radial distributions of droplet mass flux were compared with analysis results: the agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions, but after modifying the entrainment rate correlation the agreement becomes good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior is sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment ratio correlation should also be corrected
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
Methods and models used in comparative risk studies
International Nuclear Information System (INIS)
Devooght, J.
1983-01-01
Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or of value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in presence of uncertainty and multiple objectives. Purpose and prospect of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr
Toric Lego: A method for modular model building
Balasubramanian, Vijay; García-Etxebarria, Iñaki
2010-01-01
Within the context of local type IIB models arising from branes at toric Calabi-Yau singularities, we present a systematic way of joining any number of desired sectors into a consistent theory. The different sectors interact via massive messengers with masses controlled by tunable parameters. We apply this method to a toy model of the minimal supersymmetric standard model (MSSM) interacting via gauge mediation with a metastable supersymmetry breaking sector and an interacting dark matter sector. We discuss how a mirror procedure can be applied in the type IIA case, allowing us to join certain intersecting brane configurations through massive mediators.
Modelling across bioreactor scales: methods, challenges and limitations
DEFF Research Database (Denmark)
Gernaey, Krist
that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what......Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...
Novel extrapolation method in the Monte Carlo shell model
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing the energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
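The central observation behind such extrapolations, that the energy of an approximate eigenstate approaches the exact eigenvalue linearly in the energy variance, can be illustrated on a small random symmetric matrix (a stand-in for a shell-model Hamiltonian, not the paper's formulation for deformed Slater determinants):

```python
import numpy as np

# For a trial state v, E(v) - E0 and the variance <H^2> - <H>^2 both vanish
# quadratically in the error of v, so E is asymptotically linear in the
# variance and a linear fit extrapolated to zero variance estimates E0.
rng = np.random.default_rng(5)
M = rng.normal(size=(60, 60))
H = (M + M.T) / 2                      # toy symmetric "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
E0, v0 = evals[0], evecs[:, 0]         # exact ground state for reference

noise = rng.normal(size=60)            # fixed error direction for the trial states
Es, Vs = [], []
for eps in (0.005, 0.010, 0.015, 0.020):
    v = v0 + eps * noise
    v /= np.linalg.norm(v)
    E = v @ H @ v
    Es.append(E)
    Vs.append(v @ H @ (H @ v) - E ** 2)   # energy variance <H^2> - <H>^2

slope, intercept = np.polyfit(Vs, Es, 1)
print(f"exact E0 = {E0:.4f}, zero-variance extrapolation = {intercept:.4f}")
```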
Moments Method for Shell-Model Level Density
International Nuclear Information System (INIS)
Zelevinsky, V; Horoi, M; Sen'kov, R A
2016-01-01
The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to the results different from those obtained by the mean-field combinatorics. (paper)
Methods improvements incorporated into the SAPHIRE ASP models
International Nuclear Information System (INIS)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.
1994-01-01
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements
Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
Modelling of Granular Materials Using the Discrete Element Method
DEFF Research Database (Denmark)
Ullidtz, Per
1997-01-01
With the Discrete Element Method it is possible to model materials that consist of individual particles, where a particle may roll or slide on other particles. This is interesting because most of the deformation in granular materials is due to rolling or sliding rather than compression of the grains...
Moderation instead of modelling: some arguments against formal engineering methods
Rauterberg, G.W.M.; Sikorski, M.; Rauterberg, G.W.M.
1998-01-01
The more formal the engineering techniques used are, the fewer non-technical facts can be captured. Several business process reengineering and software development projects fail because project management concentrates too much on formal methods and modelling approaches. A successful change of
The research methods and model of protein turnover in animal
International Nuclear Information System (INIS)
Wu Xilin; Yang Feng
2002-01-01
The author discusses the concept of and research methods for protein turnover in the animal body. The existing problems and recent research results on animal protein turnover are presented. Measures to improve models of animal protein turnover are also analyzed.
Methods and models for the construction of weakly parallel tests
Adema, J.J.; Adema, Jos J.
1992-01-01
Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are
Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.
Beck, Kirk A.
2005-01-01
This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…
Heat bath method for the twisted Eguchi-Kawai model
International Nuclear Information System (INIS)
Fabricius, K.; Haan, O.
1984-01-01
We reformulate the twisted Eguchi-Kawai model in a way that allows us to use the heat bath method for the updating procedure of the link matrices. This new formulation is more efficient by a factor of 2.5 in computer time and 2.3 in memory need. (orig.)
Methods and models for the construction of weakly parallel tests
Adema, J.J.; Adema, Jos J.
1990-01-01
Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programming model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information
Arctic curves in path models from the tangent method
Di Francesco, Philippe; Lapa, Matthew F.
2018-04-01
Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
Application of the simplex method of linear programming model to ...
African Journals Online (AJOL)
This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
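A minimal example of the kind of profit-maximisation LP described, with coefficients invented for illustration (not Saclux data), solved here with SciPy's `linprog` rather than a hand-coded simplex:

```python
from scipy.optimize import linprog

# Hypothetical two-product paint firm:
#   maximise 5 x1 + 4 x2          (profit per unit of each product)
#   s.t.     6 x1 + 4 x2 <= 24    (raw material available)
#            1 x1 + 2 x2 <= 6     (labour hours available)
#            x1, x2 >= 0
# linprog minimises, so the objective is negated.
res = linprog(c=[-5, -4], A_ub=[[6, 4], [1, 2]], b_ub=[24, 6],
              bounds=[(0, None), (0, None)])
print(f"optimal plan x = {res.x.round(3)}, profit = {-res.fun:.2f}")
```

Perturbing a profit coefficient or a resource bound and re-solving shows the sensitivity effect the abstract mentions: the optimal plan and profit shift with the data.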
Accident Analysis Methods and Models — a Systematic Literature Review
Wienen, Hans Christian Augustijn; Bukhsh, Faiza Allah; Vriezekolk, E.; Wieringa, Roelf J.
2017-01-01
As part of our co-operation with the Telecommunication Agency of the Netherlands, we want to formulate an accident analysis method and model for use in incidents in telecommunications that cause service unavailability. In order to not re-invent the wheel, we wanted to first get an overview of all
Modelling of Airship Flight Mechanics by the Projection Equivalent Method
Directory of Open Access Journals (Sweden)
Frantisek Jelenciak
2015-12-01
This article describes the projection equivalent method (PEM) as a specific and relatively simple approach to modelling aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of the method is based on applying Newton's mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out wind-tunnel measurements to identify the model's parameters. Plausible dynamical behaviour of the model can be achieved through specific correction parameters, which can be determined from experimental data obtained during flight. In this article, we present the PEM as applied to an airship, as well as a comparison of the data calculated by the PEM with experimental flight data.
Method for modeling post-mortem biometric 3D fingerprints
Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.
2016-05-01
Despite the advancements of fingerprint recognition in the 2-D and 3-D domains, authenticating deformed/post-mortem fingerprints continues to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully manipulated by an examiner to record the fingerprint impression, and this process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled-equivalent fingerprint for automated verification. The presented modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as palm, foot, and tongue for security and administrative applications.
Computational Methods for Modeling Aptamers and Designing Riboswitches
Directory of Open Access Journals (Sweden)
Sha Gong
2017-11-01
Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.
Review: Optimization methods for groundwater modeling and management
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Acoustic 3D modeling by the method of integral equations
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
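The FFT-accelerated matrix-vector product mentioned here relies on the translation invariance of the free-space kernel, which makes the system matrix (block) Toeplitz. A 1D sketch with a toy kernel (not the paper's acoustic Green's function) shows the circulant-embedding trick:

```python
import numpy as np

# For a translation-invariant kernel the matrix is Toeplitz,
# A[i, j] = k[i - j], so A @ x is a convolution: embed A in a circulant of
# size 2n - 1 and use the FFT for an O(n log n) matrix-vector product.
rng = np.random.default_rng(3)
n = 256
kernel = 1.0 / (1.0 + np.arange(-(n - 1), n) ** 2)   # toy kernel, length 2n - 1
x = rng.normal(size=n)

# Dense reference: A[i, j] = kernel[i - j + n - 1]
A = np.array([[kernel[i - j + n - 1] for j in range(n)] for i in range(n)])
ref = A @ x

# Circulant embedding: first column of the circulant, then FFT convolution.
c = np.concatenate([kernel[n - 1:], kernel[:n - 1]])
xpad = np.concatenate([x, np.zeros(n - 1)])
y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xpad)).real[:n]

print(f"max |FFT matvec - dense matvec| = {np.max(np.abs(y - ref)):.2e}")
```

Inside a Krylov iterative solver this matvec replaces the dense product, which is what keeps the memory and complexity of the dense IE matrix tolerable.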
An efficient method for model refinement in diffuse optical tomography
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem which necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, so model-refinement criteria, especially total least squares (TLS), must be applied to treat the model error. The use of TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes abnormalities well.
A new method to determine the number of experimental data using statistical modeling methods
Energy Technology Data Exchange (ETDEWEB)
Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)
2017-06-15
For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
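The area metric used as the convergence criterion can be read as the area between the sample's empirical CDF and the fitted model's CDF. A small sketch under illustrative assumptions (a normal fit to synthetic data; the SSM/KDE machinery of the paper is not reproduced):

```python
import math
import numpy as np

# Area metric: area between the empirical CDF of a sample and the CDF of a
# distribution fitted to it. As more data are collected, the metric shrinks,
# which is the convergence behavior exploited above. Synthetic data only.

def area_metric(sample):
    mu, sigma = sample.mean(), sample.std(ddof=1)
    xs = np.linspace(sample.min() - 3 * sigma, sample.max() + 3 * sigma, 4000)
    emp = np.searchsorted(np.sort(sample), xs, side="right") / len(sample)
    model = np.array([0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2))))
                      for v in xs])
    return np.sum(np.abs(emp - model)) * (xs[1] - xs[0])   # rectangle rule

rng = np.random.default_rng(7)
small = area_metric(rng.normal(50.0, 5.0, 20))     # few data: large mismatch
large = area_metric(rng.normal(50.0, 5.0, 2000))   # many data: small mismatch
assert large < small
```

A stopping rule would collect data until successive values of the metric fall below a chosen tolerance.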
Models and methods for hot spot safety work
DEFF Research Database (Denmark)
Vistisen, Dorte
2002-01-01
Despite the fact that millions of DKK are spent each year on improving road safety in Denmark, funds for traffic safety are limited. It is therefore vital to spend the resources as effectively as possible. This thesis is concerned with the area of traffic safety denoted "hot spot safety work", which...... is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The models and methods presently applied in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software...... and statistical methods less developed. The purpose of this thesis is to contribute to improving the state of the art in Denmark. The basis for systematic hot spot safety work is the models describing the variation in accident counts on the road network. In the thesis, hierarchical models disaggregated on time
A Modeling Method of Fluttering Leaves Based on Point Cloud
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Computational mathematics models, methods, and analysis with Matlab and MPI
White, Robert E
2004-01-01
Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...
Model of coupling with core in the Green function method
International Nuclear Information System (INIS)
Kamerdzhiev, S.P.; Tselyaev, V.I.
1983-01-01
Core-coupling models in the Green function method are considered. These generalize the conventional random-phase approximation, i.e. they take into account configurations more complex than the one-particle-one-hole (1p1h) ones. Odd nuclei are studied only to the extent that the problem for the odd nucleus is reduced to that for the even-even nucleus. A microscopic model is considered that accounts for retardation effects in the mass operator M = M(epsilon); it corresponds to accounting only for the influence of these effects on the change of quasiparticle behaviour in a magic nucleus, as compared with the behaviour described by the pure core model. This change results in the fragmentation of single-particle levels, which is the main effect, and in the necessity of using a new basis instead of the shell-model one, which corresponds to bare quasiparticles. The derivation of the formulas does not rely on a concrete form of the mass operator M(epsilon).
Developing energy forecasting model using hybrid artificial intelligence method
Institute of Scientific and Technical Information of China (English)
Shahram Mollaiy-Berneti
2015-01-01
An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate energy resources in an optimal manner without an accurate demand estimate. A new energy forecasting model was proposed based on a back-propagation (BP) neural network and the imperialist competitive algorithm. The proposed method combines the local search ability of the BP technique with the global search ability of the imperialist competitive algorithm. Two types of empirical data, regarding energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010, were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of a conventional back-propagation neural network, with a lower mean absolute error.
Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics
Directory of Open Access Journals (Sweden)
Fernando Guilherme Silvano Lobo Pimentel
2011-06-01
Full Text Available Management decision making methods frequently adopt quantitative models of several criteria that bypass the question of why some criteria are considered more important than others, which makes more difficult the task of delivering a transparent view of preference structure priorities that might promote ethics and learning and serve as a basis for future decisions. To tackle this particular shortcoming of usual methods, an alternative qualitative methodology of aggregating preferences based on the ranking of criteria is proposed. Such an approach delivers a simple and transparent model for the solution of each preference conflict faced during the management decision making process. The method proceeds by breaking the decision problem into 'two criteria - two alternatives' scenarios, and translating the problem of choice between alternatives to a problem of choice between criteria whenever appropriate. The unicriterion model method is illustrated by its application in a car purchase and a house purchase decision problem.
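The 'two criteria - two alternatives' reduction can be sketched as a lexicographic comparison over ranked criteria; the car-purchase scores and criteria names below are hypothetical examples, not taken from the paper:

```python
# Minimal sketch of the unicriterion reduction described above: whenever two
# alternatives conflict, the higher-ranked criterion decides the choice.
# Criteria names, ranking, and scores are hypothetical examples.

def choose(alternatives, scores, criteria_ranked):
    """Pick between two alternatives by lexicographic comparison over ranked criteria."""
    a, b = alternatives
    for crit in criteria_ranked:
        if scores[a][crit] != scores[b][crit]:
            return a if scores[a][crit] > scores[b][crit] else b
    return a  # complete tie: keep the first alternative

scores = {
    "car_A": {"price": 3, "safety": 5},
    "car_B": {"price": 5, "safety": 4},
}
# The ranking of criteria, not a weighted sum, resolves the conflict:
assert choose(("car_A", "car_B"), scores, ["safety", "price"]) == "car_A"
assert choose(("car_A", "car_B"), scores, ["price", "safety"]) == "car_B"
```

Making the ranking explicit is what keeps the preference structure transparent and auditable.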
Dynamic modeling method for infrared smoke based on enhanced discrete phase model
Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo
2018-03-01
The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method
Directory of Open Access Journals (Sweden)
Jure Tuta
2018-03-01
Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
Model parameterization as method for data analysis in dendroecology
Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita
2017-04-01
There is no arguing the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin (VS) model allows estimation of the non-linear response of tree-ring growth based on daily climate data: daily temperature, estimated daylight and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, such models are mostly used one-sidedly, to better understand different tree growth processes, as opposed to statistical methods of analysis (e.g. generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecasting. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the value of climate factors limiting growth. Precise parameterization of the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growth season onset, length of season, minimal/maximum temperature values for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Modeling of radionuclide migration through porous material with meshless method
International Nuclear Information System (INIS)
Vrankar, L.; Turk, G.; Runovc, F.
2005-01-01
To assess the long term safety of a radioactive waste disposal system, mathematical models are used to describe groundwater flow, chemistry and potential radionuclide migration through geological formations. A number of processes need to be considered when predicting the movement of radionuclides through the geosphere. The most important input data are obtained from field measurements, which are not completely available for all regions of interest. For example, the hydraulic conductivity as an input parameter varies from place to place. In such cases geostatistical science offers a variety of spatial estimation procedures. Methods for solving the solute transport equation can also be classified as Eulerian, Lagrangian and mixed. The numerical solution of partial differential equations (PDE) is usually obtained by finite difference methods (FDM), finite element methods (FEM), or finite volume methods (FVM). Kansa introduced the concept of solving partial differential equations using radial basis functions (RBF) for hyperbolic, parabolic and elliptic PDEs. Our goal was to present a relatively new approach to the modelling of radionuclide migration through the geosphere using radial basis function methods in Eulerian and Lagrangian coordinates. Radionuclide concentrations were also calculated in heterogeneous and partly heterogeneous 2D porous media. We compared the meshless method with the traditional finite difference scheme. (author)
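Kansa's RBF collocation idea can be illustrated on a one-dimensional model problem; the multiquadric basis, shape parameter, and Poisson test case below are illustrative choices, far simpler than the transport equations treated in the paper:

```python
import numpy as np

# Kansa-style (meshless) RBF collocation for u'' = f on [0, 1] with
# u(0) = u(1) = 0. Multiquadric centers coincide with collocation nodes;
# interior rows enforce the PDE, boundary rows the Dirichlet conditions.

def solve_poisson_rbf(n=25, c=0.2):
    x = np.linspace(0.0, 1.0, n)             # nodes double as RBF centers
    r2 = (x[:, None] - x[None, :]) ** 2
    phi = np.sqrt(r2 + c * c)                # multiquadric phi(r)
    phi_xx = c * c / (r2 + c * c) ** 1.5     # exact d^2 phi / dx^2
    A = phi_xx.copy()
    rhs = -np.pi ** 2 * np.sin(np.pi * x)    # f chosen so u_exact = sin(pi x)
    A[0], A[-1] = phi[0], phi[-1]            # replace boundary rows
    rhs[0] = rhs[-1] = 0.0
    coeffs = np.linalg.solve(A, rhs)
    return x, phi @ coeffs                   # solution evaluated at the nodes

x, u = solve_poisson_rbf()
assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2
```

No mesh connectivity is ever built: only pairwise distances between scattered nodes enter the system, which is the practical appeal of the meshless approach.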
CAD-based automatic modeling method for Geant4 geometry model through MCAM
International Nuclear Information System (INIS)
Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.
2013-01-01
The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described using either the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some are not accurate or are adapted only to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a computer-aided design (CAD) based modeling method for Geant4 was developed. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)
Evaluation of internal noise methods for Hotelling observer models
International Nuclear Information System (INIS)
Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.
2007-01-01
The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality
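The decision-variable internal-noise strategy can be sketched in a few lines: zero-mean Gaussian noise, scaled relative to the external-noise standard deviation of the decision variable, is added before the forced-choice comparison. The d' value and trial counts below are hypothetical, and the full channelized-observer pipeline is omitted:

```python
import numpy as np

# Decision-variable internal noise for a model observer, sketched for a 2AFC
# task: added internal noise degrades proportion correct toward chance, which
# is what brings ideal-observer performance down to human levels.

rng = np.random.default_rng(1)
n_trials, d_prime = 50_000, 1.5
# Decision variables limited by external noise only (unit variance).
dv_present = rng.standard_normal(n_trials) + d_prime
dv_absent = rng.standard_normal(n_trials)

def percent_correct(internal_scale):
    # Internal noise: zero mean, std proportional to the external
    # decision-variable std (here 1.0), drawn independently per interval.
    s = dv_present + internal_scale * rng.standard_normal(n_trials)
    a = dv_absent + internal_scale * rng.standard_normal(n_trials)
    return np.mean(s > a)   # correct when the signal interval scores higher

assert percent_correct(0.0) > percent_correct(1.0) > 0.5
```

Channel-level internal noise works the same way, except the perturbation is applied to each channel response before the template is applied.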
A Review of Distributed Parameter Groundwater Management Modeling Methods
Gorelick, Steven M.
1983-04-01
Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulic management, and policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
Storm surge model based on variational data assimilation method
Directory of Open Access Journals (Sweden)
Shi-li Huang
2010-06-01
Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
Coarse Analysis of Microscopic Models using Equation-Free Methods
DEFF Research Database (Denmark)
Marschler, Christian
of these models might be high-dimensional, the properties of interest are usually macroscopic and low-dimensional in nature. Examples are numerous and not necessarily restricted to computer models. For instance, the power output, energy consumption and temperature of engines are interesting quantities....... Applications include the learning behavior in the barn owl's auditory system, traffic jam formation in an optimal velocity model for circular car traffic and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods not only quantify interesting properties...... in these models (learning outcome, traffic jam density, oscillation period), but also allow the investigation of unstable solutions, which provide important information for determining basins of attraction of stable solutions and thereby reveal information on the long-term behavior of an initial state....
Numerical methods for the Lévy LIBOR model
DEFF Research Database (Denmark)
Papapantoleon, Antonis; Skovmand, David
2010-01-01
but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike the standard method, evolved independently of the other rates in the term structure....... This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.......The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates
Numerical Methods for the Lévy LIBOR Model
DEFF Research Database (Denmark)
Papapantoleon, Antonis; Skovmand, David
are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike the standard method, evolved independently of the other rates in the term structure. This enables...... simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.......The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates but the methods...
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination results in a precision improvement over conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
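The additive Holt-Winters method named above can be sketched as follows; the smoothing constants and the synthetic periodic series (standing in for the residual between an analytical theory and the true orbit) are illustrative assumptions:

```python
import numpy as np

# Additive Holt-Winters: exponential smoothing of level, trend, and a set of
# seasonal indices, then extrapolation. Suited to residual series with a
# persistent periodic component, as in the hybrid propagator described above.

def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=1):
    level = y[:period].mean()
    trend = (y[period:2 * period].mean() - y[:period].mean()) / period
    season = list(y[:period] - level)          # initial seasonal indices
    for t in range(period, len(y)):
        s = season[t - period]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
        level = new_level
    return np.array([level + (h + 1) * trend + season[len(y) - period + h % period]
                     for h in range(horizon)])

t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)   # trend + seasonal "residual"
forecast = holt_winters_additive(series[:108], period=12, horizon=12)
assert np.max(np.abs(forecast - series[108:])) < 0.3
```

In the hybrid scheme, such a forecast of the theory-minus-truth residual is added back onto the analytical propagation to correct it.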
Soybean yield modeling using bootstrap methods for small samples
Energy Technology Data Exchange (ETDEWEB)
Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.
2016-11-01
One of the problems that occur when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters, and to identify the points that had great influence on the estimated parameters. (Author)
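The nonparametric bootstrap behind such confidence intervals can be sketched on a synthetic small-sample regression; the data below are illustrative, not the soybean data set:

```python
import numpy as np

# Nonparametric bootstrap for a regression slope: resample (x, y) pairs with
# replacement, refit, and take percentiles of the resampled slopes as a
# confidence interval. No distributional assumption on the errors is needed.

rng = np.random.default_rng(42)
n = 15                                     # deliberately small sample
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, n)  # true slope is 2.0

def slope(xs, ys):
    return np.polyfit(xs, ys, 1)[0]

boot = np.array([slope(x[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap interval
# The interval brackets the full-sample estimate (and typically the truth).
assert lo < slope(x, y) < hi
```

The same resampling loop supports influential-point diagnostics: observations whose removal shifts the resampled estimates strongly are flagged.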
A hierarchical network modeling method for railway tunnels safety assessment
Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin
2017-02-01
Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident-causation network modeling methods, which can provide guidance for specific maintenance measures.
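A toy version of the Apriori pass used for mining defect correlations, with made-up defect labels; the paper's improved pruning is not reproduced:

```python
# Plain Apriori: grow frequent itemsets level by level, keeping only those
# whose support (count of transactions containing them) meets the threshold.
# The tunnel-defect labels in the demo transactions are invented examples.

def apriori(transactions, min_support):
    items = {frozenset([i]) for t in transactions for i in t}
    freq = {}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    while level:
        freq.update({s: sum(s <= t for t in transactions) for s in level})
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        level = {c for c in candidates
                 if sum(c <= t for t in transactions) >= min_support}
    return freq

data = [frozenset(t) for t in (
    {"lining_crack", "water_leak"},
    {"lining_crack", "water_leak", "void"},
    {"water_leak", "void"},
    {"lining_crack", "water_leak"},
)]
freq = apriori(data, min_support=3)
# The crack/leak pair co-occurs in 3 of 4 inspections, so it survives pruning.
assert freq[frozenset({"lining_crack", "water_leak"})] == 3
```

Frequent pairs like this are exactly the raw material from which the risk-related regularities table is assembled.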
A Kriging Model Based Finite Element Model Updating Method for Damage Detection
Directory of Open Access Journals (Sweden)
Xiuming Yang
2017-10-01
Full Text Available Model updating is an effective means of damage identification and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF is rarely used as it usually changes dramatically with updating parameters. This paper presents a new surrogate model based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO algorithm is introduced to get the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
Character expansion methods for matrix models of dually weighted graphs
International Nuclear Information System (INIS)
Kazakov, V.A.; Staudacher, M.; Wynter, T.
1996-01-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problem of phase transitions from random to flat lattices. (orig.). With 4 figs
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that can be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1], a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) that was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research on the radiative properties of the martian polar caps, we have begun developing a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
A Method to Identify Flight Obstacles on Digital Surface Model
Institute of Scientific and Technical Information of China (English)
ZHAO Min; LIN Xinggang; SUN Shouyu; WANG Youzhi
2005-01-01
For modern low-altitude terrain-following guidance, a method of constructing a digital surface model (DSM) is presented to reduce the threat that tall surface features pose to flying vehicles and so ensure safe flight. The relationship between the size of an isolated obstacle and the vertical- and cross-section intervals of the DSM is established. A definition and classification of isolated obstacles are proposed, and a method for identifying such obstacles in the DSM is given. Simulation of a typical urban district shows that with vertical- and cross-section DSM intervals between 3 m and 25 m, the threat to low-altitude terrain-following flight is greatly reduced, and the amount of data the DSM requires for monitoring a flying vehicle in real time is also smaller. Experiments show that the optimal result is an interval of 12.5 m in the vertical- and cross-sections, with a 1:10 000 DSM scale grade.
Impacts modeling using the SPH particulate method. Case study
International Nuclear Information System (INIS)
Debord, R.
1999-01-01
The aim of this study is the modeling of the impact of molten metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it must be coupled with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct. The domain of use of these coefficients was specified for a low-speed impact. (J.S.)
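To illustrate the neighbour-count sensitivity mentioned above, here is a minimal 1-D SPH density summation with the standard cubic spline kernel; the particle spacing, mass, and smoothing length are illustrative choices, not values from the BERDA study.

```python
def w_cubic(r, h):
    """1-D cubic spline SPH kernel with support 2h; normalization 2/(3h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """Summation density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    return [
        sum(m_j * w_cubic(abs(x_i - x_j), h) for x_j, m_j in zip(positions, masses))
        for x_i in positions
    ]

# 20 equally spaced particles of equal mass: line density should be ~1.0
# in the interior, with the usual deficiency at the free edges.
xs = [0.1 * i for i in range(20)]
rho = sph_density(xs, [0.1] * 20, h=0.15)
```

The smoothing length h fixes how many neighbours contribute to each sum, which is exactly why the abstract finds the neighbour count controlling the precision of the calculation; the depressed density at the first and last particles shows the edge effect that makes matching fluid and structure meshes important.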
A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series
Directory of Open Access Journals (Sweden)
Fernando Luiz Cyrino Oliveira
2014-01-01
Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics and can be considered unique in the world. It is a high-dimension hydrothermal system with a huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulation of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models under the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. A new approach is then proposed to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
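A minimal sketch of the residual-bootstrap idea behind such methods, applied here to a plain AR(1) rather than the full periodic PAR(p) family; the synthetic series, the seed, and the replication count are illustrative assumptions.

```python
import random

random.seed(42)

# Synthetic inflow-like series from a known AR(1): x_t = 0.6 x_{t-1} + e_t.
n = 200
x = [0.0]
for _ in range(n - 1):
    x.append(0.6 * x[-1] + random.gauss(0, 1))

def ar1_fit(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

phi_hat = ar1_fit(x)
resid = [x[t] - phi_hat * x[t - 1] for t in range(1, n)]

# Residual bootstrap: rebuild the series from resampled residuals, refit each time.
boot = []
for _ in range(500):
    xb = [x[0]]
    for _ in range(n - 1):
        xb.append(phi_hat * xb[-1] + random.choice(resid))
    boot.append(ar1_fit(xb))
boot.sort()
ci = (boot[12], boot[487])  # ~95% percentile confidence interval
```

The percentile interval `ci` is the kind of "reliable confidence interval for the model parameters" the abstract refers to; the PAR(p) case repeats this per calendar month and uses the intervals to decide the order p.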
Modeling Music Emotion Judgments Using Machine Learning Methods
Directory of Open Access Journals (Sweden)
Naresh N. Vempala
2018-01-01
Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were both flexible and interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
Finite-element method modeling of hyper-frequency structures
International Nuclear Information System (INIS)
Zhang, Min
1990-01-01
The modeling of microwave propagation problems, including eigenvalue and scattering problems, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated in an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, non-physical solutions; a penalty function method has been introduced to reduce these spurious solutions. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, is more efficient than the matrix method for obtaining the reflection coefficient. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma excitor. It allows us to understand the role of the different physical parameters of the excitor in the coupling of microwave energy to the plasma mode and to the mode without plasma. (author) [fr
New Models and Methods for the Electroweak Scale
Energy Technology Data Exchange (ETDEWEB)
Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies of new discovery scenarios for Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac
Modeling of Methods to Control Heat-Consumption Efficiency
Tsynaeva, E. A.; Tsynaeva, A. A.
2016-11-01
In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.
Modeling of electromigration salt removal methods in building materials
DEFF Research Database (Denmark)
Johannesson, Björn; Ottosen, Lisbeth M.
2008-01-01
for salt attack of various kinds, is one potential method to preserve old building envelopes. By establishing a model for ionic multi-species diffusion, which also accounts for externally applied electrical fields, it is proposed as an important complement to the experimental tests and that verification...... with its ionic mobility properties. It is, further, assumed that Gauss's law can be used to calculate the internal electrical field induced by the diffusion itself. In this manner the externally applied electrical field can be modeled simply by assigning proper boundary conditions for the equation......
(Environmental and geophysical modeling, fracture mechanics, and boundary element methods)
Energy Technology Data Exchange (ETDEWEB)
Gray, L.J.
1990-11-09
Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference at Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex, high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians
Stress description model by non destructive magnetic methods
International Nuclear Information System (INIS)
Flambard, C.; Grossiord, J.L.; Tourrenc, P.
1983-01-01
For several years, CETIM has been investigating possibilities for materials analysis by developing a method founded on the observation of ferromagnetic noise. Experiments have revealed correlations between the state of the material and the recorded signal. These correlations open the way to industrial applications for measuring stresses and strains in the elastic and plastic ranges. This article starts with a brief historical account and the theoretical background of the method. The experimental framework of this research is described, and the main results are analyzed. A theoretical model was built and is presented; it appears to agree with some experimental observations. The main results concerning applied stress and thermal and surface treatments (decarbonizing) are presented [fr
Energy Technology Data Exchange (ETDEWEB)
Milligan, M R
1996-04-01
As an intermittent resource, capturing the temporal variation in windpower is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution that is outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
Modeling Enzymatic Transition States by Force Field Methods
DEFF Research Database (Denmark)
Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank
2009-01-01
The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...... of the TS geometry on the flexibility of the system has been probed by fixing layers of atoms around the active site and using increasingly larger nonbonded cutoffs. The variability over the 20 structures is found to decrease as the system is made more flexible. Relative energies have been calculated...... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part...
Optimization Method of Fusing Model Tree into Partial Least Squares
Directory of Open Access Journals (Sweden)
Yu Fang
2017-01-01
Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields, which feature multiple independent variables, multiple dependent variables, and nonlinearity. Model Tree (MT), by contrast, adapts well to nonlinear functions, since it is made up of many piecewise linear segments. On this basis, a new method combining PLS and MT for analyzing and predicting data is proposed: an MT is built from the principal components and explanatory variables extracted by PLS, and residual information is extracted repeatedly to build further model trees until a satisfactory accuracy condition is met. Using data on the maxingshigan decoction of the monarch drug used to treat asthma or cough, together with two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.
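The residual-fitting loop can be caricatured in one dimension: a linear fit (standing in for the PLS step) followed by a single regression stump fitted to its residuals (standing in for the model tree). The data and the exhaustive split search are invented for the sketch.

```python
# Toy data: a linear trend plus a nonlinear jump the linear model cannot capture.
xs = [i / 10 for i in range(40)]
ys = [2 * x + (3 if x > 2 else 0) for x in xs]

def linfit(x, y):
    """Ordinary least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

a0, b0 = linfit(xs, ys)
resid = [y - (a0 + b0 * x) for x, y in zip(xs, ys)]

def stump(x, r):
    """Best single split minimizing squared error of a piecewise-constant fit."""
    best = None
    for s in x[1:]:
        left = [v for t, v in zip(x, r) if t < s]
        right = [v for t, v in zip(x, r) if t >= s]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    return best[1:]

s, ml, mr = stump(xs, resid)

def predict(x):
    """Linear part plus the residual correction from the stump."""
    return a0 + b0 * x + (ml if x < s else mr)

mse = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_lin = sum(r * r for r in resid) / len(xs)
```

Fitting the tree to what the linear stage leaves behind is the essence of the proposed combination; the paper iterates this residual extraction until its accuracy condition is met, whereas the sketch stops after one round.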
A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model
Directory of Open Access Journals (Sweden)
Chi-Sann Liou
2009-01-01
Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as a mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation, together with two prognostic equations for the vertical speed w and the nonhydrostatic part of the pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and the nonhydrostatic contributions are then added as corrections to the hydrostatic solutions. Because the upgraded nonhydrostatic model shares the same vertical coordinate with the original hydrostatic model, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model when physical parameterizations are applied after the dynamic integration. In this way, the majority of the nonhydrostatic model's code comes from the original hydrostatic model; extra code is needed only for the calculations additional to the primitive equations. To handle sound waves, we use smaller time steps in the nonhydrostatic dynamic integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and of density flows associated with a cold bubble have been used to test the method. The idealized test cases demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.
Linear facility location in three dimensions - Models and solution methods
DEFF Research Database (Denmark)
Brimberg, Jack; Juel, Henrik; Schöbel, Anita
2002-01-01
We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the facility represented by the line (segment) to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through...... horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given....
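The objective such models minimize is just a sum of point-to-line distances; below is a minimal evaluation of that objective, with an invented shaft line and invented deposit coordinates rather than data from the paper.

```python
import math

def dist_point_line(p, a, d):
    """Distance from point p to the line through a with unit direction d."""
    v = [pi - ai for pi, ai in zip(p, a)]
    t = sum(vi * di for vi, di in zip(v, d))       # projection length onto d
    proj = [ai + t * di for ai, di in zip(a, d)]   # foot of the perpendicular
    return math.dist(p, proj)

# Illustrative ore-deposit points; the candidate shaft is the vertical z-axis,
# so each distance is the horizontal tunnel length to reach that deposit.
points = [(1, 0, 2), (0, 1, 5), (3, 4, 1)]
a, d = (0, 0, 0), (0, 0, 1)
total = sum(dist_point_line(p, a, d) for p in points)
```

A solution method then searches over the line parameters (anchor a and direction d) to minimize `total`; for a vertical line the problem collapses to a planar Weber problem in the horizontal coordinates.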
Chebyshev super spectral viscosity method for a fluidized bed model
International Nuclear Information System (INIS)
Sarra, Scott A.
2003-01-01
A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations
A model based security testing method for protocol implementation.
Fu, Yu Long; Xin, Xiao Long
2014-01-01
The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.
Semi-Lagrangian methods in air pollution models
Directory of Open Access Journals (Sweden)
A. B. Hansen
2011-06-01
Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties of Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.
The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter, by Kaas and Nielsen (2010).
Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation, and the locally mass conserving monotonic filter imposes monotonicity.
All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.
The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.
The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
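The classical SL scheme the comparison starts from can be sketched in a few lines: 1-D periodic advection with cubic Lagrange interpolation at the departure point. The grid, velocity, and time step are invented; the Courant number is deliberately an integer so that the scheme reduces to an exact shift, which makes a handy correctness check (none of the mass-conserving variants from the abstract are implemented here).

```python
# Minimal 1-D periodic semi-Lagrangian advection with cubic Lagrange
# interpolation; a sketch of the classical scheme, not the LMCSL variant.
N, u, dt, dx = 100, 1.0, 0.02, 0.01   # Courant number u*dt/dx = 2 (SL allows > 1)
c = [1.0 if 30 <= i < 50 else 0.0 for i in range(N)]  # slotted-cylinder-like top hat

def step(c):
    out = []
    for i in range(N):
        xd = (i - u * dt / dx) % N          # departure point in grid units
        j, a = int(xd) % N, xd - int(xd)    # base index and fractional offset
        # Cubic Lagrange interpolation from the four surrounding grid values.
        f = [c[(j - 1) % N], c[j], c[(j + 1) % N], c[(j + 2) % N]]
        out.append(
            -a * (a - 1) * (a - 2) / 6 * f[0]
            + (a**2 - 1) * (a - 2) / 2 * f[1]
            - a * (a + 1) * (a - 2) / 2 * f[2]
            + a * (a**2 - 1) / 6 * f[3]
        )
    return out

for _ in range(50):   # advect by 50 * 2 = 100 cells: one full revolution
    c = step(c)
```

With a non-integer Courant number the cubic interpolation produces the over- and undershoots near the top hat's steep edges that motivate the monotonic filters discussed above, and the interpolation weights no longer sum mass locally, which is what the LMCSL modification repairs.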
Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.
Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan
2014-01-01
We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.
ARIMA model and exponential smoothing method: A comparison
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction errors using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rates series. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rates, whose time series has a narrow range from one point to another, but cannot produce a better prediction for a longer forecasting period.
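As a minimal illustration of the smoothing-and-scoring side of such a comparison, the sketch below computes one-step simple exponential smoothing forecasts and the three error measures named above; the series and the smoothing constant alpha are invented, not the paper's data.

```python
# Illustrative series (e.g. a price-like sequence) and single exponential
# smoothing with one-step-ahead forecasts f[t+1] = alpha*y[t] + (1-alpha)*f[t].
series = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]

def ses_forecasts(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts, f[0] = y[0]."""
    f = [y[0]]
    for t in range(len(y) - 1):
        f.append(alpha * y[t] + (1 - alpha) * f[t])
    return f

f = ses_forecasts(series, alpha=0.5)
err = [y - fh for y, fh in zip(series, f)]

# The three accuracy measures used in the comparison.
mse = sum(e * e for e in err) / len(err)
mad = sum(abs(e) for e in err) / len(err)
mape = 100 * sum(abs(e) / y for e, y in zip(err, series)) / len(series)
```

An ARIMA fit would be scored on the same held-out errors with the same three measures, which is what makes the head-to-head comparison in the study well defined.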
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stent accessories and electrical equipment mounted on the tunnel walls, means that the laser point cloud data includes many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a search algorithm extracts the edge points on both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented by region and then iteratively fitted to a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments gave consistent results: the elliptic-cylindrical-model-based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel-section deformation in routine subway operation and maintenance.
Statistical methods for mechanistic model validation: Salt Repository Project
International Nuclear Information System (INIS)
Eggett, D.L.
1988-07-01
As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs
Modern Methods for Modeling Change in Obesity Research in Nursing.
Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E
2017-08-01
Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method
Directory of Open Access Journals (Sweden)
Dewei Zhang
2014-01-01
Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and position of the quadrotor. A fully custom quadrotor is developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving-target tracking control.
Modelling magnetic polarisation J50 by different methods
International Nuclear Information System (INIS)
Yonamine, Taeko; Campos, Marcos F. de; Castro, Nicolau A.; Landgraf, Fernando J.G.
2006-01-01
Two different methods for modelling the angular behaviour of magnetic polarisation at 5000 A/m (J50) of electrical steels were evaluated and compared. Both methods are based upon crystallographic texture data. The texture of non-oriented electrical steels with silicon content ranging from 0.11 to 3% Si was determined by X-ray diffraction. In the first method, J50 was correlated to the calculated value of the average anisotropy energy in each direction, using texture data. In the second method, the first three coefficients of the spherical harmonic series of the ODF and two experimental points were used to estimate the angular variation of J50. The first method allows the estimation of J50 for samples with different textures and Si contents using only the texture data, with no need for magnetic measurement; this is advantageous because texture data can be acquired with less than 2 g of material. The second method may give a better fit in some situations, but besides the texture data it requires magnetic measurements in at least two directions, for example the rolling and transverse directions.
Thermal Modeling Method Improvements for SAGE III on ISS
Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn
2015-01-01
The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III, and describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and ground support equipment (GSE) for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was
Hybrid Modeling Method for a DEP Based Particle Manipulation
Directory of Open Access Journals (Sweden)
Mohamad Sawan
2013-01-01
Full Text Available In this paper, a new modeling approach for dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.
Nuclear-fuel-cycle optimization: methods and modelling techniques
International Nuclear Information System (INIS)
Silvennoinen, P.
1982-01-01
This book presents methods applicable to analyzing fuel-cycle logistics and optimization, as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables.
A Method for Modeling of Floating Vertical Axis Wind Turbine
DEFF Research Database (Denmark)
Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir
2013-01-01
It is of interest to investigate the potential advantages of the floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper....... In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling of the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling of the dynamics of a floating vertical axis wind turbine....... This integrated dynamic model takes into account the wind inflow, aerodynamics, hydrodynamics, structural dynamics (wind turbine, floating platform and the mooring lines) and a generator control. This approach calculates dynamic equilibrium at each time step and takes account of the interaction between the rotor...
Research on Splicing Method of Digital Relic Fragment Model
Yan, X.; Hu, Y.; Hou, M.
2018-04-01
During archaeological excavations, large numbers of cultural relic fragments are unearthed, and their restoration has traditionally been carried out by hand by arts and crafts experts. In this process, the experts repeatedly trial-fit the fragments and then glue together those found to match, which causes irreversible secondary damage to the relics. To minimize such damage, surveyors combine 3D laser scanning with computer technology and build digital models of the fragments so that the relics can be spliced virtually. The 3D software commonly available can perform model translation and rotation, and these two functions allow the models to be spliced manually; once splicing is complete, the specific location of each fragment is recorded, effectively reducing the damage caused by physical trial fitting.
Methods to model-check parallel systems software
International Nuclear Information System (INIS)
Matlin, O. S.; McCune, W.; Lusk, E.
2003-01-01
We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD
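Explicit state-space exploration of the kind SPIN performs can be illustrated, well outside the MPD context, with a toy interleaving model: two processes each perform a non-atomic read-increment-write on a shared counter, and a breadth-first search over all interleavings checks whether every terminal state satisfies the expected postcondition. The model and property below are illustrative, not part of the MPD verification.

```python
from collections import deque

# State: (shared, pc1, reg1, pc2, reg2); pc values: 0 = about to read,
# 1 = about to write, 2 = done. Each process does shared = shared + 1
# non-atomically (read into a register, then write register + 1 back).
def successors(state):
    shared, pc1, r1, pc2, r2 = state
    succs = []
    if pc1 == 0:
        succs.append((shared, 1, shared, pc2, r2))   # process 1 reads
    elif pc1 == 1:
        succs.append((r1 + 1, 2, r1, pc2, r2))       # process 1 writes
    if pc2 == 0:
        succs.append((shared, pc1, r1, 1, shared))   # process 2 reads
    elif pc2 == 1:
        succs.append((r2 + 1, pc1, r1, 2, r2))       # process 2 writes
    return succs

def explore(init):
    """Breadth-first search of all interleavings; collect terminal states
    that violate the property 'final counter == 2'."""
    seen, frontier, violations = {init}, deque([init]), []
    while frontier:
        s = frontier.popleft()
        nexts = successors(s)
        if not nexts and s[0] != 2:    # terminal state violating the property
            violations.append(s)
        for n in nexts:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return violations

violations = explore((0, 0, 0, 0, 0))
```

The search finds the classic lost-update interleaving (both processes read before either writes), exactly the kind of error that is "difficult to uncover by conventional means" in the abstract.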
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
IMAGE TO POINT CLOUD METHOD OF 3D-MODELING
Directory of Open Access Journals (Sweden)
A. G. Chibunichev
2012-07-01
Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. To this end, a quasi-image of the point cloud is first generated; the SIFT algorithm is then applied to both the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in interactive mode using a single image, while the spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available: edges are detected on both the point cloud and the image, and the correct edges are then identified. Experimental studies of the method have demonstrated its efficiency for building facade modeling.
Multiscale modeling of porous ceramics using movable cellular automaton method
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method of modern computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
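The Weibull parameters passed up to the next scale level can be estimated, for example, by standard median-rank linear regression: plotting ln(-ln(1-F)) against ln(strength) yields the Weibull modulus m as the slope and the characteristic strength from the intercept. A minimal sketch with synthetic strengths (not the paper's data):

```python
import math

def weibull_fit(strengths):
    """Median-rank regression estimate of Weibull modulus m and
    characteristic strength s0 from a list of sample strengths."""
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    # Median-rank plotting positions F_i = (i + 0.5) / n.
    ys = [math.log(-math.log(1 - (i + 0.5) / n)) for i in range(n)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    # y = m*x - m*ln(s0)  =>  ln(s0) = xbar - ybar/m
    s0 = math.exp(xbar - ybar / m)
    return m, s0

# Synthetic strengths drawn (deterministically) from a Weibull law with
# modulus 10 and characteristic strength 100 via the inverse CDF.
strengths = [100.0 * (-math.log(1 - (i + 0.5) / 50)) ** 0.1 for i in range(50)]
m, s0 = weibull_fit(strengths)
```

With real simulation results the recovered (m, s0) would be the scatter parameters handed to the next, coarser scale level.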
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information related to the local geological structure in the Bucharest urban area has been integrated in complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations all over the city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach, used to model the seismic wave propagation through the local sedimentary structure of the target site, allow the modelling to be extended to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last 3 Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
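The core of Huffman scanning — assigning shorter switch-scan sequences to more probable selections — can be sketched with a standard Huffman-coding routine; in the AAC setting the probabilities would come from the statistical language model. The symbol probabilities below are illustrative.

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code from a symbol -> probability mapping.

    Returns a dict symbol -> bit string. More probable symbols receive
    shorter codes, which is what makes Huffman scanning faster than
    fixed row/column scanning when a language model supplies probabilities.
    """
    # Heap entries: (probability, unique tiebreak, tree), where a tree is
    # either a bare symbol or a (left, right) pair.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Probabilities as a language model might rank the next letter.
probs = {"e": 0.4, "t": 0.25, "a": 0.2, "q": 0.1, "z": 0.05}
codes = huffman_code(probs)
```

Each "0"/"1" corresponds to accepting or rejecting the currently highlighted group, so the expected number of switch activations per selection approaches the entropy of the model's distribution.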
Statistical Method to Overcome Overfitting Issue in Rational Function Models
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models, extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
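The Tikhonov regularization baseline against which the significance test is compared damps an ill-posed least-squares fit by adding λI to the normal equations. A minimal sketch on a toy polynomial design matrix (a stand-in for the RFM's rational-polynomial terms, not real RPC data):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov_fit(xs, ys, degree, lam):
    """Ridge-regularized polynomial fit: solve (X^T X + lam*I) b = X^T y."""
    X = [[x ** d for d in range(degree + 1)] for x in xs]
    n = degree + 1
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Xty = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    return solve(XtX, Xty)

# Truly linear data fitted with a deliberately over-parameterized degree-6
# model: the lam*I term keeps the ill-posed system stable.
xs = [i / 10 for i in range(11)]
ys = [2 + 3 * x for x in xs]
coef = tikhonov_fit(xs, ys, degree=6, lam=1e-6)
```

The penalty bounds the coefficient norm while keeping the fitted values accurate, which is exactly the overfitting symptom (wild parameters, good apparent fit) that both TR and the proposed significance test attack.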
Reflexion on linear regression trip production modelling method for ensuring good model quality
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to try to formulate a better modelling method for ensuring model quality. The research results are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
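The confidence interval of the predicted value mentioned above can be sketched for simple linear regression; this version uses a normal approximation to the t quantile, which is adequate for moderately large samples (for small samples the exact t value should be used). The trip-rate numbers are illustrative.

```python
from statistics import NormalDist
import math

def prediction_interval(xs, ys, x0, conf=0.95):
    """Confidence interval for the mean predicted value of a simple linear
    regression at x0. Normal approximation to the t quantile."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)          # residual variance
    se = math.sqrt(s2 * (1.0 / n + (x0 - xbar) ** 2 / sxx))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    yhat = a + b * x0
    return yhat - z * se, yhat + z * se

# Illustrative survey data: household size vs. trips produced per day.
hh_size = [1, 2, 3, 4, 5, 6]
trips = [3.1, 4.9, 7.2, 8.8, 11.1, 12.9]
low, high = prediction_interval(hh_size, trips, x0=4.0)
```

A model with an excellent R2 can still have a wide interval at the x values that matter, which is exactly why the abstract argues R2 alone is not a sufficient quality measure.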
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions needed to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction from the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
Modeling of Unsteady Flow through the Canals by Semiexact Method
Directory of Open Access Journals (Sweden)
Farshad Ehsani
2014-01-01
Full Text Available The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument able to model and predict the consequences of any possible phenomenon on the environment, and in particular the new hydraulic characteristics of the system. The basic equations expressing hydraulic principles were formulated in the 19th century by Barré de Saint-Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint-Venant equations is written as a system of two partial differential equations and is derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small and the pressure distribution is hydrostatic. The Saint-Venant equations must be solved together with the continuity equation. Until now, no analytical solution of the Saint-Venant equations has been presented. In this paper the Saint-Venant equations and the continuity equation are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To decrease the error between HPM and FDM, the Saint-Venant equations and the continuity equation are then solved by the homotopy analysis method (HAM). The HAM contains an auxiliary parameter ħ that allows us to adjust and control the convergence region of the solution series. The study highlights the efficiency and capability of HAM in solving the Saint-Venant equations and modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as through other kinds of canals.
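The explicit forward FDM used as a comparison baseline can be illustrated on a drastically simplified stand-in for the full Saint-Venant system: the linear advection form of the continuity equation, ∂h/∂t + c ∂h/∂x = 0, solved with a first-order upwind scheme. The geometry and numbers are illustrative; the real system couples this with the momentum equation.

```python
def upwind_step(h, c, dt, dx):
    """One explicit first-order upwind step for dh/dt + c*dh/dx = 0 (c > 0):
    information travels downstream only, so each node uses its upstream
    neighbour. The inflow boundary h[0] is held fixed."""
    lam = c * dt / dx
    return [h[0]] + [h[i] - lam * (h[i] - h[i - 1]) for i in range(1, len(h))]

dx, dt, c = 1.0, 0.4, 1.0          # CFL number c*dt/dx = 0.4 < 1: stable
h = [1.0] * 5 + [0.0] * 15         # initial step profile of water depth
for _ in range(10):
    h = upwind_step(h, c, dt, dx)
```

With the CFL number below one the scheme is stable and monotone; the step front advances downstream at speed c while numerical diffusion smears it, which is the kind of behaviour the HPM/HAM series solutions are compared against.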
Microstrip natural wave spectrum mathematical model using partial inversion method
International Nuclear Information System (INIS)
Pogarsky, S.A.; Litvinenko, L.N.; Prosvirnin, S.L.
1995-01-01
It is generally agreed that both microstrip lines themselves and the various discontinuities based on microstrips are among the most difficult problems for accurate electrodynamic analysis. Over the last years much has been published about the principles and accurate (full-wave) methods of microstrip line investigation. The growing interest in this problem may be explained by microstrip applications in the millimeter-wave range for the purpose of realizing interconnects and a variety of passive components. At these higher operating frequencies, accurate component modeling becomes more critical. The creation, examination and experimental verification of an accurate method for investigating the natural wave spectrum of planar electrodynamic structures are the objectives of this manuscript. The moment method combined with partial operator inversion may be considered the basic way of solving this problem. The method is promising for accurate analysis of different planar discontinuities in microstrip, such as step discontinuities, microstrip bends, Y- and X-junctions, steps in substrate dielectric constant, and other anisotropy types.
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
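The synthetic-surrogate-data idea can be sketched end to end with a toy "simulation program": generate surrogate utility bills from known "true" parameters, calibrate against them, then score the figures of merit. Everything below (the degree-day model, the parameter grid, the retrofit) is illustrative, not from the paper.

```python
# Toy "simulation program": monthly energy use from a baseload plus a
# heating coefficient times heating degree days (HDD).
def simulate(base, heat_coef, hdd):
    return [base + heat_coef * h for h in hdd]

hdd = [600, 500, 300, 100, 0, 0, 0, 0, 50, 250, 450, 550]
true = (300.0, 1.2)                       # "true" input parameters
bills = simulate(*true, hdd)              # surrogate utility bill data

# "Calibration technique" under test: brute-force grid search that
# minimizes the squared error against the surrogate bills.
best, best_err = None, float("inf")
for base in range(200, 401, 10):
    for hc10 in range(8, 17):             # heat_coef from 0.8 to 1.6
        pred = simulate(base, hc10 / 10, hdd)
        err = sum((p - b) ** 2 for p, b in zip(pred, bills))
        if err < best_err:
            best, best_err = (base, hc10 / 10), err

# Figures of merit: closure on the true parameters (best vs. true),
# goodness of fit (best_err), and accuracy of the predicted savings for a
# retrofit that halves the heating coefficient.
true_savings = sum(bills) - sum(simulate(true[0], true[1] / 2, hdd))
cal_savings = (sum(simulate(*best, hdd))
               - sum(simulate(best[0], best[1] / 2, hdd)))
```

Because the surrogate data come from the simulation program itself, the "true" parameters and "true" savings are known exactly, which is precisely what real utility-bill data cannot provide.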
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a certain arrangement of spatial and temporal dynamics consisting of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
Dynamic airspace configuration method based on a weighted graph model
Directory of Open Access Journals (Sweden)
Chen Yangzhou
2014-08-01
Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. These vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and for each air route are computed. By assigning these aircraft counts to the vertices and the edges, a weighted graph model is obtained, and the airspace configuration problem is accordingly described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among the sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition, but also constraints such as convexity, connectivity, and the minimum distance constraint.
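The construction of the weighted graph model can be sketched without any geometry library: a position lies in the Voronoi cell of its nearest site, so per-cell aircraft counts follow from a nearest-site rule. The coordinates and counts below are illustrative, not from the paper.

```python
import math

# Key points (airports, waypoints) serve as Voronoi sites; a position
# belongs to the cell of its nearest site, so cell counts can be computed
# without constructing the diagram explicitly.
sites = {"A": (0, 0), "B": (10, 0), "C": (5, 8)}
aircraft = [(1, 1), (2, 0), (9, 1), (6, 7), (5, 6)]

def nearest_site(p):
    return min(sites, key=lambda s: math.dist(p, sites[s]))

# Vertex weights of the graph model: aircraft count per Voronoi cell.
vertex_weight = {s: 0 for s in sites}
for p in aircraft:
    vertex_weight[nearest_site(p)] += 1

# Edge weights: aircraft counts per air route (illustrative numbers).
edge_weight = {("A", "B"): 3, ("B", "C"): 1, ("A", "C"): 2}

def workload(sector):
    """Workload of a candidate sector: its vertex weights plus the weights
    of edges internal to it; the partitioning algorithm balances this."""
    return (sum(vertex_weight[v] for v in sector)
            + sum(c for (u, v), c in edge_weight.items()
                  if u in sector and v in sector))
```

For example, the split {"A", "B"} versus {"C"} gives workloads 6 and 2, so the load-balancing step would transfer weight between the sub-graphs before sector boundaries are drawn.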
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow them to be regarded on the same quality basis as SNeIa. We find a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the redshift range for which one can currently make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
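A minimal sketch of a first-order cut-HDMR expansion on a toy test function. The function, cut point, and values are invented for illustration; the paper's fuzzy α-cut machinery and finite element coupling are omitted.

```python
# First-order cut-HDMR (illustrative, not the paper's code):
# f(x) ≈ f(c) + sum_i [ f(c with x_i swapped in) - f(c) ]
def f(x):
    # Toy test function with a weak variable interaction term.
    return sum(xi**2 for xi in x) + 0.01 * x[0] * x[1]

def hdmr_first_order(f, c, x):
    """Evaluate the first-order HDMR approximation of f at x around cut point c."""
    f0 = f(c)
    approx = f0
    for i in range(len(c)):
        ci = list(c)
        ci[i] = x[i]          # vary one coordinate at a time
        approx += f(ci) - f0
    return approx

c = [0.5, 0.5, 0.5]           # cut point (reference input)
x = [0.6, 0.4, 0.7]
exact = f(x)
approx = hdmr_first_order(f, c, x)
```

Because the interaction term is weak (coefficient 0.01), the low-order expansion reproduces the exact value closely, which is the assumption HDMR exploits.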
Multi-level decision making models, methods and applications
Zhang, Guangquan; Gao, Ya
2015-01-01
This monograph presents new developments in multi-level decision-making theory, techniques and methods, covering both modeling and solution issues. In particular, it presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms for each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.
Investigating the performance of directional boundary layer model through staged modeling method
Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee
2011-04-01
Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For an OPC model of excellent quality, each part should be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be performed iteratively during the correction until the cost function to be minimized converges. Normally the optic part of an OPC model is described with the sum of coherent systems (SOCS[1]) method. Thanks to this method the aerial image can be computed very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so it is normally expressed in a simple way, such as an approximation of the first principles or a linear combination of factors that are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well the model is trained by fitting to empirical data. The most popular way of constructing the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, rigorous methods of calculating the mask function must be used, such as the finite-difference time-domain (FDTD[2]) method and rigorous coupled-wave analysis (RCWA[3]). But these methods are too time-consuming to be used as part of an OPC model. Until now many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them, we focus on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimizing the parameters for the
Modeling cometary photopolarimetric characteristics with Sh-matrix method
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
Modelling of complex heat transfer systems by the coupling method
Energy Technology Data Exchange (ETDEWEB)
Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d'Energetique de l'Ecole des Mines de Paris, 75 (France))
1985-04-01
The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists of isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these types of variables, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of applications are presented, one concerning a dwelling unit, and the second a solar water heater.
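The idea of a reduced coupling matrix can be sketched as static condensation (a Schur complement) on a small block system: internal unknowns are eliminated so that only the coupling unknowns are solved together. The matrices below are invented, and this is not the authors' code.

```python
# Illustrative sketch: condense internal temperatures onto coupling temperatures.
# Block system: [A  B] [t_int]   [f]
#               [C  D] [t_cpl] = [g]
def solve2(M, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - b[0]*M[1][0]) / det]

A = [[4.0, 1.0], [1.0, 3.0]]   # internal-internal (invented values)
B = [[1.0, 0.0], [0.0, 2.0]]   # internal-coupling
C = [[1.0, 0.0], [0.0, 2.0]]   # coupling-internal
D = [[5.0, 1.0], [1.0, 4.0]]   # coupling-coupling
f = [1.0, 2.0]
g = [3.0, 4.0]

# Schur complement S = D - C A^{-1} B acts only on the coupling unknowns.
Ainv_B_cols = [solve2(A, [B[0][j], B[1][j]]) for j in range(2)]
Ainv_f = solve2(A, f)
S = [[D[i][j] - sum(C[i][k] * Ainv_B_cols[j][k] for k in range(2))
      for j in range(2)] for i in range(2)]
rhs = [g[i] - sum(C[i][k] * Ainv_f[k] for k in range(2)) for i in range(2)]
t_cpl = solve2(S, rhs)
# Internal temperatures are recovered afterwards from the coupling ones.
t_int = solve2(A, [f[i] - sum(B[i][k] * t_cpl[k] for k in range(2)) for i in range(2)])
```

The reduced system S has only as many unknowns as there are coupling variables, which is the size reduction the abstract describes.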
Modeling patient safety incidents knowledge with the Categorial Structure method.
Souvignet, Julien; Bousquet, Cédric; Lewalle, Pierre; Trombert-Paviot, Béatrice; Rodrigues, Jean Marie
2011-01-01
Following the WHO initiative named World Alliance for Patient Safety (PS), launched in 2004, a conceptual framework developed by national PS reporting experts summarized the available knowledge. As a second step, the team of the Department of Public Health of the University of Saint Etienne elaborated a Categorial Structure (a semi-formal structure not related to an upper-level ontology) identifying the elements of the semantic structure underpinning the broad concepts contained in the framework for patient safety. This knowledge engineering method has been developed to enable modeling of patient safety information as a prerequisite for subsequent full ontology development. The present article describes the semantic dissection of the concepts, the elicitation of the ontology requirements, and the domain constraints of the conceptual framework. This ontology includes 134 concepts and 25 distinct relations and will serve as a basis for an Information Model for Patient Safety.
Optimization of Excitation in FDTD Method and Corresponding Source Modeling
Directory of Open Access Journals (Sweden)
B. Dimitrijevic
2015-04-01
Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are implemented and tested within our own FDTD simulation environment.
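A gentle source introduction of the kind discussed above can be sketched with a raised-cosine ramp. The shaping function, ramp length, and frequency are illustrative assumptions, not the paper's optimized models.

```python
import math

# Smooth source-activation envelope for an FDTD excitation (illustrative).
def ramp(n, n_ramp):
    """Raised-cosine ramp: 0 at n=0, 1 for n >= n_ramp."""
    if n >= n_ramp:
        return 1.0
    return 0.5 * (1.0 - math.cos(math.pi * n / n_ramp))

def source(n, n_ramp=100, freq=0.02):
    """Sinusoidal source gently introduced to avoid abrupt numerical transients."""
    return ramp(n, n_ramp) * math.sin(2.0 * math.pi * freq * n)

samples = [source(n) for n in range(300)]
```

Shortening `n_ramp` trades faster activation against stronger startup transients, which is exactly the balance the abstract describes.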
Use of results from microscopic methods in optical model calculations
International Nuclear Information System (INIS)
Lagrange, C.
1985-11-01
A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In the second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr
Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.
Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K
2017-11-01
Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modelling a gamma irradiation process using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
In gamma irradiation services, evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources are not available for performing dosimetry on each irradiated product, the application of mathematical models may be a solution. Through such models, the dose delivered to a specific product, irradiated in a specific position during a certain period of time, can be predicted, provided the models are validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
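A minimal Monte Carlo sketch in that spirit: photons crossing a slab with free paths sampled from the exponential distribution. The attenuation coefficient and geometry are invented, and this is not the CDTN model.

```python
import math
import random

# Illustrative Monte Carlo transmission estimate (not the CDTN simulation).
random.seed(42)

mu = 0.06          # assumed linear attenuation coefficient, 1/cm
thickness = 10.0   # assumed slab thickness, cm

def transmitted_fraction(n_photons):
    """Fraction of photons traversing the slab without interacting."""
    passed = 0
    for _ in range(n_photons):
        # Free path sampled from the exponential distribution.
        path = -math.log(random.random()) / mu
        if path > thickness:
            passed += 1
    return passed / n_photons

estimate = transmitted_fraction(100_000)
analytic = math.exp(-mu * thickness)   # Beer-Lambert reference value
```

The Monte Carlo estimate converges to the analytic exponential, mirroring the linear simulation-versus-experiment relation reported in the abstract.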
Direct numerical methods of mathematical modeling in mechanical structural design
International Nuclear Information System (INIS)
Sahili, Jihad; Verchery, Georges; Ghaddar, Ahmad; Zoaeter, Mohamed
2002-01-01
Full text: Structural design and numerical methods are generally interactive, requiring optimization procedures as the structure is analyzed. This analysis leads to the definition of mathematical terms, such as the stiffness matrix, which result from the modeling and are then used in numerical techniques during the dimensioning procedure. These techniques, and many others, involve the calculation of the generalized inverse of the stiffness matrix, also called the 'compliance matrix'. The aim of this paper is first to introduce some existing mathematical procedures used to calculate the compliance matrix from the stiffness matrix, then to apply direct numerical methods to solve the obtained system with the lowest computational time, and to compare the obtained results. The results show a large difference in computational time between the different procedures.
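Computing a compliance matrix as the inverse of a stiffness matrix can be sketched for a small non-singular case with Gauss-Jordan elimination. The matrix values are invented, and the several generalized-inverse procedures the paper compares are not reproduced here.

```python
# Illustrative sketch: compliance matrix from a small stiffness matrix.
def invert(K):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(K)
    # Augment with the identity, then eliminate.
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(K)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

K = [[10.0, -2.0], [-2.0, 8.0]]   # assumed 2-DOF stiffness matrix
C = invert(K)                      # compliance matrix
# Displacements under a load vector follow as u = C f.
f = [1.0, 0.0]
u = [sum(C[i][j] * f[j] for j in range(2)) for i in range(2)]
```

For singular stiffness matrices (rigid-body modes), a generalized inverse is needed instead, which is where the procedures compared in the paper differ.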
Nuclear fuel cycle optimization - methods and modelling techniques
International Nuclear Information System (INIS)
Silvennoinen, P.
1982-01-01
This book aims to present methods applicable to the analysis of fuel cycle logistics and optimization, as well as to evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective, and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives, providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising reactor types other than the light water reactor, are confined to cost minimization. In the final chapter, the integration of nuclear capacity within a generating system is examined. (author)
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Energy Technology Data Exchange (ETDEWEB)
Prinn, Ronald [MIT]; Webster, Mort [MIT]
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters of two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
Modified network simulation model with token method of bus access
Directory of Open Access Journals (Sweden)
L.V. Stribulevich
2013-08-01
Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined with the developed simulation model, which is based on a state diagram of a network station with a priority-processing mechanism, covering both the steady state and the execution of control procedures: initiation of a logical ring, and the entrance of a station into and its exit from the logical ring. Findings. A simulation model was developed from which one can obtain the dependence of the maximum queue waiting time for requests of different access classes, of the reaction time, and of the usable bandwidth on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects network operation in the steady state and during control procedures, including the priority ranking and handling mechanism. Practical value. Network characteristics for real-time systems in railway transport can be determined on the basis of the developed simulation model.
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are facing rapidly growing interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
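A probabilistic Markov cohort simulation of the kind described can be sketched in plain Python. The health states, transition probabilities, and parameter ranges below are invented; the paper itself uses WinBUGS.

```python
import random

# Illustrative probabilistic Markov cohort sketch (not the paper's WinBUGS code):
# three health states (well, sick, dead), parameters drawn per simulation run.
random.seed(1)

def run_cohort(p_well_sick, p_sick_dead, cycles=10):
    """Track state occupancy fractions of a cohort over discrete cycles."""
    well, sick, dead = 1.0, 0.0, 0.0
    for _ in range(cycles):
        new_sick = well * p_well_sick + sick * (1 - p_sick_dead)
        new_dead = dead + sick * p_sick_dead
        well = well * (1 - p_well_sick)
        sick, dead = new_sick, new_dead
    return well, sick, dead

# Probabilistic sensitivity analysis: sample parameters from assumed ranges.
results = [run_cohort(random.uniform(0.05, 0.15), random.uniform(0.1, 0.3))
           for _ in range(1000)]
mean_dead = sum(r[2] for r in results) / len(results)
```

In a Bayesian workflow, the sampled parameter ranges would be replaced by posterior draws inferred from the available evidence, feeding the economic model directly as the abstract describes.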
Modeling intraindividual variability with repeated measures data methods and applications
Hershberger, Scott L
2013-01-01
This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical example
A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model
Directory of Open Access Journals (Sweden)
WenBo Xie
2014-01-01
Full Text Available The recent decade has witnessed an increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from an overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method that treats user-item collecting behavior as a two-step process: every user first becomes a member of one latent user-group with a certain probability, and each user-group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. The experimental results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits competitive performance on precision, coverage, and diversity in comparison with four other typical recommendation methods. Moreover, we present an approximate strategy to reduce the computational complexity of our method with a slight degradation of performance.
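The two-step generative view can be sketched directly: a user first draws a latent group, and the group then draws an item. The probabilities, users, and items below are made up, and the Gibbs sampler that estimates them in the paper is omitted.

```python
import random

# Illustrative sketch of the two-step process (not the paper's sampler).
random.seed(7)

user_group = {"u1": [0.8, 0.2], "u2": [0.3, 0.7]}          # P(group | user), invented
group_item = [[0.6, 0.3, 0.1],                              # P(item | group 0)
              [0.1, 0.2, 0.7]]                              # P(item | group 1)
items = ["itemA", "itemB", "itemC"]

def draw(probs):
    """Sample an index from a discrete distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def recommend_scores(user):
    """Marginal item probabilities: sum_g P(g|u) * P(item|g)."""
    pg = user_group[user]
    return [sum(pg[g] * group_item[g][i] for g in range(2))
            for i in range(len(items))]

scores = recommend_scores("u1")            # itemA should dominate for u1
sample = items[draw(recommend_scores("u2"))]
```

In the actual method these conditional probabilities are unknown and are approximated from the observed user-item matrix by Gibbs sampling; the marginalization above is what produces the final recommendation scores.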
3D virtual human rapid modeling method based on top-down modeling mechanism
Directory of Open Access Journals (Sweden)
LI Taotao
2017-01-01
Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current situation and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level, downwards. While the relationships of connectors and the mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.
Sedukhin, V. V.; Anikeev, A. N.; Chumanov, I. V.
2017-11-01
This work examines a method for optimizing the hardening of the working layer of parts operating in highly abrasive conditions: a blend of refractory WC and TiC particles in a 70/30 wt.% ratio, prepared beforehand, is applied to the polystyrene model in the casting mould. After the metal is poured into the mould and held for crystallization, a study is carried out. Examination of the macro- and microstructure of the resulting samples shows that the thickness and structure of the hardened layer depend on the duration of interaction between the blend of hard carbides and the liquid metal. Different characters of interaction between the various dispersed particles and the matrix metal are observed under the same conditions. Tests of the abrasive wear resistance of the resulting materials, by the residual-mass method, were conducted under laboratory conditions. The wear-resistance results show that producing a hard coating from a blend of tungsten carbide and titanium carbide, applied to the surface of the foam polystyrene model before moulding, yields parts whose surface wear resistance is 2.5 times higher than that of analogous uncoated steel parts. At the same time, the energy required to transform a unit mass of the material into powder is 2.06 times higher for the hardened layer than for uncoated materials.
OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY
Directory of Open Access Journals (Sweden)
TĂNĂSESCU ANA
2014-05-01
Full Text Available Today, most economic organizations use different types of information systems to facilitate their activity. There are different methodologies, methods, and techniques that can be used to design information systems. In this paper, I present the advantages of using object-oriented modelling in the information system design of an economic organization. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out static and dynamic modelling of the system, through the best-known UML diagrams.
Modeling granular phosphor screens by Monte Carlo methods
International Nuclear Information System (INIS)
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-01-01
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd2O2S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than conventional Gd2O2S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
Modeling the Performance of the Fast Multipole Method on HPC platforms
Ibeid, Huda
2012-04-06
The current trend in high performance computing is pushing towards exascale computing. To achieve exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and address these bottlenecks in order to attain sustained petascale-and-beyond performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.
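A back-of-envelope cost model contrasting direct N-body evaluation with an FMM-style hierarchy. All constants, the expansion order p, and the leaf size s are rough assumptions for illustration, not the thesis's performance model.

```python
# Crude operation-count model (illustrative only).
def direct_ops(n):
    """All-pairs direct evaluation: O(N^2)."""
    return n * n

def fmm_ops(n, p=10, s=64):
    """Rough FMM count: near-field pair work plus per-cell translation work."""
    cells = max(1, n // s)
    near_field = n * s * 27 // 8        # neighbour-list interactions (crude constant)
    far_field = cells * (p ** 2) * 189  # M2L translations per cell (crude constant)
    return near_field + far_field

for n in (10**4, 10**6):
    print(n, direct_ops(n), fmm_ops(n))
```

Even with crude constants, the model shows why the hierarchical method wins at scale: its cost grows linearly in N while direct evaluation grows quadratically.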
Tail modeling in a stretched magnetosphere 1. Methods and transformations
International Nuclear Information System (INIS)
Stern, D.P.
1987-01-01
A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional ''stretch transformation'' along the x axis, a generalization of a method introduced by Voigt. The transformation is concisely represented by the ''stretch function'' f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is next shown that distortion transformations can be viewed as mappings of field lines from one pattern to another: Euler potentials are used in the derivation, but the final result requires knowledge only of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are then derived, and applications to field modeling, field line motion, MHD modeling, and incompressible fluid dynamics are considered. Copyright American Geophysical Union 1987
Energy Technology Data Exchange (ETDEWEB)
Sato, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)]; Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center]
1997-05-27
Discussed in this report is wavefield simulation for 3-dimensional seismic surveys. With exploration targets growing deeper and more complicated in structure, survey methods are now turning 3-dimensional. There are several modelling methods for the numerical calculation of 3-dimensional wavefields, such as the finite difference method and the pseudospectral method, all of which demand an exorbitantly large memory and long calculation times, and are costly. Such methods have lately become feasible, however, thanks to the advent of the parallel computer. Compared with the finite difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs the full wavefield just like the finite difference method, and does not introduce numerical variance into the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The object domain is divided among the processors, and each processor handles only its share, so that the parallel computation as a whole achieves very high speed. With the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
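The pseudospectral idea, differentiating by multiplying Fourier coefficients by ik, can be sketched on a periodic grid. A naive O(N²) DFT is used here so the sketch stays dependency-free; a production seismic code would use an FFT.

```python
import cmath
import math

# Illustrative pseudospectral spatial derivative on a periodic grid.
def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_derivative(u, length):
    """d/dx of periodic samples u on [0, length) via multiplication by ik."""
    n = len(u)
    U = dft(u)
    ik = [2j * math.pi * (k if k <= n // 2 else k - n) / length for k in range(n)]
    if n % 2 == 0:
        ik[n // 2] = 0.0   # zero the unpaired Nyquist mode for a real result
    return [v.real for v in idft([ik[k] * U[k] for k in range(n)])]

n, L = 32, 2.0 * math.pi
x = [L * j / n for j in range(n)]
u = [math.sin(xi) for xi in x]
du = spectral_derivative(u, L)     # should match cos(x) to near machine precision
```

The spectral derivative is exact for band-limited data, which is why the pseudospectral method can use far coarser grids (and hence less memory) than finite differences for the same accuracy.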
Directory of Open Access Journals (Sweden)
E Haji Nejad
2001-06-01
Full Text Available Different aspects of multinomial statistical modeling and its classifications have been studied so far. In this type of problem, Y is a qualitative random variable with T possible states, which are considered as classifications. The goal is prediction of Y based on a random vector X ∈ IR^m. Many methods for analyzing these problems have been considered. One of the modern and general methods of classification is Classification and Regression Trees (CART). Another method is recursive partitioning techniques, which have a strong relationship with nonparametric regression. Classical discriminant analysis is a standard method for analyzing these types of data. Flexible discriminant analysis is a combination of nonparametric regression and discriminant analysis, and classification using splines includes least squares regression and additive cubic splines. Neural networks are an advanced statistical method for analyzing these types of data. In this paper the properties of multinomial logistic regression were investigated, and this method was used for modeling the factors affecting the selection of contraceptive methods among married women aged 15-49 in Ghom province. The response variable has a tetranomial distribution. The levels of this variable are: nothing, pills, traditional methods, and a collection of other contraceptive methods. The set of significant independent variables was: place, age of women, education, history of pregnancy, and family size. Menstruation age and age at marriage were not statistically significant.
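A self-contained sketch of fitting a multinomial (softmax) logistic regression by gradient descent, the model class the study uses; the four classes and three covariates mirror the survey setting only loosely, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic analogue: predict one of 4 response categories from 3 covariates
n, p, k = 400, 3, 4
X = rng.normal(size=(n, p))
true_W = rng.normal(size=(p, k))
logits = X @ true_W
y = np.array([rng.choice(k, p=np.exp(l) / np.exp(l).sum()) for l in logits])

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

# Fit by gradient descent on the negative log-likelihood
W = np.zeros((p, k))
Y = np.eye(k)[y]                           # one-hot responses
for _ in range(2000):
    P = softmax(X @ W)
    W -= 0.1 * X.T @ (P - Y) / n           # gradient of the NLL

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(acc)                                 # well above the 0.25 chance level
```

Significance of individual covariates, as reported in the paper, would additionally require standard errors (e.g. from the observed information matrix), which this sketch omits.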
A copula method for modeling directional dependence of genes
Directory of Open Access Journals (Sweden)
Park Changyi
2008-05-01
Full Text Available Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationship between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, study of gene networking is now possible. In recent years, there has been an increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time. This problem is due to the interactive feature of the method, which requires a large search space. Since fitting a model using copulas does not require iterations, elicitation of priors, or complicated calculations of posterior distributions, the need to traverse extensive search spaces can be eliminated, leading to manageable computational efforts. The Bayesian network approach produces a discrete expression of conditional probabilities. Discreteness of the characteristics is not required in the copula approach, which uses a uniform representation of the continuous random variables. Our method is able to overcome the limitation of the Bayesian network method for gene-gene interaction, i.e. information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one group is eight histone genes and the other group is 19 genes which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation sensitive genes, repair related genes, replication protein A encoding gene, DNA replication initiation factor, securin gene, nucleosome assembly factor, and a subunit of the cohesin complex by adopting a measure of directional dependence based on a copula function. We have compared
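An empirical sketch of directional dependence in the copula spirit: margins are rank-transformed to uniforms, and the asymmetry is measured by how much of Var(V) is explained by E[V|U] versus the reverse (here E[V|U] is approximated by a polynomial fit; the paper's exact copula-based measure differs, and the "gene pair" below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def to_uniform(x):
    # empirical probability-integral transform: ranks mapped into (0, 1)
    return (np.argsort(np.argsort(x)) + 0.5) / len(x)

def directional_rho2(u, v, deg=3):
    # Var(E[V|U]) / Var(V), with E[V|U] approximated by a polynomial fit
    coef = np.polyfit(u, v, deg)
    fitted = np.polyval(coef, u)
    return fitted.var() / v.var()

# Asymmetric toy pair: y responds (nonlinearly) to x, but not vice versa
x = rng.normal(size=2000)
y = x**2 + 0.3 * rng.normal(size=2000)
u, v = to_uniform(x), to_uniform(y)

dd_xy = directional_rho2(u, v)   # large: x "drives" y
dd_yx = directional_rho2(v, u)   # small: y barely predicts x here
print(dd_xy, dd_yx)
```

The asymmetry dd_xy >> dd_yx is what a directional dependence measure is meant to detect, in contrast to symmetric correlation coefficients.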
Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.
Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok
2018-06-01
To evaluate the accuracy of a model made using the computer-aided design/computer-aided manufacture (CAD/CAM) milling method and 3D printing method and to confirm its applicability as a work model for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM), to produce a total of 10 models each, either using the milling method or the 3D printing method. The 20 models were then scanned using a desktop scanner and the CAD test model was formed. The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing CRM and CAD test model (CTM). The RMS value (152±52 µm) of the model manufactured by the milling method was significantly higher than the RMS value (52±9 µm) of the model produced by the 3D printing method. The accuracy of the 3D printing method is superior to that of the milling method, but at present, both methods are limited in their application as a work model for prosthesis manufacture.
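The RMS comparison after superimposing CRM and CTM can be sketched with a best-fit rigid alignment (Kabsch algorithm) of two point clouds; the point data and the 50 µm deviation scale below are synthetic stand-ins for scan data, and dedicated dental software adds surface matching steps this sketch omits:

```python
import numpy as np

def superimpose_rms(P, Q):
    """Best-fit rigid superposition of point set P onto Q (Kabsch), then RMS."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))          # avoid improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = Pc @ R - Qc
    return np.sqrt((diff**2).sum() / len(P))

rng = np.random.default_rng(2)
ref = rng.normal(size=(500, 3))                 # "reference model" points
theta = 0.7                                     # arbitrary rigid misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
# "test model": rotated, translated, plus 0.05-scale manufacturing deviations
test = ref @ Rz.T + np.array([5.0, -2.0, 1.0]) + 0.05 * rng.normal(size=ref.shape)

rms = superimpose_rms(test, ref)
print(rms)   # ~ sqrt(3) * 0.05: the rigid misalignment is removed, deviations remain
```

The point is that the rigid-body part of the mismatch is factored out, so the RMS reflects only the manufacturing deviation, which is what the study compares between milling and 3D printing.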
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parent, F1, F2 and backcrosses. A set of parents together with their reciprocal F1 and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation for covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in a t-test for parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by
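The jackknife step suggested for sampling variances can be sketched generically: delete-one resamples give a bias-corrected estimate and a standard error usable in a t-test. The estimator and data below are illustrative (a simple variance component on synthetic data), not the paper's MINQUE(0/1) machinery:

```python
import numpy as np

def jackknife(estimator, data):
    """Delete-one jackknife: bias-corrected estimate and standard error."""
    n = len(data)
    theta_full = estimator(data)
    leave_one_out = np.array([estimator(np.delete(data, i)) for i in range(n)])
    theta_dot = leave_one_out.mean()
    bias = (n - 1) * (theta_dot - theta_full)
    se = np.sqrt((n - 1) / n * ((leave_one_out - theta_dot)**2).sum())
    return theta_full - bias, se

rng = np.random.default_rng(3)
sample = rng.normal(loc=0.0, scale=2.0, size=100)

# Jackknife a (biased) variance estimate; the true value is 4.0
est, se = jackknife(lambda d: d.var(), sample)
t_stat = est / se            # t-statistic for H0: variance component = 0
print(est, se, t_stat)
```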
Space Environment Modelling with the Use of Artificial Intelligence Methods
Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.
1996-12-01
Space based technological systems are affected by the space weather in many ways. Several severe failures of satellites have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We will here present predictions made with the use of artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules, each module predicting the space weather on a specific time-scale. The time-scales range from minutes to months, with the fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, made either on the ground at Wilcox Solar Observatory (WSO) at Stanford or from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis. However, from SOHO, magnetograms will be available every 90 minutes. SOHO magnetograms as input to ANNs will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day from the satellite WIND. With the launch of ACE in 1997, solar wind data will on the other hand be available 24 hours per day. The conditions of the satellite environment are not only disturbed at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore
Computational Methods for Physical Model Information Management: Opening the Aperture
International Nuclear Information System (INIS)
Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.
2015-01-01
The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)
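One of the simplest text-classification baselines for associating a snippet with a process step is multinomial naive Bayes; the sketch below uses a toy corpus with three hypothetical fuel-cycle step labels (the labels, snippets, and step granularity are illustrative, not the IAEA Physical Model or the paper's actual models):

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative corpus: snippets labelled with hypothetical fuel-cycle steps
train = [
    ("uranium ore extracted from open pit mine", "mining"),
    ("ore crushed and leached at the mill", "mining"),
    ("centrifuge cascade increases u235 assay", "enrichment"),
    ("gaseous uf6 fed into centrifuge cascade", "enrichment"),
    ("spent fuel dissolved and plutonium separated", "reprocessing"),
    ("solvent extraction of spent fuel material", "reprocessing"),
]

# Multinomial naive Bayes with add-one (Laplace) smoothing
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def classify(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))   # log prior
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("cascade of centrifuges raising the assay"))  # enrichment
```

A production system of the kind described would of course layer richer language models and ontological reasoning on top of such a statistical core.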
Modeling of NiTiHf using finite difference method
Farjam, Nazanin; Mehrabi, Reza; Karaca, Haluk; Mirzaeifar, Reza; Elahinia, Mohammad
2018-03-01
NiTiHf is a high temperature and high strength shape memory alloy with transformation temperatures above 100 °C. A constitutive model based on Gibbs free energy is developed to predict the behavior of this material. Two different irrecoverable strains, transformation induced plastic strain (TRIP) and viscoplastic strain (VP), are considered when using high temperature shape memory alloys (HTSMAs). The first occurs during transformation at high levels of stress, and the second is related to creep, which is rate-dependent. The developed model is implemented for NiTiHf under uniaxial loading. The finite difference method is utilized to solve the proposed equations. The material parameters in the equations are calibrated from experimental data. Simulation results are presented to investigate the superelastic behavior of NiTiHf. The extracted results are compared with experimental tests of isobaric heating and cooling at different levels of stress, and also with superelastic tests at different levels of temperature. Further results are generated to investigate the capability of the proposed model in predicting the irrecoverable strain after full transformation in HTSMAs.
Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods
Directory of Open Access Journals (Sweden)
Lee Sael
2013-10-01
Full Text Available With the accumulation of next generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of the protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of the protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data that are obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. Also, a short review of current efforts to integrate experimental data into structural modeling is provided.
A robust absorbing layer method for anisotropic seismic wave modeling
Energy Technology Data Exchange (ETDEWEB)
Métivier, L., E-mail: ludovic.metivier@ujf-grenoble.fr [LJK, CNRS, Université de Grenoble, BP 53, 38041 Grenoble Cedex 09 (France); ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France); Brossier, R. [ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France); Labbé, S. [LJK, CNRS, Université de Grenoble, BP 53, 38041 Grenoble Cedex 09 (France); Operto, S. [Géoazur, Université de Nice Sophia-Antipolis, CNRS, IRD, OCA, Villefranche-sur-Mer (France); Virieux, J. [ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France)
2014-12-15
When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities. Incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial, as in many seismic imaging applications accounting accurately for the subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to the PML approach. This method is based on the decomposition of the wavefield into components propagating into and out of the domain of interest. Only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched, and are therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which the SMART method is based can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped.
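The selective-damping idea (damp only the outgoing part of the wavefield) can be illustrated in 1D, where the acoustic system splits exactly into right- and left-going characteristics; this is a deliberately simplified analogue of the SMART layer, with illustrative grid and damping parameters:

```python
import numpy as np

# 1D acoustics p_t + c u_x = 0, u_t + c p_x = 0, in characteristic form:
# w+ = p + u travels right, w- = p - u travels left. In the absorbing layers
# only the *outgoing* characteristic is damped (the selective-damping idea).
n, c = 400, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c
x = (np.arange(n) + 0.5) * dx

w = 40                                         # layer width in cells
ramp = (np.arange(w) / w)**2                   # smooth quadratic damping profile
sig_right = np.zeros(n); sig_right[-w:] = 100.0 * ramp        # damps w+ only
sig_left = np.zeros(n); sig_left[:w] = 100.0 * ramp[::-1]     # damps w- only

p = np.exp(-((x - 0.5) / 0.05)**2)             # pressure pulse in the interior
u = np.zeros(n)
wp, wm = p + u, p - u                          # characteristic variables
e0 = (p**2 + u**2).sum()

for _ in range(1200):                          # long enough to cross the layers
    # first-order upwind transport, then selective damping
    wp[1:] -= c * dt / dx * (wp[1:] - wp[:-1]); wp[0] = 0.0
    wm[:-1] += c * dt / dx * (wm[1:] - wm[:-1]); wm[-1] = 0.0
    wp *= 1.0 - dt * sig_right
    wm *= 1.0 - dt * sig_left

p, u = (wp + wm) / 2, (wp - wm) / 2
ratio = (p**2 + u**2).sum() / e0
print(ratio)                                   # residual energy: close to 0
```

In 1D homogeneous media the characteristics decouple, so the layer is trivially non-amplifying; the substance of the paper is proving dissipativity when anisotropy couples the components.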
A robust absorbing layer method for anisotropic seismic wave modeling
International Nuclear Information System (INIS)
Métivier, L.; Brossier, R.; Labbé, S.; Operto, S.; Virieux, J.
2014-01-01
When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities. Incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial, as in many seismic imaging applications accounting accurately for the subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to the PML approach. This method is based on the decomposition of the wavefield into components propagating into and out of the domain of interest. Only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched, and are therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which the SMART method is based can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped.
Application of blocking diagnosis methods to general circulation models. Part II: model simulations
Energy Technology Data Exchange (ETDEWEB)
Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)
2010-12-15
A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the
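Blocking detection of the kind discussed typically tests for a reversal of the meridional geopotential height gradient at mid-latitudes. The sketch below implements a Tibaldi-Molteni-style check on a synthetic 500 hPa height profile; the reference latitudes, threshold, and profiles are illustrative, not the paper's objectively derived, model-dependent parameters:

```python
import numpy as np

lat = np.arange(30.0, 81.0, 2.5)

def blocked(z_profile, thresh=-10.0):
    # Blocking requires a reversed gradient equatorward of the block (GHGS > 0)
    # together with strong westerlies poleward of it (GHGN < thresh).
    z = dict(zip(lat, z_profile))
    phi_s, phi_0, phi_n = 40.0, 60.0, 80.0     # illustrative reference latitudes
    ghgs = (z[phi_0] - z[phi_s]) / (phi_0 - phi_s)   # gpm per degree latitude
    ghgn = (z[phi_n] - z[phi_0]) / (phi_n - phi_0)
    return ghgs > 0.0 and ghgn < thresh

# Normal zonal flow: height decreases poleward
zonal = 5600.0 - 8.0 * (lat - 30.0)
# Blocked flow: a high-latitude ridge reverses the mid-latitude gradient
block = zonal + 400.0 * np.exp(-((lat - 62.0) / 6.0)**2)

print(blocked(zonal), blocked(block))   # False True
```

The paper's point is that the analogues of `thresh` and the reference latitudes should be derived from each model's own climatology rather than copied from observations.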
Theoretical Modelling Methods for Thermal Management of Batteries
Directory of Open Access Journals (Sweden)
Bahman Shabani
2015-09-01
Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS to store the excess renewable energy and then reuse this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS is affected by, among other things, the design, the materials used and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating the ESS at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours of modelling thermal management systems for these systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.
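The simplest of the theoretical approaches reviewed is a lumped-capacitance energy balance, m·cp·dT/dt = Q_gen − h·A·(T − T_amb). A minimal explicit time-stepping sketch, with all parameter values illustrative placeholders rather than measured cell data:

```python
# Lumped-capacitance thermal model of a storage cell (illustrative parameters)
m, cp = 1.0, 1000.0         # mass (kg), specific heat (J/kg/K)
h, A = 10.0, 0.05           # convection coefficient (W/m^2/K), surface area (m^2)
T_amb, Q_gen = 25.0, 5.0    # ambient temperature (C), internal heat generation (W)

dt, t_end = 1.0, 20000.0    # time step (s), simulated duration (s)
T = T_amb
for _ in range(int(t_end / dt)):          # explicit Euler integration
    dTdt = (Q_gen - h * A * (T - T_amb)) / (m * cp)
    T += dt * dTdt

T_ss = T_amb + Q_gen / (h * A)            # analytic steady state
print(T, T_ss)                            # converges to the steady state
```

With a time constant m·cp/(h·A) of 2000 s, the 20000 s run reaches steady state; distributed (CFD-type) models refine this by resolving temperature gradients within the cell and pack.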
Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering
1996-05-01
Examination was made about the applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface wave phase velocity in a microtremor, for the estimation of the underground structure. In this examination, microtremor data recorded in Morioka City, Iwate Prefecture, was used. In the SAC method, a spatial autocorrelation function with the frequency as a variable is determined from microtremor data observed by circular arrays. Then, the Bessel function is fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, for the determination of the phase velocity. The result of the AR model application in this study and the results of the conventional BPF and FFT methods were compared. It was then found that the phase velocities obtained by the BPF and FFT methods were more dispersed than those obtained by the AR model. The dispersion in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
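The Bessel-function fitting step of the SAC method can be sketched directly: at frequency f the azimuthally averaged correlation between stations a distance r apart follows ρ(f, r) = J0(2πfr/c(f)), so the phase velocity c is found by fitting J0 to observed coefficients. The data below are synthetic and the grid search is a deliberately simple stand-in for a proper least-squares fit:

```python
import numpy as np

def bessel_j0(z):
    # J0 from its integral representation, to keep the sketch numpy-only
    th = np.linspace(0.0, np.pi, 2001)
    return np.trapz(np.cos(np.outer(z, np.sin(th))), th, axis=1) / np.pi

f_hz, c_true = 5.0, 400.0                       # assumed "true" values
r = np.array([5.0, 10.0, 15.0, 20.0, 30.0])     # station separations (m)
rng = np.random.default_rng(4)
# Synthetic SAC coefficients with a little observational noise
rho_obs = bessel_j0(2 * np.pi * f_hz * r / c_true) + 0.02 * rng.normal(size=r.size)

# Recover the phase velocity by grid search over candidate c values
c_grid = np.linspace(100.0, 1000.0, 901)
misfit = [np.sum((bessel_j0(2 * np.pi * f_hz * r / c) - rho_obs)**2) for c in c_grid]
c_est = c_grid[int(np.argmin(misfit))]
print(c_est)    # close to the true 400 m/s
```

Repeating the fit across frequencies yields the dispersion curve from which the underground structure is inverted.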
METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS
Directory of Open Access Journals (Sweden)
Александр Иванович МЕНЕЙЛЮК
2016-02-01
Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management, it is expedient to focus on selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for reprofiling reconstruction of buildings. The method is based on compiling project variants in the program Microsoft Project and on experimental statistical analysis using the program COMPEX. The introduction of this technique in the reprofiling of buildings allows choosing efficient project models, depending on the given constraints. This technique can also be used for various other construction projects.
[Hierarchy structuring for mammography technique by interpretive structural modeling method].
Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko
2009-10-20
Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography is recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help understand the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result showed a six-layer hierarchy whose top node was explanation of the entire procedure of mammography. Male technologists were identified as a negative factor. Factors concerned with explanation were at the upper nodes. We gave particular attention to X-ray techniques and related considerations. The findings will help beginners improve their skills.
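The ISM step applied to the 14 factors can be sketched generically: from a binary matrix of direct influences, compute the transitive reachability matrix and partition factors into hierarchy levels. The four factor names below are illustrative placeholders, not the factors elicited in the study:

```python
import numpy as np

# Illustrative factors and direct-influence matrix A (A[i, j] = 1 means
# factor i directly influences factor j); a simple chain for clarity.
factors = ["explain whole procedure", "positioning", "compression", "image check"]
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

n = len(factors)
# Reachability matrix: add self-loops, then iterate boolean products to closure
R = ((A + np.eye(n, dtype=int)) > 0).astype(int)
for _ in range(n):
    R = ((R @ R) > 0).astype(int)

# Classic ISM level partitioning: a factor is on the current level when its
# reachability set (within the remaining factors) lies inside its antecedent set.
levels, remaining = [], set(range(n))
while remaining:
    level = [i for i in remaining
             if {j for j in remaining if R[i, j]}
             <= {j for j in remaining if R[j, i]}]
    levels.append([factors[i] for i in level])
    remaining -= set(level)

print(levels)   # most dependent factor first, root influence last
```

In the study's six-layer result, "explanation of the entire procedure" plays the role of the root node extracted last here.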
Engineering models and methods for industrial cell control
DEFF Research Database (Denmark)
Lynggaard, Hans Jørgen Birk; Alting, Leo
1997-01-01
This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer based control and monitoring systems for production cells. The project participants are The Danish Academy of Technical Sciences, the Institute of Manufacturing Engineering at the Technical University of Denmark and ODENSE STEEL SHIPYARD Ltd. The manufacturing environment and the current practice at ODENSE STEEL SHIPYARD … It is concluded that cell control technology provides for increased performance in production systems, and that the Cell Control Engineering concept reduces the effort for providing and operating high quality and high functionality cell control solutions for the industry.
Methods of Modelling Marketing Activity on Software Sales
Directory of Open Access Journals (Sweden)
Bashirov Islam H.
2013-11-01
Full Text Available The article studies the topical issue of developing methods of modelling marketing activity in software sales for the efficient functioning of an enterprise. On the basis of an analysis of the market type for the studied CloudLinux OS product, the article identifies the market structure type: monopolistic competition. To provide the information basis for marketing activity in the target market segment, the article proposes the survey method. The article provides a questionnaire, containing specific questions on the studied market segment of hosting services, for an online survey conducted with the help of the Survio service. In accordance with the system approach, CloudLinux OS has the properties of a system, notably diversity. Economic differences are non-price indicators that have no numeric expression and are quality descriptions; analysis of the market and the conducted survey allow obtaining them. The combination of price and non-price indicators provides a complete description of the product properties. To calculate an integral indicator of competitiveness, the article proposes a model based on direct algebraic addition of the weighted measures of individual indicators, normalisation of formalised indicators, and use of the fuzzy-set mechanism for non-formalised indicators. The calculated indicator allows not only assessment of the current level of competitiveness, but also identification of the influence of changes in various indicators, which increases the efficiency of marketing decisions. Also, having identified the target customers of the hosting OS and formalised the non-price parameters, it is possible to search for a set of optimal characteristics of the product. As a result, an optimal strategy for advancing the product to the market is formed.
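The integral competitiveness indicator described (weighted addition of normalised price and non-price indicators) can be sketched as follows; the products, indicator values, and weights are invented for illustration and the fuzzy-set treatment of non-formalised indicators is reduced to pre-assigned scores in [0, 1]:

```python
# Integral competitiveness as a weighted sum of normalised indicators.
# Price is normalised inversely (lower price -> higher score); non-price
# indicators are already scores in [0, 1] (standing in for fuzzy membership).
products = {
    "CloudLinux-like OS": {"price": 12.0, "support": 0.8, "stability": 0.9},
    "competitor":         {"price": 10.0, "support": 0.6, "stability": 0.7},
}
weights = {"price": 0.4, "support": 0.3, "stability": 0.3}   # sum to 1

def competitiveness(name):
    p = products[name]
    best_price = min(v["price"] for v in products.values())
    score = weights["price"] * best_price / p["price"]       # inverse for price
    score += weights["support"] * p["support"]
    score += weights["stability"] * p["stability"]
    return score

for name in products:
    print(name, round(competitiveness(name), 3))
```

Varying one indicator while holding the others fixed shows its marginal effect on the integral score, which is the sensitivity use the article describes.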
Non linear permanent magnets modelling with the finite element method
International Nuclear Information System (INIS)
Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.
1989-01-01
In order to perform the calculation of permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (Ferrites, NdFeB, SmCo5). In linear cases, the permeability of permanent magnets is a tensor, fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In non linear cases, the model uses a texture function which represents the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D, where the tensor is used in the Newton-Raphson procedure. 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The results obtained for an ideally oriented ferrite magnet and for a real one, using a measured texture parameter, are analyzed.
Hybrid CMS methods with model reduction for assembly of structures
Farhat, Charbel
1991-01-01
Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore there must be a methodology which can predict the dynamic characteristics of the assembled structure, based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems, (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes, and (3) it does answer specific questions such as 'how does the global fundamental frequency vary if I change the physical parameters of substructure k by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.
Preequilibrium decay models and the quantum Green function method
International Nuclear Information System (INIS)
Zhivopistsev, F.A.; Rzhevskij, E.S. (Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow. Inst. Teoreticheskoj i Ehksperimental'noj Fiziki)
1977-01-01
The nuclear process mechanism and preequilibrium decay involving complex particles are expounded on the basis of the Green function formalism without weak-interaction assumptions. The Green function method is generalized to a general nuclear reaction: A+α → B+β+γ+...+ρ, where A is the target nucleus, α is a complex particle in the initial state, B is the final nucleus, and β, γ, ..., ρ are nuclear fragments in the final state. The relationship between the generalized Green function and the S_fi matrix is established. The resultant equations account for: 1) direct and quasi-direct processes responsible for the angular distribution asymmetry of the preequilibrium component; 2) the appearance of addends corresponding to the excitation of complex states of the final nucleus; and 3) the relationship between the preequilibrium decay model and the general models of nuclear reaction theories (Lippman-Schwinger formalism). The formulation of preequilibrium emission via the S(T) matrix makes it possible to account for all the differential terms in succession, which is important for investigating the angular distribution asymmetry of emitted particles.
Three-Component Forward Modeling for Transient Electromagnetic Method
Directory of Open Access Journals (Sweden)
Bin Xiong
2010-01-01
Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique increasingly fails to meet the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent: the present matrix is only one fourth the size of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased considerably, and that three-component interpretation gets the most out of the collected data, making it easier to analyze and interpret the spatial characteristics of anomalous bodies more comprehensively.
Modeling local extinction in turbulent combustion using an embedding method
Knaus, Robert; Pantano, Carlos
2012-11-01
Local regions of extinction in diffusion flames, called ``flame holes,'' can reduce the efficiency of combustion and increase the production of certain pollutants. At sufficiently high speeds, a flame may also be lifted from the rim of the burner to a downstream location that may be stable. These two phenomena share a common underlying mechanism of propagation related to edge-flame dynamics, where chemistry and fluid mechanics are equally important. We present a formulation that describes the formation, propagation, and growth of flame holes on the stoichiometric surface using edge-flame dynamics. The boundary separating the flame from the quenched region is modeled using a progress variable defined on the moving stoichiometric surface that is embedded in the three-dimensional space using an extension algorithm. This Cartesian problem is solved using a high-order finite-volume WENO method extended to this nonconservative problem. This algorithm can track the dynamics of flame holes in a turbulent reacting shear layer and model flame liftoff without requiring full chemistry calculations.
Biologic data, models, and dosimetric methods for internal emitters
International Nuclear Information System (INIS)
Weber, D.A.
1990-01-01
The absorbed radiation dose from internal emitters has been and will remain a pivotal factor in assessing risk and therapeutic utility when selecting radiopharmaceuticals for diagnosis and treatment. Although direct measurements of absorbed dose and dose distributions in vivo have been and will continue to be made in limited situations, the measurement of the biodistribution and clearance of radiopharmaceuticals in human subjects, and the use of these data, is likely to remain the primary means of calculating and estimating absorbed dose from internal emitters over the next decade. Since several approximations are used in these schema to calculate dose, attention must be given to inspecting and improving the application of this dosimetric method as better techniques are developed to assay body activity and as more experience is gained in applying these schema to calculating absorbed dose. The need for small-scale dosimetry to calculate absorbed dose at the cellular level is discussed in this paper. Other topics include dose estimates for internal emitters, biologic data, mathematical models, and the dosimetric methods employed. 44 refs
Mathematical modellings and computational methods for structural analysis of LMFBR's
International Nuclear Information System (INIS)
Liu, W.K.; Lam, D.
1983-01-01
In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via the 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to obtain the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. With regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)
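The 'added mass' idea can be sketched as a toy generalized eigenproblem: the fluid contributes a mass matrix Ma on top of the structural mass Ms, and the wet natural frequencies follow from K x = w^2 (Ms + Ma) x. All matrix values below are illustrative, not LMFBR data.

```python
import numpy as np

# Toy 2-DOF structure: stiffness K and structural mass Ms (illustrative values).
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
Ms = np.eye(2)

def natural_frequencies(K, M):
    """Circular frequencies from the generalized eigenproblem K x = w^2 M x."""
    L = np.linalg.cholesky(M)           # M is symmetric positive definite
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T               # reduce to a standard symmetric problem
    return np.sqrt(np.linalg.eigvalsh(A))

w_dry = natural_frequencies(K, Ms)      # structure in vacuo

# The fluid inertia enters as an 'added mass' matrix added to Ms.
Ma = 0.5 * np.eye(2)
w_wet = natural_frequencies(K, Ms + Ma)

# Fluid loading lowers every natural frequency.
assert np.all(w_wet < w_dry)
```

Because the added mass only enlarges the mass matrix, every wet frequency is strictly below its dry counterpart, which is the qualitative effect the 'added mass' approach captures.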
Methods for MHC genotyping in non-model vertebrates.
Babik, W
2010-03-01
Genes of the major histocompatibility complex (MHC) are considered a paradigm of adaptive evolution at the molecular level and as such are frequently investigated by evolutionary biologists and ecologists. Accurate genotyping is essential for understanding the role that MHC variation plays in natural populations, but may be extremely challenging. Here, I discuss the DNA-based methods currently used for genotyping MHC in non-model vertebrates, as well as techniques likely to find widespread use in the future. I also highlight the aspects of MHC structure that are relevant for genotyping, and detail the challenges posed by the complex genomic organization and high sequence variation of MHC loci. Special emphasis is placed on designing appropriate PCR primers, accounting for artefacts, and the problem of genotyping alleles from multiple co-amplifying loci, a strategy that is frequently necessary due to the structure of the MHC. The suitability of typing techniques is compared across various research situations, strategies for efficient genotyping are discussed, and areas of likely future progress are identified. This review addresses well-established typing methods such as Single Strand Conformation Polymorphism (SSCP), Denaturing Gradient Gel Electrophoresis (DGGE), Reference Strand Conformational Analysis (RSCA), and cloning of PCR products. In addition, it covers the intriguing possibility of direct amplicon sequencing followed by computational inference of alleles, as well as next-generation sequencing (NGS) technologies; the latter may, in the future, find widespread use in typing complex multilocus MHC systems. © 2009 Blackwell Publishing Ltd.
Comparison of parametric methods for modeling corneal surfaces
Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean
2017-02-01
Corneal topography is a medical imaging technique for obtaining the 3D shape of the cornea as a set of 3D points on its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces, least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials, which are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain, which is convenient for modeling the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics, which are particularly well suited for modeling nearly spherical objects, as is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters, and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior, with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shape: keratoconus and Fuchs' dystrophy.
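The least-squares fitting step can be sketched on synthetic data; here a low-order bivariate polynomial basis stands in for the Zernike, Bhatia-Wolf, or Spherical Harmonics bases, and the 'corneal' surface is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic elevation data: a smooth bowl plus small measurement noise.
x, y = rng.uniform(-1, 1, (2, 500))
z_true = 0.5 * (x**2 + y**2) - 0.1 * x * y
z = z_true + rng.normal(0, 1e-3, x.shape)

# Design matrix for a low-order polynomial basis (stand-in for Zernike terms).
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# Least-squares fit: minimize ||A c - z||_2, then report the RMSE criterion.
c, *_ = np.linalg.lstsq(A, z, rcond=None)
rmse = np.sqrt(np.mean((A @ c - z) ** 2))
```

With enough sample points, the recovered coefficient on x^2 approaches the true 0.5 and the RMSE approaches the noise floor, which is the same dimensionality-reduction trade-off (thousands of points to a handful of coefficients) the paper evaluates.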
Nonperturbative stochastic method for driven spin-boson model
Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn
2013-01-01
We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data
a Range Based Method for Complex Facade Modeling
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact, the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect: it really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a complete 3D way.
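The core operation, measuring each point's distance from the average plane and mapping it to grayscale, can be sketched in a few lines; the facade point cloud below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic facade point cloud: points near a plane plus small relief
# (the 'roughness' the displacement map is meant to capture).
n = 1000
xy = rng.uniform(0, 10, (n, 2))
relief = 0.05 * np.sin(xy[:, 0])
cloud = np.column_stack([xy, relief])

# Fit the average plane by SVD of the centered cloud; its normal is the
# singular vector associated with the smallest singular value.
centroid = cloud.mean(axis=0)
_, _, Vt = np.linalg.svd(cloud - centroid)
normal = Vt[-1]

# Signed distance of each point from the plane, normalized to a grayscale
# displacement value in 0..255.
dist = (cloud - centroid) @ normal
gray = np.round(255 * (dist - dist.min()) / np.ptp(dist)).astype(np.uint8)
```

Rasterizing `gray` onto a regular grid at the chosen pixel size would then give the height map fed to the Displacement modifier.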
A RANGE BASED METHOD FOR COMPLEX FACADE MODELING
Directory of Open Access Journals (Sweden)
A. Adami
2012-09-01
homogeneous point cloud of the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact, the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect: it really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a complete 3D way.
Studies on sulfate attack: Mechanisms, test methods, and modeling
Santhanam, Manu
The objective of this research study was to investigate various issues pertaining to the mechanisms, testing methods, and modeling of sulfate attack in concrete. The study was divided into the following segments: (1) effect of gypsum formation on the expansion of mortars, (2) attack by the magnesium ion, (3) sulfate attack in the presence of chloride ions---differentiating seawater and groundwater attack, (4) use of admixtures to mitigate sulfate attack---entrained air, sodium citrate, silica fume, and metakaolin, (5) effects of temperature and concentration of the attack solution, (6) development of new test methods using concrete specimens, and (7) modeling of the sulfate attack phenomenon. Mortar specimens using portland cement (PC) and tricalcium silicate (C3S), with or without mineral admixtures, were prepared and immersed in different sulfate solutions. In addition to this, portland cement concrete specimens were also prepared and subjected to complete and partial immersion in sulfate solutions. Physical measurements, chemical analyses, and microstructural studies were performed periodically on the specimens. Gypsum formation was seen to cause expansion of the C3S mortar specimens. Statistical analyses of the data also indicated that the quantity of gypsum was the most significant factor controlling the expansion of mortar bars. The attack by the magnesium ion was found to drive the reaction towards the formation of brucite. Decalcification of the C-S-H and its subsequent conversion to the non-cementitious M-S-H was identified as the mechanism of destruction in magnesium sulfate attack. Mineral admixtures were beneficial in combating sodium sulfate attack, while reducing the resistance to magnesium sulfate attack. Air entrainment did not change the measured physical properties, but reduced the visible distress of the mortars. Sodium citrate caused a substantial reduction in the rate of damage of the mortars due to its retarding effect. Temperature and
Deformation data modeling through numerical models: an efficient method for tracking magma transport
Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.
2017-12-01
Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecasting of volcanic eruptions is notoriously difficult. Within this framework, one of the most promising methods to evaluate volcanic hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up the source parameter estimation for a given volcano showing signs of unrest.
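As a minimal stand-in for the FE Green functions and the Genetic Algorithm search, the sketch below inverts synthetic uplift data for a point source's depth and volume change by brute-force grid search; the Mogi half-space formula (in one common form) and all parameter values are assumptions for illustration:

```python
import numpy as np

NU = 0.25  # Poisson's ratio

def mogi_uz(r, depth, dV, nu=NU):
    """Vertical surface displacement of a Mogi point source (one common form)."""
    return (1 - nu) * dV * depth / (np.pi * (depth**2 + r**2) ** 1.5)

# Synthetic 'observed' uplift from a source at 2 km depth with dV = 1e6 m^3.
r_obs = np.linspace(0.0, 8000.0, 40)
d_true, dV_true = 2000.0, 1e6
uz_obs = mogi_uz(r_obs, d_true, dV_true)

# Brute-force grid search over depth and volume change (a simple stand-in
# for the Genetic Algorithm used in the abstract).
depths = np.linspace(500.0, 5000.0, 46)
vols = np.linspace(1e5, 5e6, 50)
best = min(((np.sum((mogi_uz(r_obs, d, v) - uz_obs) ** 2), d, v)
            for d in depths for v in vols))
_, d_hat, v_hat = best
```

With noise-free data the search recovers the true parameters exactly; the point of the paper's reciprocity trick is that, with real FE Green functions, only one forward computation per observation point is needed rather than one per candidate source location.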
CAD-based Monte Carlo automatic modeling method based on primitive solid
International Nuclear Information System (INIS)
Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang
2016-01-01
Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half-spaces. • The method was tested on the ITER model, which validated its correctness and efficiency. • The method was integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive-solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed that can bi-convert between a CAD model and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive-solid model, the CAD model is decomposed into several convex solid sets, and then the corresponding primitive solids are generated and exported. When converting from a primitive-solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.
Data Mining Methods to Generate Severe Wind Gust Models
Directory of Open Access Journals (Sweden)
Subana Shanmuganathan
2014-01-01
Full Text Available Gaining knowledge on weather patterns, trends, and the influence of their extremes on various crop production yields and quality continues to be a quest by scientists, agriculturists, and managers. Precise and timely information aids decision-making, which is widely accepted as intrinsically necessary for increased production and improved quality. Studies in this research domain, especially those related to data mining and interpretation, are being carried out by the authors and their colleagues. Some of this work, relating to data definition, description, analysis, and modelling, is described in this paper. This includes studies that have evaluated extreme dry/wet weather events against reported yield at different scales in general. They indicate the effects of weather extremes such as prolonged high temperatures, heavy rainfall, and severe wind gusts. Occurrences of these events are among the main weather extremes that impact many crops worldwide. Wind gusts are difficult to anticipate due to their rapid manifestation and yet can have catastrophic effects on crops and buildings. This paper examines the use of data mining methods to reveal patterns in weather conditions, such as time of day, month of year, wind direction, speed, and severity, using a data set from a single location. Case study data are used to provide examples of how the methods can elicit meaningful information and depict it in a fashion usable for management decision-making. Historical weather data acquired between 2008 and 2012 from telemetry devices installed in a vineyard in the north of New Zealand have been used for this study. The results show that applying data mining techniques to the local weather conditions, such as relative pressure, temperature, wind direction, and speed recorded at irregular intervals, can produce new knowledge relating to wind gust patterns for vineyard management decision-making.
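The descriptive binning that precedes any formal mining can be sketched as follows; the telemetry rows and the 20 m/s severity cutoff are hypothetical, not values from the study:

```python
from collections import Counter
from datetime import datetime

# Hypothetical telemetry rows: (timestamp, wind speed in m/s, direction).
records = [
    ("2010-01-03 14:10", 22.5, "W"),
    ("2010-01-03 15:40", 19.8, "W"),
    ("2010-02-11 02:05", 31.0, "NW"),
    ("2010-02-11 13:55", 27.3, "NW"),
    ("2010-06-20 16:30", 24.1, "SW"),
]

GUST_THRESHOLD = 20.0  # m/s; an assumed severity cutoff

# Bin severe gusts by hour of day and by direction -- the kind of simple
# descriptive pattern extracted before any formal mining step.
by_hour, by_dir = Counter(), Counter()
for ts, speed, direction in records:
    if speed >= GUST_THRESHOLD:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        by_hour[hour] += 1
        by_dir[direction] += 1

print(by_dir.most_common(1))  # -> [('NW', 2)]
```

Clustering or association-rule mining would then be run on such binned features (hour, month, direction, severity) to surface recurring gust patterns.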
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
Detection of Internal Short Circuit in Lithium Ion Battery Using Model-Based Switching Model Method
Directory of Open Access Journals (Sweden)
Minhwan Seo
2017-01-01
Full Text Available Early detection of an internal short circuit (ISCr) in a Li-ion battery can prevent it from undergoing thermal runaway and thereby ensure battery safety. In this paper, a model-based switching model method (SMM) is proposed to detect an ISCr in the Li-ion battery. The SMM updates the model of the Li-ion battery with ISCr to improve the accuracy of the ISCr resistance R_ISCf estimates. The open-circuit voltage (OCV) and the state of charge (SOC) are estimated by applying the equivalent circuit model, using the recursive least squares algorithm and the relation between OCV and SOC. As a fault index, R_ISCf is estimated from the estimated OCVs and SOCs to detect the ISCr and is used to update the model; this process yields accurate estimates of OCV and R_ISCf. The next R_ISCf is then estimated and used to update the model iteratively. Simulation data from a MATLAB/Simulink model and experimental data verify that this algorithm achieves highly accurate R_ISCf estimates for detecting the ISCr, thereby helping the battery management system achieve early detection of the ISCr.
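A minimal sketch of the recursive least squares step the SMM relies on, applied to an invented scalar leakage model i = v / R_ISC rather than the paper's full equivalent-circuit model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measurements: an internal short draws a leakage current
# i_leak = v_oc / R_isc (an assumed simplification for illustration).
R_TRUE = 50.0  # ohm, assumed short-circuit resistance
v_oc = rng.uniform(3.0, 4.2, 200)
i_leak = v_oc / R_TRUE + rng.normal(0, 1e-4, v_oc.shape)

# Recursive least squares for the single parameter g = 1/R_isc in i = g * v.
g, P = 0.0, 1e3          # initial estimate and covariance
lam = 0.999              # forgetting factor
for v, i in zip(v_oc, i_leak):
    k = P * v / (lam + v * P * v)   # gain
    g += k * (i - g * v)            # correct estimate with prediction error
    P = (P - k * v * P) / lam       # update covariance

R_est = 1.0 / g
```

The forgetting factor lets the estimate track a degrading short; in the paper the same recursive update runs on the OCV/SOC relation of the switched battery model rather than this toy leakage law.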
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
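The LASSO's appeal for NTCP modeling is that soft-thresholding drives irrelevant coefficients exactly to zero, yielding a sparse, interpretable model. A minimal sketch on invented data (cyclic coordinate descent; a linear rather than NTCP-logistic model is used for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 'dose-feature' matrix: 50 patients, 10 candidate predictors,
# only two of which truly drive the outcome (a stand-in for NTCP data).
X = rng.normal(size=(50, 10))
beta_true = np.zeros(10)
beta_true[[0, 3]] = [1.5, -2.0]
y = X @ beta_true + rng.normal(0, 0.1, 50)

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

beta = lasso_cd(X, y, lam=5.0)
selected = np.flatnonzero(np.abs(beta) > 1e-3)
```

The penalty zeroes out the noise predictors while keeping the two true ones, mirroring why the abstract finds LASSO both predictive and easy to interpret; in practice the penalty strength would be chosen by the same repeated cross-validation the paper uses.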
International Nuclear Information System (INIS)
Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t
2012-01-01
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
Neural node network and model, and method of teaching same
Parlos, Alexander G. (Inventor); Atiya, Amir F. (Inventor); Fernandez, Benito (Inventor); Tsai, Wei K. (Inventor); Chong, Kil T. (Inventor)
1995-01-01
The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
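A single time step of such a hidden layer, with unit-delay local feedback and crosstalk, can be sketched as follows; the layer sizes, random weights, and tanh transfer function are illustrative assumptions, and no teaching step is shown:

```python
import numpy as np

rng = np.random.default_rng(4)

# One hidden layer of m nodes: each node sees the previous layer's output,
# its own output delayed one step (local feedback), and the delayed outputs
# of its peers (crosstalk). Weights are random illustrative values.
n_in, m = 3, 4
W_in = rng.normal(scale=0.5, size=(m, n_in))   # previous-layer weights
w_self = rng.normal(scale=0.5, size=m)         # local-feedback weights
W_cross = rng.normal(scale=0.5, size=(m, m))   # crosstalk weights
np.fill_diagonal(W_cross, 0.0)                 # self term handled separately

def step(u, h_prev):
    """One time step: transfer fn of input, local feedback, and crosstalk."""
    return np.tanh(W_in @ u + w_self * h_prev + W_cross @ h_prev)

h = np.zeros(m)                  # delayed outputs start at zero
for t in range(10):              # run the recurrence on a constant input
    h = step(np.array([1.0, 0.5, -0.2]), h)
```

Because the feedback enters only through unit delays, the forward computation at each step stays purely feed forward, which is what makes the patent's immediate gradient-descent teaching rule possible.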
Methods for Geometric Data Validation of 3d City Models
Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2015-12-01
Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
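Two of the polygon-level checks (closeness of the bounding linear ring, and planarity against a tolerance) can be sketched directly; the tolerance values and test quads below are illustrative:

```python
import numpy as np

def ring_is_closed(ring, tol=1e-9):
    """A polygon's bounding linear ring must end where it starts."""
    return np.linalg.norm(np.asarray(ring[0]) - np.asarray(ring[-1])) <= tol

def is_planar(ring, tol=1e-6):
    """All vertices must lie within `tol` of the best-fit plane.

    The plane normal is the singular vector of the centered vertex cloud
    with the smallest singular value.
    """
    pts = np.asarray(ring, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    dist = np.abs(centered @ Vt[-1])
    return float(dist.max()) <= tol

# A flat closed quad vs. one with a lifted corner.
flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 0)]
warped = [(0, 0, 0), (1, 0, 0), (1, 1, 0.3), (0, 1, 0), (0, 0, 0)]

assert ring_is_closed(flat) and is_planar(flat)
assert not is_planar(warped)
```

As the abstract stresses, the tolerance choices matter: too tight, and measurement noise flags every polygon; too loose, and genuinely warped geometry passes.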
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model, as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
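The variable-selection behaviour that favours LASSO here can be seen in a small sketch: soft-thresholding shrinks irrelevant coefficients exactly to zero, yielding the interpretable sparse models the abstract describes. This is a generic coordinate-descent LASSO on synthetic data, not the authors' NTCP pipeline; all variable names are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO with soft-thresholding
    (assumes the columns of X are standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # only feature 0 matters
beta = lasso_cd(X, y, lam=0.5)
print(np.round(beta, 2))
```

Unlike stepwise selection, the penalty shrinks the retained coefficient as well, which is part of why LASSO generalizes better under cross-validation.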
Systematic Methods and Tools for Computer Aided Modelling
DEFF Research Database (Denmark)
Fedorova, Marina
and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer......-format and COM-objects, are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks....
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the following three conditions hold: 1) there exists a solution, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
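The regularization step described above can be illustrated with the simplest choice, Tikhonov (ridge) regularization, which replaces an ill-posed linear problem with a nearby well-posed one. This is a generic sketch, not the DALEC/MCMC/EnKF machinery used in the intercomparison experiments.

```python
import numpy as np

def tikhonov(H, y, alpha):
    """Replace the ill-posed problem H x = y with the well-posed
    minimization of ||Hx - y||^2 + alpha ||x||^2 (closed form)."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

# An ill-conditioned model operator (Hilbert matrix) and slightly noisy data
n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
y = H @ x_true + 1e-4 * rng.normal(size=n)

x_naive = np.linalg.solve(H, y)     # direct inversion amplifies the noise wildly
x_reg = tikhonov(H, y, alpha=1e-6)  # regularized solution stays near x_true
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The violated condition here is the third one (continuous dependence on the data): a tiny perturbation of y destroys the naive solution, while the regularized solution degrades gracefully at the price of a small bias.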
Power systems with nuclear-electric generators - Modelling methods
International Nuclear Information System (INIS)
Valeca, Serban Constantin
2002-01-01
This work is a broad analysis of the issue of sustainable nuclear power development, with direct conclusions regarding the Nuclear Programme of Romania. The work targets specialists and decision-making boards. Specific to nuclear power development is its public dimension, the public being most often misinformed by non-professional media. The following problems are debated thoroughly: - safety, nuclear risk respectively, is treated in chapters 1 and 7, aiming at highlighting the quality of nuclear power and consequently paving the way to public acceptance; - the environment, considered both as a resource of raw materials and as the medium essential for the continuation of life, which should be appropriately protected to ensure healthy and sustainable development of human society; its analysis is also presented in chapters 1 and 7, where the problem of safe management of radioactive waste is addressed too; - investigation methods based on the information science of nuclear systems, applied in carrying out the nuclear strategy and planning, are widely analyzed in chapters 2, 3 and 6; - optimizing the processes by following up the structure of investment and operation costs, and, generally, the management of nuclear units, is treated in chapters 5 and 7; - nuclear weapon proliferation as a possible consequence of nuclear power generation is treated as a legal issue. The development of the Romanian NPP at Cernavoda, practically the core of the National Nuclear Programme, is described in chapter 8. Actually, the originality of the present work consists in the selection and adaptation, from a multitude of mathematical models, of those applicable to the local and specific conditions of the nuclear power plant at Cernavoda. The development of the Romanian economy and power sector, oriented towards reduction of fossil fuel consumption and protection of the environment, most reliably ensured by nuclear power, is discussed in the frame of the world trends of energy production. Various scenarios are
Pursuing the method of multiple working hypotheses for hydrological modeling
Clark, M.P.; Kavetski, D.; Fenicia, F.
2011-01-01
Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
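The collinearity problem with product terms, and the standard mean-centering remedy often discussed in this literature, can be demonstrated numerically. Note that this sketch illustrates centering, which may differ from the specific (non-ridge) transformation the article proposes.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=1000)
z = rng.normal(loc=5.0, scale=1.0, size=1000)

# Raw product term: strongly collinear with its component variable
raw = np.corrcoef(x, x * z)[0, 1]

# Mean-centering before forming the product removes most of the collinearity
xc, zc = x - x.mean(), z - z.mean()
centered = np.corrcoef(xc, xc * zc)[0, 1]
print(round(raw, 2), round(abs(centered), 2))
```

The collinearity in the raw case is an artifact of the non-zero means: for x with mean 2 and z with mean 5, the product x·z is dominated by linear terms in x and z, which centering strips away.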
Methods for modeling Chinese hamster ovary (CHO) cell metabolism
DEFF Research Database (Denmark)
2015-01-01
Embodiments of the present invention generally relate to the computational analysis and characterization of biological networks at the cellular level in Chinese Hamster Ovary (CHO) cells. Based on computational methods utilizing a hamster reference genome, the invention provides methods for identify...
A Comprehensive Method for Comparing Mental Models of Dynamic Systems
Schaffernicht, Martin; Grösser, Stefan N.
2011-01-01
Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...
Improved modeling of clinical data with kernel methods.
Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart
2012-02-01
Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and non-linear classification problems
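One published form of such a range-aware kernel for continuous variables scores each variable's similarity as (r - |x - z|)/r and averages over variables, so that a variable spanning 0-100 has no more influence than one spanning 0-1. The sketch below illustrates that idea only; it is not the authors' exact formulation for all variable types (ordinal, nominal, etc.).

```python
import numpy as np

def clinical_kernel(X):
    """Pairwise kernel matrix: per-variable similarity (r - |x - z|) / r,
    averaged over variables, where r is each variable's observed range."""
    X = np.asarray(X, dtype=float)
    r = X.max(axis=0) - X.min(axis=0)
    r[r == 0] = 1.0  # guard against constant columns
    diffs = np.abs(X[:, None, :] - X[None, :, :])
    return ((r - diffs) / r).mean(axis=2)

# Toy data: age in years vs. tumour size in mm -- very different ranges,
# yet each variable contributes equally to the similarity
X = np.array([[35.0, 10.0],
              [70.0, 12.0],
              [36.0, 55.0]])
K = clinical_kernel(X)
print(np.round(K, 2))
```

With a plain linear kernel, the wider-ranged variable would dominate the inner products; here every entry of K lies in [0, 1] and the diagonal is exactly 1.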
On an Estimation Method for an Alternative Fractionally Cointegrated Model
DEFF Research Database (Denmark)
Carlini, Federico; Łasak, Katarzyna
In this paper we consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than models proposed in Granger (1986) and Johansen (2008, 2009). We discuss the identification issues of the model of Avarucci (2007), following th...
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
Bayesian inference method for stochastic damage accumulation modeling
International Nuclear Information System (INIS)
Jiang, Xiaomo; Yuan, Yong; Liu, Xian
2013-01-01
Damage accumulation based reliability models play an increasingly important role in the successful realization of condition based maintenance for complicated engineering systems. This paper develops a Bayesian framework to establish a stochastic damage accumulation model from historical inspection data, considering data uncertainty. A proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Different from other hazard modeling techniques such as the normal linear regression model, the approach does not require any distribution assumption for the hazard model, and can be applied for a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards models and to estimate model parameters by Bayesian inference with Markov Chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. The Anderson–Darling goodness-of-fit test is employed to perform the normality test, and the Box–Cox transformation approach is utilized to convert the non-normality data into a normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with seepage data collected from real-world subway tunnels.
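The Bayesian-inference-with-MCMC step can be illustrated on the simplest reliability model: exponential failure times with a Gamma prior on the rate, where the Metropolis estimate can be checked against the known conjugate posterior. This is a generic sketch, not the paper's proportional hazards network.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=1 / 2.0, size=100)  # true failure rate 2.0

def log_post(lam, a=1.0, b=1.0):
    """Gamma(a, b) prior times exponential likelihood, on the log scale."""
    if lam <= 0:
        return -np.inf
    return (a - 1 + len(data)) * np.log(lam) - lam * (b + data.sum())

# Random-walk Metropolis sampler for the failure rate
samples, lam = [], 1.0
for _ in range(20000):
    prop = lam + rng.normal(scale=0.3)
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    samples.append(lam)

mcmc_mean = np.mean(samples[5000:])                  # discard burn-in
exact_mean = (1.0 + len(data)) / (1.0 + data.sum())  # conjugate Gamma posterior mean
print(round(mcmc_mean, 2), round(exact_mean, 2))
```

In the conjugate case the sampler is unnecessary, but the same Metropolis loop carries over unchanged to proportional hazards models where no closed-form posterior exists.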
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods
DEFF Research Database (Denmark)
Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin
2013-01-01
Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data are extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was......, on the other hand, lighter than the single-step method....
Modern methods in collisional-radiative modeling of plasmas
2016-01-01
This book provides a compact yet comprehensive overview of recent developments in collisional-radiative (CR) modeling of laboratory and astrophysical plasmas. It describes advances across the entire field, from basic considerations of model completeness to validation and verification of CR models to calculation of plasma kinetic characteristics and spectra in diverse plasmas. Various approaches to CR modeling are presented, together with numerous examples of applications. A number of important topics, such as atomic models for CR modeling, atomic data and its availability and quality, radiation transport, non-Maxwellian effects on plasma emission, ionization potential lowering, and verification and validation of CR models, are thoroughly addressed. Strong emphasis is placed on the most recent developments in the field, such as XFEL spectroscopy. Written by leading international research scientists from a number of key laboratories, the book offers a timely summary of the most recent progress in this area. It ...
Numerical Modelling of the Special Light Source with Novel R-FEM Method
Directory of Open Access Journals (Sweden)
Pavel Fiala
2008-01-01
Full Text Available This paper presents information about new directions in the modelling of lighting systems, and an overview of methods for the modelling of lighting systems. The novel R-FEM method is described, which is a combination of the Radiosity method and the Finite Element Method (FEM). The paper contains modelling results and their verification by experimental measurements and by Matlab simulation for this R-FEM method.
Topic models: A novel method for modeling couple and family text data
Atkins, David C.; Rubin, Tim N.; Steyvers, Mark; Doeden, Michelle A.; Baucom, Brian R.; Christensen, Andrew
2012-01-01
Couple and family researchers often collect open-ended linguistic data – either through free response questionnaire items or transcripts of interviews or therapy sessions. Because participants' responses are not forced into a set number of categories, text-based data can be very rich and revealing of psychological processes. At the same time, such data are highly unstructured and challenging to analyze. Within family psychology, analyzing text data typically means applying a coding system, which can quantify text data but also has several limitations, including the time needed for coding, difficulties with inter-rater reliability, and defining a priori what should be coded. The current article presents an alternative method for analyzing text data called topic models (Steyvers & Griffiths, 2006), which has not yet been applied within couple and family psychology. Topic models have similarities with factor analysis and cluster analysis in that topic models identify underlying clusters of words with semantic similarities (i.e., the “topics”). In the present article, a non-technical introduction to topic models is provided, highlighting how these models can be used for text exploration and indexing (e.g., quickly locating text passages that share semantic meaning) and how output from topic models can be used to predict behavioral codes or other types of outcomes. Throughout the article a collection of transcripts from a large couple therapy trial (Christensen et al., 2004) is used as example data to highlight potential applications. Practical resources for learning more about topic models and how to apply them are discussed. PMID:22888778
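A topic model can be fitted with a compact collapsed Gibbs sampler. The sketch below is a minimal generic LDA implementation on a toy vocabulary, not the software used in the article; on a corpus with two clearly separated themes, the two recovered topics concentrate on the two word groups.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA. docs: list of word-id lists.
    Returns the topic-word count table."""
    rng = np.random.default_rng(seed)
    z = [[int(rng.integers(n_topics)) for _ in d] for d in docs]  # token topics
    ndk = np.zeros((len(docs), n_topics))  # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))    # topic-word counts
    nk = np.zeros(n_topics)                # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]; ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                # full conditional for this token's topic assignment
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                t = int(rng.choice(n_topics, p=p / p.sum()))
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    return nkw

# Toy corpus: two themes over a 4-word vocabulary
docs = [[0, 1, 0, 1, 0, 1, 0, 1]] * 3 + [[2, 3, 2, 3, 2, 3, 2, 3]] * 3
nkw = lda_gibbs(docs, n_topics=2, n_vocab=4)
topic_word = nkw / nkw.sum(axis=1, keepdims=True)
print(np.round(topic_word, 2))
```

Real transcript data would first be tokenized and mapped to word ids; the "topics" reported in the article correspond to the rows of `topic_word`.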
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.
Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L
2016-01-01
The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.
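As a baseline for the propagation models the review discusses, the textbook geometric-spreading transmission loss can be sketched in a few lines. Real E&P exposure assessments use full propagation codes (parabolic-equation, normal-mode, ray-based); this sketch and its source-level value are purely illustrative.

```python
import numpy as np

def transmission_loss(r, mode="spherical"):
    """First-order geometric spreading loss in dB re 1 m.
    Spherical spreading loses 20 dB per decade of range,
    cylindrical spreading 10 dB per decade."""
    r = np.asarray(r, dtype=float)
    if mode == "spherical":
        return 20.0 * np.log10(r)
    return 10.0 * np.log10(r)

source_level = 230.0  # dB re 1 uPa @ 1 m (illustrative airgun-array order of magnitude)
ranges = np.array([10.0, 100.0, 1000.0])
received = source_level - transmission_loss(ranges)
print(received)  # falls 20 dB per decade of range
```

The paper's point about environmental sensitivity is precisely that such closed-form spreading laws can be wrong by tens of dB in real waveguides, which is why exposure estimates need statistical measures of confidence.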
Age replacement models: A summary with new perspectives and methods
International Nuclear Information System (INIS)
Zhao, Xufeng; Al-Khalifa, Khalifa N.; Magid Hamouda, Abdel; Nakagawa, Toshio
2017-01-01
Age replacement models are fundamental to maintenance theory. This paper summarizes our new perspectives and methods in age replacement models: First, we optimize the expected cost rate for a required availability level and vice versa. Second, an asymptotic model with simple calculation is proposed by using the cumulative hazard function skillfully. Third, we challenge the established theory such that preventive replacement should be non-random and only corrective replacement should be made for the unit with exponential failure. Fourth, three replacement policies with random working cycles are discussed, which are called overtime replacement, replacement first, and replacement last, respectively. Fifth, the policies of replacement first and last are formulated with general models. Sixth, age replacement is modified for the situation when the economical life cycle of the unit is a random variable with probability distribution. Finally, models of a parallel system with constant and random number of units are taken into consideration. The models of expected cost rates are obtained and optimal replacement times to minimize them are discussed analytically and computed numerically. Further studies and potential applications are also indicated at the end of discussions of the above models. - Highlights: • Optimization of cost rate for availability level is discussed and vice versa. • Asymptotic and random replacement models are discussed. • Overtime replacement, replacement first and replacement last are surveyed. • Replacement policy with random life cycle is given. • A parallel system with random number of units is modeled.
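The classic expected cost rate underlying these policies, C(T) = [cp·R(T) + cf·F(T)] / ∫₀ᵀ R(t) dt, can be optimized numerically. This is a minimal sketch assuming Weibull lifetimes and illustrative costs, not one of the paper's generalized models.

```python
import numpy as np

def cost_rate(T, shape, scale, cp, cf, n=2000):
    """Expected cost per unit time for age replacement at age T:
    preventive cost cp if the unit survives to T, failure cost cf otherwise,
    divided by the expected cycle length. Weibull lifetimes."""
    t = np.linspace(0.0, T, n)
    R = np.exp(-((t / scale) ** shape))              # survival function
    mean_cycle = float(np.sum((R[1:] + R[:-1]) * 0.5) * (t[1] - t[0]))
    F_T = 1.0 - R[-1]
    return (cp * (1.0 - F_T) + cf * F_T) / mean_cycle

Ts = np.linspace(0.1, 5.0, 200)
costs = [cost_rate(T, shape=2.0, scale=2.0, cp=1.0, cf=10.0) for T in Ts]
T_opt = Ts[int(np.argmin(costs))]
print(round(T_opt, 2))
```

With an increasing hazard rate (Weibull shape > 1) the cost rate has an interior minimum; for exponential failure (shape = 1, constant hazard) it does not, which is the basis of the paper's point that only corrective replacement makes sense for exponentially failing units.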
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the mineral particle separation based on the differences in their surface properties. The attempts toward the development of a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the past century and continue to this day. This paper gives a review of published research activities directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to flotation process modelling, regardless of the model's application in a certain control system. In accordance with contemporary considerations, models were classified as the empirical, probabilistic, kinetic and population balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
Chu, Chunlei; Stoffa, Paul L.
2012-01-01
sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.
2012-01-01
PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator
National Research Council Canada - National Science Library
Russell, Thomas
2000-01-01
New, improved computational methods for modeling of groundwater flow and transport have been formulated and implemented, with the intention of incorporating them as user options into the DoD Ground...
Empirical methods for modeling landscape change, ecosystem services, and biodiversity
David Lewis; Ralph. Alig
2009-01-01
The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...
The Interval Market Model in Mathematical Finance : Game Theoretic Methods
Bernhard, P.; Engwerda, J.C.; Roorda, B.; Schumacher, J.M.; Kolokoltsov, V.; Saint-Pierre, P.; Aubin, J.P.
2013-01-01
Toward the late 1990s, several research groups independently began developing new, related theories in mathematical finance. These theories did away with the standard stochastic geometric diffusion “Samuelson” market model (also known as the Black-Scholes model because it is used in that most famous
Involving stakeholders in building integrated fisheries models using Bayesian methods
DEFF Research Database (Denmark)
Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari
2013-01-01
the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...
Compositions and methods for modeling Saccharomyces cerevisiae metabolism
DEFF Research Database (Denmark)
2012-01-01
The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions, and comma...
An Instructional Method for the AutoCAD Modeling Environment.
Mohler, James L.
1997-01-01
Presents a command organizer for AutoCAD to aid new users in operating within the 3-D modeling environment. Addresses analyzing the problem, visualization skills, nonlinear tools, a static view of a dynamic model, the AutoCAD organizer, environment attributes, and control of the environment. Contains 11 references. (JRH)
Decision support for natural resource management; models and evaluation methods
Wessels, J.; Makowski, M.; Nakayama, H.
2001-01-01
When managing natural resources or agrobusinesses, one always has to deal with autonomous processes. These autonomous processes play a core role in designing model-based decision support systems. This chapter tries to give insight into the question of which types of models might be used in which
An improved cellular automaton method to model multispecies biofilms.
Tang, Youneng; Valocchi, Albert J
2013-10-01
Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilms introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and from those of the more computationally intensive continuous method. To overcome these problems, we propose new biomass-spreading rules in this work: excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
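The proposed spreading rule can be illustrated in one dimension: biomass exceeding a cell's capacity displaces the line of cells toward the nearest free cell rather than mixing into neighbours. This sketch is a 1-D, single-species simplification of the authors' 2-D multispecies rule.

```python
def spread_excess(row, src, cap):
    """Push biomass above `cap` out of grid cell `src` by shifting the
    line of cells toward the nearest empty cell (1-D illustration).
    Total biomass is conserved."""
    row = list(row)
    if row[src] <= cap:
        return row
    excess = row[src] - cap
    empties = [i for i, v in enumerate(row) if v == 0]
    dest = min(empties, key=lambda i: abs(i - src))  # nearest free cell
    step = 1 if dest > src else -1
    # shift every cell strictly between src and dest one position toward dest
    for i in range(dest, src + step, -step):
        row[i] = row[i - step]
    row[src + step] = excess  # the displaced excess lands next to the source
    row[src] = cap
    return row

before = [0, 0, 3, 5, 2, 0, 0]  # cell 3 exceeds capacity 4
after = spread_excess(before, src=3, cap=4)
print(after)
```

Because whole cells are shifted rather than averaged, species composition within each pushed cell is preserved, which is the mechanism that avoids the artificial mixing of the older rules.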
Assessing numerical methods used in nuclear aerosol transport models
International Nuclear Information System (INIS)
McDonald, B.H.
1987-01-01
Several computer codes are in use for predicting the behaviour of nuclear aerosols released into containment during postulated accidents in water-cooled reactors. Each of these codes uses numerical methods to discretize and integrate the equations that govern the aerosol transport process. Computers perform only algebraic operations and generate only numbers. It is in the numerical methods that sense can be made of these numbers and where they can be related to the actual solution of the equations. In this report, the numerical methods most commonly used in the aerosol transport codes are examined as special cases of a general solution procedure, the Method of Weighted Residuals. It would appear that the numerical methods used in the codes are all capable of producing reasonable answers to the mathematical problem when used with skill and care. 27 refs
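The Method of Weighted Residuals can be made concrete on a toy problem: a Galerkin sine expansion for -u'' = 1 on (0,1) with u(0) = u(1) = 0, whose exact solution u = x(1-x)/2 provides a check. This is a generic illustration, not taken from the aerosol transport codes reviewed in the report.

```python
import numpy as np

def trap(f, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(x)))

def galerkin_poisson(n_modes=9, n_quad=2001):
    """Galerkin solution of -u'' = 1, u(0)=u(1)=0, with sine trial functions.
    Requiring the residual to be orthogonal to each basis function
    (the weighted-residual condition) gives the coefficients."""
    x = np.linspace(0.0, 1.0, n_quad)
    u = np.zeros_like(x)
    for n in range(1, n_modes + 1):
        phi = np.sin(n * np.pi * x)
        load = trap(phi, x)                            # <f, phi_n>, f = 1
        stiff = (n * np.pi) ** 2 * trap(phi * phi, x)  # <phi_n', phi_n'>
        u += (load / stiff) * phi
    return x, u

x, u = galerkin_poisson()
exact = x * (1.0 - x) / 2.0
print(f"max error with 9 modes: {np.abs(u - exact).max():.1e}")
```

Collocation, least-squares, and Galerkin schemes differ only in the choice of weighting functions, which is why the report can treat the aerosol codes' numerical methods as special cases of this one framework.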
New Methods for Kinematic Modelling and Calibration of Robots
DEFF Research Database (Denmark)
Søe-Knudsen, Rune
2014-01-01
Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, are also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and it can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment, by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring...
Sparse QSAR modelling methods for therapeutic and regenerative medicine
Winkler, David A.
2018-02-01
The quantitative structure-activity relationships method was popularized by Hansch and Fujita over 50 years ago. The usefulness of the method for drug design and development has been shown in the intervening years. As it was developed initially to elucidate which molecular properties modulated the relative potency of putative agrochemicals, and at a time when computing resources were scarce, there is much scope for applying modern mathematical methods to improve the QSAR method and to extend the general concept to the discovery and optimization of bioactive molecules and materials more broadly. I describe research over the past two decades where we have rebuilt the unit operations of the QSAR method using improved mathematical techniques, and have applied this valuable platform technology to new important areas of research and industry such as nanoscience, omics technologies, advanced materials, and regenerative medicine. This paper was presented as the 2017 ACS Herman Skolnik lecture.
CAD ACTIVE MODELS: AN INNOVATIVE METHOD IN ASSEMBLY ENVIRONMENT
Directory of Open Access Journals (Sweden)
NADDEO Alessandro
2010-07-01
Full Text Available The aim of this work is to show the use and the versatility of active models in different applications. An active model of a cylindrical spring has been realized and applied in two mechanisms, which differ in typology and in backlash loads. The first example is a dynamometer in which the cylindrical spring is loaded by traction forces, while the second example is made up of a pressure valve in which the cylindrical-conic spring works under compression. The imposition of the loads in both cases has allowed us to evaluate the model of the mechanism in different working conditions, also in the assembly environment.
Modelling and simulation of diffusive processes methods and applications
Basu, SK
2014-01-01
This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport
Congestion cost allocation method in a pool model
International Nuclear Information System (INIS)
Jung, H.S.; Hur, D.; Park, J.K.
2003-01-01
The congestion cost caused by transmission capacities and voltage limits is an important issue in a competitive electricity market. To allocate the congestion cost equitably, the active constraints in a constrained dispatch and the sequence of these constraints should be considered. A multi-stage method is proposed which reflects the effects of both the active constraints and their sequence. In the multi-stage method, the types of congestion are analysed in order to consider the sequence, and the relationship between congestion and the active constraints is derived mathematically. The case study shows that the proposed method can give more accurate and equitable signals to customers. (Author)
Model independent method to deconvolve hard X-ray spectra
Energy Technology Data Exchange (ETDEWEB)
Polcaro, V.F.; Bazzano, A.; Ubertini, P.; La Padula, C. (Consiglio Nazionale delle Ricerche, Frascati (Italy). Lab. di Astrofisica Spaziale); Manchanda, R.K. (Tata Inst. of Fundamental Research, Bombay (India))
1984-07-01
A general purpose method to deconvolve the energy spectra detected by means of a hard X-ray telescope is described. The procedure does not assume any form of input spectrum: the observed energy loss spectrum is directly deconvolved into the incident photon spectrum, the form of which can be determined independently of physical interpretation of the data. Deconvolution of the hard X-ray spectrum of Her X-1, detected during the HXR 81M experiment, by this model independent method is presented.
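The core idea, direct inversion of the instrument response without an assumed source spectrum, can be sketched in a few lines. The response matrix and count values below are hypothetical, not the HXR 81M calibration; the example only illustrates the unfolding step.

```python
# Sketch: model-independent spectral unfolding (hypothetical 3-bin case).
# R[i][j] is the probability that a photon in incident-energy bin j is
# recorded in energy-loss bin i. With R known from calibration, the
# incident photon spectrum follows from solving R @ photons = counts,
# with no assumed spectral shape. R is upper triangular here because
# photons can only lose energy, so back-substitution suffices.

def unfold(R, counts):
    """Solve R @ x = counts for an upper-triangular response matrix."""
    n = len(counts)
    x = [0.0] * n
    for i in reversed(range(n)):
        spill = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (counts[i] - spill) / R[i][i]
    return x

# Hypothetical response: 0.8 full-energy efficiency, 0.1 spill-down.
R = [[0.8, 0.1, 0.1],
     [0.0, 0.8, 0.1],
     [0.0, 0.0, 0.8]]
true_photons = [100.0, 50.0, 20.0]
counts = [sum(R[i][j] * true_photons[j] for j in range(3)) for i in range(3)]
recovered = unfold(R, counts)   # recovers [100.0, 50.0, 20.0]
```

In practice the response matrix is larger and the counts are noisy, so regularized inversion replaces plain back-substitution, but the principle is the same.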
Semigroup Method on a MX/G/1 Queueing Model
Directory of Open Access Journals (Sweden)
Alim Mijit
2013-01-01
Full Text Available By using the Hille-Yosida theorem, Phillips theorem, and Fattorini theorem in functional analysis, we prove that the MX/G/1 queueing model with vacation times has a unique nonnegative time-dependent solution.
Modeling the Performance of the Fast Multipole Method on HPC platforms
Ibeid, Huda
2012-01-01
In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. Our ultimate aim of this thesis
A public health decision support system model using reasoning methods.
Mera, Maritza; González, Carolina; Blobel, Bernd
2015-01-01
Public health programs must be based on the real health needs of the population. However, the design of efficient and effective public health programs is subject to the availability of information that allows users to identify, at the right time, the health issues that require special attention. The objective of this paper is to propose a case-based reasoning model for the support of decision-making in public health. The model integrates a decision-making process and case-based reasoning, reusing past experiences for promptly identifying new population health priorities. A prototype implementation of the model was performed, deploying the case-based reasoning framework jColibri. The proposed model contributes to solving problems found today when designing public health programs in Colombia. Current programs are developed under uncertain environments, as the underlying analyses are carried out on the basis of outdated and unreliable data.
Adaptive Maneuvering Frequency Method of Current Statistical Model
Institute of Scientific and Technical Information of China (English)
Wei Sun; Yongjian Yang
2017-01-01
The current statistical model (CSM) performs well in maneuvering target tracking. However, a fixed maneuvering frequency deteriorates the tracking results when a Kalman filter (KF) algorithm is used, causing serious dynamic delay, slow convergence speed, and limited precision. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in tracking step-maneuvering targets, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is also proposed, in which the prediction values are amended in time by setting judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical guiding significance.
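For readers unfamiliar with the baseline being modified, a minimal constant-velocity Kalman filter in one dimension is sketched below. This is a generic linear KF, not the authors' adaptive CSM or fading filter, and all parameter values are illustrative.

```python
# Minimal 1D constant-velocity Kalman filter (illustrative baseline,
# not the adaptive CSM of the paper). State is [position, velocity];
# only position is measured (H = [1, 0]).

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    x = [measurements[0], 0.0]            # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial covariance
    estimates = []
    for z in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]].
        xp = [x[0] + dt * x[1], x[1]]
        Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1],
               P[1][1] + q]]
        # Update with position measurement z.
        S = Pp[0][0] + r                  # innovation covariance
        K = [Pp[0][0] / S, Pp[1][0] / S]  # Kalman gain
        y = z - xp[0]                     # innovation
        x = [xp[0] + K[0] * y, xp[1] + K[1] * y]
        P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
        estimates.append(x[0])
    return estimates

measurements = [float(k) for k in range(20)]  # noise-free linear track
estimates = kalman_track(measurements)
```

The fixed process noise `q` plays the role of the fixed maneuvering frequency: when the target maneuvers, a fixed value causes the delayed, slowly converging behavior the paper sets out to fix.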
On beam propagation methods for modelling in integrated optics
Hoekstra, Hugo
1997-01-01
In this paper the main features of the Fourier transform and finite difference beam propagation methods are summarized. Limitations and improvements, related to the paraxial approximation, finite differencing and tilted structures are discussed.
Review of methods for modelling forest fire risk and hazard
African Journals Online (AJOL)
user
-Leal et al., 2006). Stolle and Lambin (2003) noted that flammable fuel depends on ... advantages over conventional fire detection and fire monitoring methods because of its repetitive and consistent coverage over large areas of land (Martin et ...
A numerical method for eigenvalue problems in modeling liquid crystals
Energy Technology Data Exchange (ETDEWEB)
Baglama, J.; Farrell, P.A.; Reichel, L.; Ruttan, A. [Kent State Univ., OH (United States); Calvetti, D. [Stevens Inst. of Technology, Hoboken, NJ (United States)
1996-12-31
Equilibrium configurations of liquid crystals in finite containments are minimizers of the thermodynamic free energy of the system. It is important to be able to track the equilibrium configurations as the temperature of the liquid crystals decreases. The path of the minimal energy configuration at bifurcation points can be computed from the null space of a large sparse symmetric matrix. We describe a new variant of the implicitly restarted Lanczos method that is well suited for the computation of extreme eigenvalues of a large sparse symmetric matrix, and we use this method to determine the desired null space. Our implicitly restarted Lanczos method adaptively determines a polynomial filter by using Leja shifts, and does not require factorization of the matrix. The storage requirement of the method is small, and this makes it attractive to use for the present application.
Diffusion models in metamorphic thermochronology: philosophy and methods
International Nuclear Information System (INIS)
Munha, Jose Manuel; Tassinari, Colombo Celso Gaeta
1999-01-01
Understanding the kinetics of diffusion is of major importance for the interpretation of isotopic ages in metamorphic rocks. This paper provides a review of the concepts and methodologies involved in the various diffusion models that can be applied to radiogenic systems in cooling rocks. The central concept of closure temperature is critically discussed, and quantitative estimates for the various diffusion models are evaluated in order to illustrate the controlling factors and the limits of their practical application. (author)
Study on geological environment model using geostatistics method
International Nuclear Information System (INIS)
Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya
2005-03-01
The purpose of this study is to develop a geostatistical procedure for modeling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006, and this report covers the research in FY2005, the second year. In FY2005, the hydrogeological model was built, as in FY2004, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, corresponding to the investigation stage in each fiscal year. Furthermore, a statistical test was applied to detect differences in the basic statistics of various data due to geological features, with a view to incorporating the geological information into the modeling procedures. (author)
A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains
Directory of Open Access Journals (Sweden)
Kemin Wang
2014-01-01
Full Text Available Model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model, so our method works with a truncated model of the infinite one. To obtain a truncated model sufficient for model checking of system properties expressed in Continuous Stochastic Logic, we propose a multistep extending truncation method for model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.
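The extension loop can be illustrated on an infinite-state chain whose steady state is known in closed form. The sketch below uses an M/M/1 queue (pi_k = (1 - rho) * rho**k) purely as an analogy: the truncation point grows in steps until the captured probability mass suffices; the actual method applies this during CTMC model construction inside INFAMY.

```python
# Sketch of multistep extending truncation on an M/M/1 queue, an
# infinite-state CTMC with known steady state pi_k = (1-rho)*rho**k.
# The truncated state space {0, ..., n-1} is extended in steps until
# the tail mass left outside is below a tolerance eps.

def truncate_until(rho, eps, step=10):
    """Grow the truncated state space until tail mass < eps."""
    n = step
    while True:
        mass = sum((1 - rho) * rho**k for k in range(n))
        if 1.0 - mass < eps:
            return n, mass
        n += step                 # multistep extension

n, mass = truncate_until(rho=0.8, eps=1e-6)   # n = 70 states suffice
```

For a real CTMC the steady state is unknown, so each extension step requires re-solving the truncated model, which is why a good extension strategy matters.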
Theory, Solution Methods, and Implementation of the HERMES Model
Energy Technology Data Exchange (ETDEWEB)
Reaugh, John E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Bradley W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Curtis, John P. [Atomic Weapons Establishment (AWE), Reading, Berkshire (United Kingdom); Univ. College London (UCL), Gower Street, London (United Kingdom); Springer, H. Keo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-07-13
The HERMES (high explosive response to mechanical stimulus) model was developed over the past decade to enable computer simulation of the mechanical and subsequent energetic response of explosives and propellants to mechanical insults such as impacts, perforations, drops, and falls. The model is embedded in computer simulation programs that solve the non-linear, large deformation equations of compressible solid and fluid flow in space and time. It is implemented as a user-defined model, which returns the updated stress tensor and composition that result from the simulation supplied strain tensor change. Although it is multi-phase, in that gas and solid species are present, it is single-velocity, in that the gas does not flow through the porous solid. More than 70 time-dependent variables are made available for additional analyses and plotting. The model encompasses a broad range of possible responses: mechanical damage with no energetic response, and a continuous spectrum of degrees of violence including delayed and prompt detonation. This paper describes the basic workings of the model.
Modeling Multi-commodity Trade Information Exchange Methods
Traczyk, Tomasz
2012-01-01
Market mechanisms are entering new fields of the economy, in which some constraints of the physical world, e.g. Kirchhoff's law in the power grid, must be taken into account during trading. On such markets, some commodities, like telecommunication bandwidth or electrical energy, are non-storable and must be exchanged in real time. On the other hand, the markets tend to react in the shortest possible time, so the idea of delegating some competency to autonomous software agents is very attractive. The multi-commodity mechanism addresses the aforementioned requirements. Modeling the relationships between the commodities makes it possible to formulate new, more sophisticated models and mechanisms, which reflect decision situations in a better manner. Application of the multi-commodity approach requires solving several issues related to data modeling, communication, semantic aspects of communication, reliability, etc. This book answers some of the questions and points out promising paths for implementation and development. Presented s...
A method of shadow puppet figure modeling and animation
Institute of Scientific and Technical Information of China (English)
Xiao-fang HUANG; Shou-qian SUN; Ke-jun ZHANG; Tian-ning XU; Jian-feng WU; Bin ZHU
2015-01-01
To promote the development of shadow play, an intangible cultural heritage of the world, many studies have focused on shadow puppet modeling and interaction. Most shadow puppet figures are still imaginary, handed down from the ancients, or carved and painted by shadow puppet artists, without consideration of the real dimensions or appearance of human bodies. This study proposes an algorithm to transform 3D human models into 2D puppet figures for shadow puppets, including automatic location of feature points, automatic segmentation of 3D models, automatic extraction of 2D contours, automatic clothes matching, and animation. Experiments prove that more realistic and attractive figures and animations of the shadow puppet can be generated in real time with this algorithm.
Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis.
Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila
2017-09-27
Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In the light of the necessity to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximization approach and variational Bayes inference are proposed. Theoretical derivations of the posterior estimates of model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.
Modelling methods for co-fired pulverised fuel furnaces
Energy Technology Data Exchange (ETDEWEB)
L. Ma; M. Gharebaghi; R. Porter; M. Pourkashanian; J.M. Jones; A. Williams [University of Leeds, Leeds (United Kingdom). Energy and Resources Research Institute
2009-12-15
Co-firing of biomass and coal can be beneficial in reducing the carbon footprint of energy production. Accurate modelling of co-fired furnaces is essential to discover potential problems that may occur during biomass firing and to mitigate potential negative effects of biomass fuels, including lower efficiency due to lower burnout and NOx formation issues. Existing coal combustion models should be modified to increase reliability of predictions for biomass, including factors such as increased drag due to non-spherical particle sizes and accounting for organic compounds and the effects they have on NOx emission. Detailed biomass co-firing models have been developed and tested for a range of biomass fuels and show promising results. 32 refs., 4 figs., 3 tabs.
Tools and Methods for RTCP-Nets Modeling and Verification
Directory of Open Access Journals (Sweden)
Szpyrka Marcin
2016-09-01
Full Text Available RTCP-nets are high level Petri nets similar to timed colored Petri nets, but with a different time model and some structural restrictions. The paper deals with practical aspects of using RTCP-nets for modeling and verification of real-time systems. It contains a survey of software tools developed to support RTCP-nets. Verification of RTCP-nets is based on coverability graphs, which represent the set of reachable states in the form of a directed graph. Two approaches to verification of RTCP-nets are considered in the paper. The former is oriented towards states and is based on translation of a coverability graph into a nuXmv (NuSMV) finite state model. The latter approach is oriented towards transitions and uses the CADP toolkit to check whether requirements given as μ-calculus formulae hold for a given coverability graph. All presented concepts are discussed using illustrative examples.
Stochastic fractional differential equations: Modeling, method and analysis
International Nuclear Information System (INIS)
Pedjeu, Jean-C.; Ladde, Gangaram S.
2012-01-01
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model described by a system of multi-time scale stochastic differential equations is formulated. The classical Picard–Lindelöf successive approximation scheme is applied to the model validation problem, namely, the existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time scale stochastic differential equations of Itô–Doob type. Finally, to illustrate the scope of the ideas and the presented results, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are outlined.
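As a concrete illustration of integrating an Itô-type stochastic differential equation numerically, here is a standard Euler–Maruyama sketch for single-scale geometric Brownian motion. The multi-time scale structure and the Picard–Lindelöf argument of the paper are not reproduced, and all parameter values are illustrative.

```python
# Euler-Maruyama discretization of the Ito SDE dX = mu*X dt + sigma*X dW
# (geometric Brownian motion). This single-scale sketch only shows how
# such equations are integrated when no closed-form solution is at hand.
import math
import random

def euler_maruyama(x0, mu, sigma, T, n_steps, rng):
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += mu * x * dt + sigma * x * dW
    return x

rng = random.Random(42)
paths = [euler_maruyama(1.0, 0.05, 0.2, 1.0, 100, rng) for _ in range(2000)]
mean_T = sum(paths) / len(paths)   # approaches E[X_T] = exp(mu*T)
```

A multi-time scale system would drive each equation with increments on its own scale, but the per-step update has the same shape.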
An immersed boundary method for modeling a dirty geometry data
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating an incompressible high-Re flow around highly complex geometries. The method is achieved by dispersing the momentum through an axial linear projection and by an approximate domain assumption that satisfies mass conservation around the wall-including cells. This methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate the problem of flow around a rotating object and demonstrate the ability of this methodology on moving-geometry problems. This methodology shows promise as a method for obtaining quick solutions on a next-generation large-scale supercomputer. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
Methods of mathematical modeling using polynomials of algebra of sets
Kazanskiy, Alexandr; Kochetkov, Ivan
2018-03-01
The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones which must be observed. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
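The zone formalism maps directly onto set operations. The sketch below uses hypothetical zones and cell numbers, not the article's building data; it only shows how overlap, coverage, and priority questions become evaluable set-algebra formulas.

```python
# Sketch: observation zones as sets (hypothetical data). Each zone is
# the set of territory cells it covers; overlap and coverage questions
# are then plain set-algebra expressions.

zone_a = {1, 2, 3, 4}          # priority 2
zone_b = {3, 4, 5, 6}          # priority 1 (higher)
territory = set(range(1, 9))   # cells 1..8

overlap = zone_a & zone_b                    # cells watched by both
uncovered = territory - (zone_a | zone_b)    # cells no zone observes

# In overlapping cells, the higher-priority zone takes precedence.
effective_b = zone_b
effective_a = zone_a - zone_b
```

Because these expressions are ordinary programs, the same formulas can be re-evaluated as zones are redrawn, which is the article's point about working with the models on a computer.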
Toric Methods in F-Theory Model Building
Directory of Open Access Journals (Sweden)
Johanna Knapp
2011-01-01
Full Text Available We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.
Modelling of packet traffic with matrix analytic methods
DEFF Research Database (Denmark)
Andersen, Allan T.
1995-01-01
... process. A heuristic formula for the tail behaviour of a single server queue fed by a superposition of renewal processes has been evaluated. The evaluation was performed by applying Matrix Analytic methods. The heuristic formula has applications in the Call Admission Control (CAC) procedure of the future... BISDN network. The heuristic formula did not seem to yield substantially better results than already available approximations. Finally, some results for the finite capacity BMAP/G/1 queue have been obtained. The steady state probability vector of the embedded chain is found by a direct method where...
Tramp Ship Routing and Scheduling - Models, Methods and Opportunities
DEFF Research Database (Denmark)
Vilhelmsen, Charlotte; Larsen, Jesper; Lusby, Richard Martin
of their demand in advance. However, the detailed requirements of these contract cargoes can be subject to ongoing changes, e.g. the destination port can be altered. For tramp operators, a main concern is therefore the efficient and continuous planning of routes and schedules for the individual ships. Due... and scheduling problem, focus should now be on extending this basic problem to include additional real-world complexities and develop suitable solution methods for those extensions. Such extensions will enable more tramp operators to benefit from the solution methods while simultaneously creating new...
A method to couple HEM and HRM two-phase flow models
Energy Technology Data Exchange (ETDEWEB)
Herard, J.M.; Hurisse, O. [Elect France, Div Rech and Dev, Dept Mecan Fluides Energies and Environm, F-78401 Chatou (France); Hurisse, O. [Univ Aix Marseille 1, Ctr Math and Informat, Lab Anal Topol and Probabil, CNRS, UMR 6632, F-13453 Marseille 13 (France); Ambroso, A. [CEA Saclay, DEN, DM2S, SFME, LETR, 91 - Gif sur Yvette (France)
2009-04-15
We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires the introduction of an interface model. Many numerical test cases make it possible to investigate the stability of the coupling method. (authors)
Emissions Models and Other Methods to Produce Emission Inventories
An emissions inventory is a summary or forecast of the emissions produced by a group of sources in a given time period. Inventories of air pollution from mobile sources are often produced by models such as the MOtor Vehicle Emission Simulator (MOVES).
Aligning building information model tools and construction management methods
Hartmann, Timo; van Meerveld, H.J.; Vossebeld, N.; Adriaanse, Adriaan Maria
2012-01-01
Few empirical studies exist that can explain how different Building Information Model (BIM) based tool implementation strategies work in practical contexts. To help overcome this gap, this paper describes the implementation of two BIM based tools, the first, to support the activities at an
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Interconnected hydro-thermal systems - Models, methods, and applications
DEFF Research Database (Denmark)
Hindsberger, Magnus
2003-01-01
to be performed where the uncertainty of the inflow to the hydro reservoirs is handled endogenously. In this model snow reservoirs have been added in addition to the hydro reservoirs. Using this new approach allows sampling based decomposition algorithms to be used, which have proved to be efficient in solving...
Analysis of spin and gauge models with variational methods
International Nuclear Information System (INIS)
Dagotto, E.; Masperi, L.; Moreo, A.; Della Selva, A.; Fiore, R.
1985-01-01
Since independent-site (link) or independent-link (plaquette) variational states enhance the order or the disorder, respectively, in the treatment of spin (gauge) models, we prove that mixed states are able to improve the critical coupling while giving the qualitatively correct behavior of the relevant parameters
A Parameter Estimation Method for Dynamic Computational Cognitive Models
Thilakarathne, D.J.
2015-01-01
A dynamic computational cognitive model can be used to explore a selected complex cognitive phenomenon by providing some features or patterns over time. More specifically, it can be used to simulate, analyse and explain the behaviour of such a cognitive phenomenon. It generates output data in the
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
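When the number of sources matches the isotope data, the contributions solve exactly from mass balance. The sketch below shows the simplest two-source, one-isotope case with hypothetical δ15N values; the "too many sources" problem named above arises precisely when sources outnumber what the isotope signatures can determine.

```python
# Sketch: two-source, one-isotope mixing model (hypothetical delta
# values, not from any study). The mixture signature is a mass-balance
# average of the source signatures, so two unknown fractions solve
# exactly from one isotope equation plus f1 + f2 = 1.

def two_source_mix(d_source1, d_source2, d_mixture):
    """Return fractional contributions (f1, f2) with f1 + f2 = 1."""
    f1 = (d_mixture - d_source2) / (d_source1 - d_source2)
    return f1, 1.0 - f1

# Hypothetical d15N values: plant source -2.0, animal source +6.0,
# consumer mixture +4.0.
f1, f2 = two_source_mix(-2.0, 6.0, 4.0)   # f1 = 0.25, f2 = 0.75
```

With more sources than isotope equations the system is underdetermined, which is why the alternative combination methods discussed above are needed.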
Numerical modeling of isothermal compositional grading by convex splitting methods
Li, Yiteng
2017-04-09
In this paper, an isothermal compositional grading process is simulated based on convex splitting methods with the Peng-Robinson equation of state. We first present a new form of the gravity/chemical equilibrium condition by minimizing the total energy, which consists of the Helmholtz free energy and the gravitational potential energy, and incorporating Lagrange multipliers for mass conservation. The time-independent equilibrium equations are transformed into a system of transient equations as our solution strategy. It is proved that our time-marching scheme is unconditionally energy stable by the semi-implicit convex splitting method, in which the convex part of the Helmholtz free energy and its derivative are treated implicitly and the concave parts are treated explicitly. With a relaxation factor controlling the Newton iteration, our method is able to converge to a solution with satisfactory accuracy if a good initial estimate of the mole compositions is provided. More importantly, it helps us automatically split the unstable single phase into two phases, determine the existence of a gas-oil contact (GOC), and locate its position if a GOC does exist. A number of numerical examples are presented to show the performance of our method.
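The unconditional energy stability of convex splitting can be demonstrated on a toy energy. The sketch below applies the same implicit-convex/explicit-concave idea to the scalar double well F(x) = x**4/4 - x**2/2, not to the Peng-Robinson Helmholtz energy of the paper; the step size and starting point are arbitrary.

```python
# Sketch: semi-implicit convex splitting for the gradient flow of the
# double-well energy F(x) = x**4/4 - x**2/2. The convex part x**4/4 is
# treated implicitly and the concave part -x**2/2 explicitly, giving
# the update y + dt*y**3 = x + dt*x, which decreases F for any dt.

def convex_splitting_step(x, dt):
    # Solve y + dt*y**3 = x + dt*x (implicit convex part) by Newton.
    y = x
    for _ in range(50):
        g = y + dt * y**3 - x - dt * x
        y -= g / (1.0 + 3.0 * dt * y * y)
    return y

def energy(x):
    return 0.25 * x**4 - 0.5 * x**2

x, dt = 2.0, 10.0              # deliberately large time step
history = [energy(x)]
for _ in range(20):
    x = convex_splitting_step(x, dt)
    history.append(energy(x))
# x converges to the energy minimizer x = 1, with F monotone decreasing.
```

The same splitting of a nonconvex energy into implicit-convex and explicit-concave parts is what makes the paper's time-marching scheme stable regardless of the step size.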
A Survey of Procedural Methods for Terrain Modelling
Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.
2009-01-01
Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper
Disruption Management in the Airline Industry - Concepts, Models and Methods
DEFF Research Database (Denmark)
Clausen, Jens; Larsen, Allan; Larsen, Jesper
2005-01-01
the reality faced in operations control and the decision support offered by the commercial IT systems targeting the recovery process. Though substantial achievements have been made with respect to solution methods, and hardware has become much more powerful, even the most advanced prototype systems...
Models and methods can theory meet the B physics challenge?
Nierste, U
2004-01-01
The B physics experiments of the next generation, BTeV and LHCb, will perform measurements with an unprecedented accuracy. Theory predictions must control hadronic uncertainties with the same precision to extract the desired short-distance information successfully. I argue that this is indeed possible, discuss those theoretical methods in which hadronic uncertainties are under control and list hadronically clean observables.
Model films of cellulose. I. Method development and initial results
Gunnars, S.; Wågberg, L.; Cohen Stuart, M.A.
2002-01-01
This report presents a new method for the preparation of thin cellulose films. NMMO (N-methylmorpholine-N-oxide) was used to dissolve cellulose, and addition of DMSO (dimethyl sulfoxide) was used to control the viscosity of the cellulose solution. A thin layer of the cellulose solution is spin-coated
Mapping research questions about translation to methods, measures, and models
Berninger, V.; Rijlaarsdam, G.; Fayol, M.L.; Fayol, M.; Alamargot, D.; Berninger, V.W.
2012-01-01
About the book: Translation of cognitive representations into written language is one of the most important processes in writing. This volume provides a long-awaited updated overview of the field. The contributors discuss each of the commonly used research methods for studying translation; theorize
Analysis and Modeling of Boundary Layer Separation Method (BLSM).
Pethő, Dóra; Horváth, Géza; Liszi, János; Tóth, Imre; Paor, Dávid
2010-09-01
Nowadays, environmental protection rules strictly regulate the emission of polluting materials into the environment. To comply with environmental protection laws, recycling is one of the useful methods of waste material treatment. We have developed a new method for the treatment of industrial waste water, named the boundary layer separation method (BLSM). We exploit the phenomenon that ions can be enriched in the boundary layer at an electrically charged electrode surface compared with the bulk liquid phase. The main point of the method is that, at a correctly chosen movement velocity, the boundary layer can be taken out of the waste water without being damaged, and the ion-enriched boundary layer can be recycled. Electrosorption is a surface phenomenon; it can be used with high efficiency when the electrodes have a large electrochemically active surface. During our research work, two high-surface-area nickel electrodes were prepared, and the value of their electrochemically active surface area was estimated. The existence of the diffusion part of the double layer was experimentally confirmed, and the electrical double layer capacity was determined. Ion transport by boundary layer separation has been introduced. Finally, we have tried to estimate the relative significance of physical adsorption and electrosorption.
Model-murderers. Afterthoughts on the Goldhagen method and history
Lorenz, C.F.G.
2002-01-01
This article analyses the theoretical and methodological structure of Goldhagen's book Hitler's Willing Executioners. It argues that the paradoxical success of HWE can better be understood when its paradoxical implicit theory and method are understood. Although Goldhagen claims that HWE embodies an
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
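The POD response curve described above treats detection probability as a continuous function of concentration. A minimal sketch of such a curve is a logistic model in log concentration, fitted by maximum likelihood; the model form, the grid-search fitting, and the data below are illustrative assumptions, not the validated procedure from the study.

```python
import math

def pod(c, a, b):
    """Probability of detection at concentration c under a log-logistic model."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(c))))

def fit_pod(data):
    """Grid-search maximum likelihood for (a, b) over detect/no-detect data.

    data: list of (concentration, n_trials, n_detected) tuples (hypothetical)."""
    best, best_ll = None, float("-inf")
    for a10 in range(-60, 61):
        for b10 in range(1, 61):          # b > 0: POD increases with concentration
            a, b = a10 / 10.0, b10 / 10.0
            ll = 0.0
            for c, n, k in data:
                p = min(max(pod(c, a, b), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (a, b)
    return best

# Hypothetical collaborative-study data: (concentration, trials, detections)
data = [(0.1, 12, 1), (0.5, 12, 5), (1.0, 12, 9), (5.0, 12, 12)]
a, b = fit_pod(data)
curve = [pod(c, a, b) for c in (0.1, 0.5, 1.0, 5.0)]
```

The fitted curve can then be plotted for candidate and reference methods and compared graphically, as the abstract describes.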
A coupled DEM-CFD method for impulse wave modelling
Zhao, Tao; Utili, Stefano; Crosta, GiovanBattista
2015-04-01
Rockslides can be characterized by a rapid evolution, up to a possible transition into a rock avalanche, which can be associated with an almost instantaneous collapse and spreading. Different examples are available in the literature, but the Vajont rockslide is quite unique for its morphological and geological characteristics, as well as for the type of evolution and the availability of long-term monitoring data. This study advocates the use of a DEM-CFD framework for modelling the generation of hydrodynamic waves due to the impact of a rapidly moving rockslide or rock-debris avalanche. 3D DEM analyses in plane strain by a coupled DEM-CFD code were performed to simulate the rockslide from its onset to the impact with still water and the subsequent wave generation (Zhao et al., 2014). The physical response predicted is in broad agreement with the available observations. The numerical results are compared to those published in the literature and especially to Crosta et al. (2014). According to our results, the maximum computed run-up amounts to ca. 120 m and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 m and 190 m, respectively). In these simulations, the slope mass is considered permeable, such that the toe region of the slope can move submerged in the reservoir and the impulse water wave can also flow back into the slope mass. However, the upscaling of the grain size in the DEM model leads to an unrealistically high hydraulic conductivity of the model, such that only a small amount of water is splashed onto the northern bank of the Vajont valley. The use of a high fluid viscosity and a coarse-grain model has shown the possibility of modelling both the slope and wave motions more realistically. However, more detailed slope and fluid properties, and the need for computational efficiency, should be considered in future research work. This aspect has also been
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models, incorporating different sources of information into a single analysis through Bayes' theorem. However, none of these applications can well treat the uncertainty in extreme flows of hydrological models' simulations. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian method: the AR(1) plus Normal and time-period-independent model (Model 1), the AR(1) plus Normal and time-period-dependent model (Model 2) and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
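The Metropolis-Hastings machinery underlying such an assessment can be sketched for a one-parameter toy model; the Gaussian likelihood, prior, and synthetic data below are placeholders, not the WASMOD setup used in the study.

```python
import math
import random

def log_post(theta, data, sigma=1.0, prior_mu=0.0, prior_sd=10.0):
    """Unnormalized log posterior: Normal likelihood times a weak Normal prior."""
    ll = -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)
    lp = -(theta - prior_mu) ** 2 / (2 * prior_sd ** 2)
    return ll + lp

def metropolis_hastings(data, n_iter=20000, step=0.5, seed=1):
    """Random-walk MH sampler for the posterior of a single location parameter."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_post(theta, data)
    samples = []
    for i in range(n_iter):
        prop = theta + rng.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        if i >= n_iter // 4:                       # discard burn-in
            samples.append(theta)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2]  # synthetic observations, mean 2.05
samples = metropolis_hastings(data)
post_mean = sum(samples) / len(samples)
```

With a weak prior, the posterior mean should sit close to the sample mean of the data; the chain's spread approximates the parameter uncertainty that such studies report.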
Do different methods of modeling statin treatment effectiveness influence the optimal decision?
B.J.H. van Kempen (Bob); B.S. Ferket (Bart); A. Hofman (Albert); S. Spronk (Sandra); E.W. Steyerberg (Ewout); M.G.M. Hunink (Myriam)
2012-01-01
Purpose. Modeling studies that evaluate statin treatment for the prevention of cardiovascular disease (CVD) use different methods to model the effect of statins. The aim of this study was to evaluate the impact of using different modeling methods on the optimal decision found in such
Using the QUAIT Model to Effectively Teach Research Methods Curriculum to Master's-Level Students
Hamilton, Nancy J.; Gitchel, Dent
2017-01-01
Purpose: To apply Slavin's model of effective instruction to teaching research methods to master's-level students. Methods: Barriers to the scientist-practitioner model (student research experience, confidence, and utility value pertaining to research methods as well as faculty research and pedagogical incompetencies) are discussed. Results: The…
Application of homotopy-perturbation method to nonlinear population dynamics models
International Nuclear Information System (INIS)
Chowdhury, M.S.H.; Hashim, I.; Abdulaziz, O.
2007-01-01
In this Letter, the homotopy-perturbation method (HPM) is employed to derive approximate series solutions of nonlinear population dynamics models. The nonlinear models considered are the multispecies Lotka-Volterra equations. The accuracy of this method is examined by comparison with the available exact solutions and with the fourth-order Runge-Kutta method (RK4)
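The RK4 benchmark mentioned above can be sketched for a two-species Lotka-Volterra system; the parameter values and initial state below are illustrative, not those of the Letter. The system's conserved quantity provides a built-in accuracy check.

```python
import math

def lotka_volterra(state, a=1.0, b=0.1, c=1.5, d=0.075):
    """Right-hand side of a two-species Lotka-Volterra system (prey x, predator y)."""
    x, y = state
    return (a * x - b * x * y, -c * y + d * x * y)

def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (p + 2 * q + 2 * r + t)
                 for s, p, q, r, t in zip(state, k1, k2, k3, k4))

def invariant(x, y, a=1.0, b=0.1, c=1.5, d=0.075):
    """Conserved quantity of the Lotka-Volterra flow, used as an accuracy check."""
    return d * x - c * math.log(x) + b * y - a * math.log(y)

state = (10.0, 5.0)
H0 = invariant(*state)
for _ in range(2000):   # integrate to t = 20 with step h = 0.01
    state = rk4_step(lotka_volterra, state, 0.01)
```

A series solution such as HPM would be compared against this trajectory pointwise; the near-constancy of the invariant indicates how trustworthy the RK4 reference itself is.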
An Application of Taylor Models to the Nakao Method on ODEs
Yamamoto, Nobito; Komori, Takashi
2009-01-01
The authors give a short survey on validated computation of initial value problems for ODEs, especially Taylor model methods. They then propose an application of Taylor models to the Nakao method, which has been developed for numerical verification methods on PDEs, and apply it to initial value problems for ODEs, with some numerical experiments.
Comparison of model reference and map based control method for vehicle stability enhancement
Baek, S.; Son, M.; Song, J.; Boo, K.; Kim, H.
2012-01-01
A map-based controller method to improve vehicle lateral stability is proposed in this study and compared with the conventional method, a model-referenced controller. A model-referenced controller to determine the compensated yaw moment uses the sliding mode method, but the proposed map based
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
A service based estimation method for MPSoC performance modelling
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand
2008-01-01
This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced.
International Nuclear Information System (INIS)
Méchi, Rachid; Farhat, Habib; Said, Rachid
2016-01-01
Nongray radiation calculations are carried out for a case problem available in the literature. The problem is a non-isothermal and inhomogeneous CO2-H2O-N2 gas mixture confined within an axisymmetric cylindrical furnace. The numerical procedure is based on the zonal method associated with the weighted sum of gray gases (WSGG) model. The effect of the wall emissivity on the heat flux losses is discussed. It is shown that this property strongly affects the furnace efficiency and that the most important heat fluxes are those leaving through the circumferential boundary. The numerical procedure adopted in this work is found to be effective and may be relied on to simulate coupled turbulent combustion-radiation in fired furnaces. (paper)
Autonomous guided vehicles methods and models for optimal path planning
Fazlollahtabar, Hamed
2015-01-01
This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...
Computational methods for structural load and resistance modeling
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
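The Monte Carlo verification mentioned above can be sketched for a simple load-resistance limit state g = R - S; the normal distributions and their parameters below are illustrative assumptions, not the AMV+ formulation or the paper's material damage model.

```python
import random

def mc_failure_prob(n=100_000, seed=42):
    """Monte Carlo estimate of P(g < 0) for the limit state g = R - S.

    R (resistance) and S (load) are independent normals (illustrative values):
    R ~ N(5, 1), S ~ N(3, 1), so analytically g ~ N(2, sqrt(2))."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(5.0, 1.0) - rng.gauss(3.0, 1.0) < 0.0)
    return fails / n

pf = mc_failure_prob()
```

Since g is normal with mean 2 and standard deviation sqrt(2), the reliability index is beta = sqrt(2) and the exact failure probability is Phi(-sqrt(2)) ≈ 0.079; the Monte Carlo estimate should agree to within sampling error.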
Model-independent determination of dissociation energies: method and applications
International Nuclear Information System (INIS)
Vogel, Manuel; Hansen, Klavs; Herlert, Alexander; Schweikhard, Lutz
2003-01-01
A number of methods are available for the purpose of extracting dissociation energies of polyatomic particles. Many of these techniques relate the rate of disintegration at a known excitation energy to the value of the dissociation energy. However, such a determination is susceptible to systematic uncertainties, mainly due to the unknown thermal properties of the particles and the potential existence of 'dark' channels, such as radiative cooling. These problems can be avoided with a recently developed procedure, which applies energy-dependent reactions of the decay products as an uncalibrated thermometer. Thus, it allows a direct measurement of dissociation energies, without any assumption on properties of the system or on details of the disintegration process. The experiments have been performed in a Penning trap, where both rate constants and branching ratios have been measured. The dissociation energies determined with different versions of the method yield identical values, within a small uncertainty
A modified Rietveld method to model highly anisotropic ceramics
International Nuclear Information System (INIS)
Tutuncu, G.; Motahari, M.; Daymond, M.R.; Ustundag, E.
2012-01-01
High energy X-ray diffraction was employed to probe the complex constitutive behavior of a polycrystalline ferroelectric material in various sample orientations. Pb(Zn,Nb)O3–Pb(Zr,Ti)O3 (PZN–PZT) ceramics were subjected to a cyclic bipolar electric field while diffraction patterns were taken. Using transmission geometry and a two-dimensional detector, lattice strain and texture evolution (domain switching) were measured in multiple sample directions simultaneously. In addition, texture analysis suggests that non-180° domain switching is coupled with lattice strain evolution during uniaxial electrical loading. As a result of this material's high strain anisotropy, the full-pattern Rietveld method was inadequate to analyze the diffraction data. Instead, a modified Rietveld method, which includes an elastic anisotropy term, yielded significant improvements in the data analysis results.
The spectral cell method in nonlinear earthquake modeling
Giraldo, Daniel; Restrepo, Doriam
2017-12-01
This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked with results from overkill solutions and from MIDAS GTS NX, a finite element software for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. Our findings show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to ensure displacement errors below 10%.
Comparison of operation optimization methods in energy system modelling
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2013-01-01
In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may pose this ability. In order to evaluate if the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the energy technologies used. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary
Modeling of Airfoil Trailing Edge Flap with Immersed Boundary Method
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2011-01-01
The present work considers incompressible flow over a 2D airfoil with a deformable trailing edge. The aerodynamic characteristics of an airfoil with a trailing edge flap are numerically investigated using computational fluid dynamics. A novel hybrid immersed boundary (IB) technique is applied to simulate the moving part of the trailing edge. Over the main fixed part of the airfoil the Navier-Stokes (NS) equations are solved using a standard body-fitted finite volume technique, whereas the moving trailing edge flap is simulated with the immersed boundary method on a curvilinear mesh. The obtained results show that the hybrid approach is an efficient and accurate method for solving turbulent flows past airfoils with a trailing edge flap, and flow control using a trailing edge flap is an efficient way to regulate the aerodynamic loading on airfoils.
Method of modeling transmissions for real-time simulation
Hebbale, Kumaraswamy V.
2012-09-25
A transmission modeling system includes an in-gear module that determines an in-gear acceleration when a vehicle is in gear. A shift module determines a shift acceleration based on a clutch torque when the vehicle is shifting between gears. A shaft acceleration determination module determines a shaft acceleration based on at least one of the in-gear acceleration and the shift acceleration.
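The module structure described above can be sketched as a single selection function; the function name, the mode flags, and the torque-to-acceleration relation below are illustrative assumptions, not the patented implementation.

```python
def shaft_acceleration(mode, in_gear_accel, clutch_torque, inertia):
    """Select the shaft acceleration source depending on transmission state.

    mode "in_gear": use the in-gear module's acceleration directly.
    mode "shifting": derive a shift acceleration from the clutch torque
    (here via a simple torque/inertia relation, an illustrative placeholder)."""
    if mode == "in_gear":
        return in_gear_accel
    if mode == "shifting":
        return clutch_torque / inertia
    raise ValueError("unknown mode: " + mode)
```

In a real-time simulation loop, the shaft acceleration determination module would call this once per time step with the current transmission state.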
Stochastic model and method of zoning water networks
Тевяшев, Андрей Дмитриевич; Матвиенко, Ольга Ивановна
2014-01-01
Water consumption at different times of the day is uneven. The model of steady flow distribution in water-supply networks is calculated for maximum consumption and is effectively used in network design and reconstruction. Quasi-stationary modes, in which the parameters are random variables varying relative to their mean values, are more suitable for operational management and planning of rational network operation modes. Leaks, which sometimes exceed 50 % of the volume of water supplied, are o...
1-g model loading tests: methods and results
Czech Academy of Sciences Publication Activity Database
Feda, Jaroslav
1999-01-01
Vol. 2, No. 4 (1999), pp. 371-381. ISSN 1436-6517. [Int. Conf. on Soil-Structure Interaction in Urban Civil Engineering, Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10. Keywords: shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation. Subject RIV: JM - Building Engineering
A Data Pre-Processing Model for the Topsis Method
Directory of Open Access Journals (Sweden)
Kobryń Andrzej
2016-12-01
TOPSIS is one of the most popular methods of multi-criteria decision making (MCDM). Its fundamental role is to establish a ranking of the chosen alternatives based on their distance from the ideal and negative-ideal solutions. Three primary versions of the TOPSIS method are distinguished: classical, interval and fuzzy, whose calculation algorithms are adjusted to the character of the input ratings of the decision-making alternatives (real numbers, interval data or fuzzy numbers). Various specialist publications describe the use of particular versions of the TOPSIS method in the decision-making process; the fuzzy version is particularly popular. It should be noticed, however, that depending on the character of the accepted criteria, the ratings of the alternatives can be heterogeneous. The present paper suggests a means of proceeding when the set of criteria covers criteria characteristic of each of the mentioned versions of TOPSIS, as a result of which the rating of the alternatives is vague. The calculation procedure is illustrated by an adequate numerical example.
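The classical TOPSIS version referred to above (vector normalization, weighted distances to the ideal and negative-ideal solutions, relative closeness) can be sketched directly; the decision matrix below is a made-up example, not the paper's numerical case.

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by relative closeness to the ideal.

    matrix: rows = alternatives, columns = criteria (real numbers);
    weights: criterion weights; benefit[j]: True for benefit, False for cost."""
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and negative-ideal solutions per criterion
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# 3 alternatives x 2 criteria: first column is a benefit, second a cost
scores = topsis([[7, 9], [8, 7], [9, 6]], [0.5, 0.5], [True, False])
```

In this example the third alternative dominates on both criteria, so it coincides with the ideal solution and receives a closeness score of 1.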
New Methods for Air Quality Model Evaluation with Satellite Data
Holloway, T.; Harkey, M.
2015-12-01
Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
The modelling of a family of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal determining the generation quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the geometrical generation error, as a component of the total error. Modelling the generation process allows highlighting potential errors of the generating tool, in order to correct its profile before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of "relative generating trajectories". The analytical foundation is presented, as well as some applications for known models of rack-gear type tools used on Maag teething machines.
Integral equation models for image restoration: high accuracy methods and fast algorithms
International Nuclear Information System (INIS)
Lu, Yao; Shen, Lixin; Xu, Yuesheng
2010-01-01
Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
Method for quantifying logic models for safety analysis
International Nuclear Information System (INIS)
Erdmann, R.C.; Kelly, J.E.; Kirch, H.R.; Leverenz, F.L.; Rumble, E.T.
1977-01-01
The accomplishment of any detailed reliability or risk analysis task involves both engineering judgement and accurate analytical procedures. In this paper, procedures are described which have been programmed so that a variety of information concerning reliability, availability, risk assessment, and cost impact can be evaluated quickly and accurately. Utilizing a common input deck, the WAM codes efficiently and accurately provide information about systems modeled by any Boolean function. This information includes: (1) Point estimates of the system (top event) reliability (or unreliability) together with the reliability of any event within the system (WAM-BAM code). (2) A reevaluation of the system as described in (1) with changes made to the probability of occurrence of basic events (WAM-TAP code). (3) Qualitative assessment of the system in terms of failures (cut sets) which cause the system to fail and which cause any event within the system to occur (WAM-CUT code). (4) Qualitative assessment of the system and events within the system together with the first and, if desired, second moment of the probability of the events being analyzed. This allows modeling the basic system components as random variables with a mean and standard deviation included in the model (WAM-CUT code). (5) Qualitative assessment of the system which is displayed in terms of cut sets and the probability polynomial (WAM-CUT code). This can be stored for use by a Monte Carlo code which allows determination of the distribution of the system reliability as a function of component distributions (SPASM code). (6) A drawing of the fault tree as input to the evaluation codes (WAM-DRAW). This paper describes the development of these codes and presents example problems which illustrate the codes' capabilities
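Capability (3), qualitative cut-set assessment, can be illustrated with a minimal sketch: expand a fault tree of AND/OR gates into minimal cut sets and sum their probabilities as a rare-event approximation. The tree encoding and function names below are generic assumptions, not the WAM implementation.

```python
import math
from itertools import product

def cut_sets(gate):
    """Expand a fault tree into its minimal cut sets.

    gate: ("basic", name) | ("or", [children]) | ("and", [children])."""
    kind = gate[0]
    if kind == "basic":
        return [frozenset([gate[1]])]
    child_sets = [cut_sets(g) for g in gate[1]]
    if kind == "or":
        sets = [cs for group in child_sets for cs in group]
    else:  # "and": union one cut set drawn from each child
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    sets = list(set(sets))  # drop duplicates
    # minimize: discard any cut set that strictly contains another
    return [s for s in sets if not any(t < s for t in sets)]

def top_event_prob(sets, p):
    """First-order (rare-event) approximation: sum of cut-set probabilities."""
    return sum(math.prod(p[e] for e in s) for s in sets)

# Example tree: TOP = A OR (B AND C)
tree = ("or", [("basic", "A"), ("and", [("basic", "B"), ("basic", "C")])])
```

For this tree the minimal cut sets are {A} and {B, C}; with p(A)=0.01, p(B)=0.1, p(C)=0.2 the approximate top-event probability is 0.01 + 0.02 = 0.03.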
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches can well treat the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates measured by the Nash-Sutcliffe efficiency and the best uncertainty estimates for low, medium and entire flows compared to standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
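The Nash-Sutcliffe efficiency used to score the simulations is a standard measure and can be written directly: one minus the ratio of the simulation's squared error to the variance of the observations.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated series against observations.

    NSE = 1 for a perfect simulation; NSE = 0 means the simulation is no
    better than predicting the observed mean; NSE < 0 means it is worse."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot
```

Because the squared errors weight large residuals heavily, NSE is dominated by high flows, which is precisely why the modularization approach above treats the highest observations separately.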
Mathematical models and methods of localized interaction theory
Bunimovich, AI
1995-01-01
The interaction of the environment with a moving body is called "localized" if it has been found or assumed that the force and/or thermal influence of the environment on each body surface point is independent and can be determined by the local geometrical and kinematical characteristics of this point, as well as by the parameters of the environment and body-environment interactions, which are the same for the whole surface of contact. Such models are widespread in aerodynamics and gas dynamics, covering supersonic and hypersonic flows, and rarefied gas flows. They describe the influence of light
Efficient modeling of chiral media using SCN-TLM method
Directory of Open Access Journals (Sweden)
Yaich M.I.
2004-01-01
An efficient approach is presented that allows linear bi-isotropic chiral materials to be included in time-domain transmission line matrix (TLM) calculations by employing recursive evaluation of the convolution of the electric and magnetic fields with the susceptibility functions. The new technique consists of adding both voltage and current sources in supplementary stubs of the symmetrical condensed node (SCN) of the TLM method. In this article, the details and a complete description of this approach are given. A comparison of the obtained numerical results with those of the literature confirms its validity and efficiency.
SO2 oxidation catalyst model systems characterized by thermal methods
DEFF Research Database (Denmark)
Hatem, G; Eriksen, Kim Michael; Gaune-Escard, M
2002-01-01
The molten salts M2S2O7 and MHSO4, the binary molten salt systems M2S2O7-MHSO4, and the molten salt-gas systems M2S2O7-V2O5 and M2S2O7-M2SO4-V2O5 (M = Na, K, Rb, Cs) in O2, SO2 and Ar atmospheres have been investigated by thermal methods such as calorimetry and Differential Enthalpic Analysis (DEA) and ... to the mechanism of SO2 oxidation by V2O5-based industrial catalysts.
Mathematic modeling of the method of measurement relative dielectric permeability
Plotnikova, I. V.; Chicherina, N. V.; Stepanov, A. B.
2018-05-01
The method of measuring relative permittivities and the position of the interface between layers of a liquid medium is considered in this article. An electric capacitor is a system of two conductors separated by a dielectric layer. It is shown mathematically that at any given time it is possible to obtain the values of the relative permittivity in the layers of the liquid medium and to determine the level of the interface between the layers of a two-layer liquid. An estimation of the measurement errors is made.
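Assuming a simple parallel-plate geometry with the two dielectric layers in series (an illustrative assumption; the paper's capacitor cell and excitation may differ), the interface level follows from the measured capacitance, since the total capacitance is linear in the layer thicknesses.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def series_capacitance(area, d1, eps1, d2, eps2):
    """Capacitance of a parallel-plate capacitor with two dielectric layers
    of thickness d1, d2 and relative permittivities eps1, eps2 in series."""
    return EPS0 * area / (d1 / eps1 + d2 / eps2)

def interface_level(area, d_total, eps1, eps2, c_measured):
    """Solve for the thickness d1 of layer 1 given a measured capacitance.

    From C = EPS0*A / (d1/eps1 + (d_total - d1)/eps2), the bracketed term is
    linear in d1, so d1 can be isolated directly (requires eps1 != eps2)."""
    s = EPS0 * area / c_measured  # = d1/eps1 + (d_total - d1)/eps2
    return (s - d_total / eps2) / (1.0 / eps1 - 1.0 / eps2)
```

A forward computation followed by inversion recovers the assumed interface position, which is the kind of consistency check the article's error estimation would quantify.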
New proposal of moderator temperature coefficient estimation method using gray-box model in NPP, (1)
International Nuclear Information System (INIS)
Mori, Michitsugu; Kagami, Yuichi; Kanemoto, Shigeru; Enomoto, Mitsuhiro; Tamaoki, Tetsuo; Kawamura, Shinichiro
2004-01-01
The purpose of the present paper is to establish a new void reactivity coefficient (VRC) estimation method based on the gray box modeling concept. The gray box model consists of a point kinetics model as the first-principles model and a fitting model of moderator temperature kinetics. Applying Kalman filter and maximum likelihood estimation algorithms to the gray box model, the MTC can be estimated. The verification test is done by Monte Carlo simulation, and it is shown that the present method gives the best estimation results compared with the conventional methods from the viewpoints of unbiased and smallest-scatter estimation performance. Furthermore, the method is verified via real plant data analysis. The good performance of the present method is explained by the proper definition of the likelihood function based on an explicit expression of observation and system noise in the gray box model. (author)
Review of Wind Energy Forecasting Methods for Modeling Ramping Events
Energy Technology Data Exchange (ETDEWEB)
Wharton, S; Lundquist, J K; Marjanovic, N; Williams, J L; Rhodes, M; Chow, T K; Maxwell, R
2011-03-28
Tall onshore wind turbines, with hub heights between 80 m and 100 m, can extract large amounts of energy from the atmosphere since they generally encounter higher wind speeds, but they face challenges given the complexity of boundary layer flows. This complexity of the lowest layers of the atmosphere, where wind turbines reside, has made conventional modeling efforts less than ideal. To meet the nation's goal of increasing wind power into the U.S. electrical grid, the accuracy of wind power forecasts must be improved. In this report, the Lawrence Livermore National Laboratory, in collaboration with the University of Colorado at Boulder, University of California at Berkeley, and Colorado School of Mines, evaluates innovative approaches to forecasting sudden changes in wind speed or 'ramping events' at an onshore, multimegawatt wind farm. The forecast simulations are compared to observations of wind speed and direction from tall meteorological towers and a remote-sensing Sound Detection and Ranging (SODAR) instrument. Ramping events, i.e., sudden increases or decreases in wind speed and hence, power generated by a turbine, are especially problematic for wind farm operators. Sudden changes in wind speed or direction can lead to large power generation differences across a wind farm and are very difficult to predict with current forecasting tools. Here, we quantify the ability of three models, mesoscale WRF, WRF-LES, and PF.WRF, which vary in sophistication and required user expertise, to predict three ramping events at a North American wind farm.
Modelling Ischemic Stroke and Temperature Intervention Using Vascular Porous Method
Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael
2017-11-01
In the event of cerebral infarction, a region of tissue is supplied with insufficient blood flow to support normal metabolism. This can lead to an ischemic reaction that causes cell death. Reducing the temperature lowers the metabolic demand, which delays the onset of necrosis. This allows extra time for the patient to receive medical attention and could help prevent permanent brain damage. Here, we present a vascular-porous (VaPor) blood flow model that can simulate such an event. Cerebral blood flow is simulated using a combination of one-dimensional vessels embedded in three-dimensional porous media. This allows simple manipulation of the structure and determination of the effect of an obstructed vessel. Results show a regional temperature increase of 1-1.5°C, comparable with results from the literature (in contrast to previous, simpler models). Additionally, applying scalp cooling in such an event dramatically reduces the temperature in the affected region to near-hypothermic temperatures, suggesting a potential rapid form of first intervention.
A Design Method of Robust Servo Internal Model Control with Control Input Saturation
山田, 功; 舩見, 洋祐
2001-01-01
In the present paper, we examine a design method for robust servo Internal Model Control with control input saturation. First, we clarify the conditions under which Internal Model Control has robust servo characteristics for a system with control input saturation. From this analysis, we propose a new design method for Internal Model Control with robust servo characteristics. A numerical example illustrates the effectiveness of the proposed method.
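A minimal discrete-time sketch of the IMC structure with input saturation, for a first-order plant with a perfect internal model; all numbers are illustrative assumptions and the paper's own design conditions are not reproduced here. Feeding the saturated input to the internal model is what keeps the scheme consistent with the actuator:

```python
import numpy as np

# First-order plant y[k+1] = a*y[k] + b*u[k], assumed perfectly modeled.
a, b = 0.9, 0.1
u_max = 1.5        # control input saturation limit (illustrative)
lam = 0.5          # pole of the first-order IMC robustness filter

r = 1.0            # step setpoint
y = ym = v = 0.0   # plant output, internal-model output, filter state
sat_hit = False
for _ in range(200):
    e = r - (y - ym)                 # IMC feedback: setpoint minus model mismatch
    v = lam * v + (1 - lam) * e      # robustness filter
    u = (v - a * ym) / b             # one-step model inverse: drives ym toward v
    u_sat = float(np.clip(u, -u_max, u_max))
    sat_hit = sat_hit or (u_sat != u)
    # The internal model is driven by the *saturated* input, which keeps the
    # controller state consistent with the actuator (no windup).
    ym = a * ym + b * u_sat
    y = a * y + b * u_sat            # plant (no disturbance in this sketch)
```

Although the actuator saturates during the transient, the output still settles on the setpoint without windup, because the controller never acts on an input the plant did not actually receive.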
A QUADTREE ORGANIZATION CONSTRUCTION AND SCHEDULING METHOD FOR URBAN 3D MODEL BASED ON WEIGHT
C. Yao; G. Peng; Y. Song; M. Duan
2017-01-01
The increasing precision and data volume of urban 3D models place higher demands on real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering quality and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduling method for rendering urban 3D models. The urban 3D model is divided into different rendering weigh...
R and D on automatic modeling methods for Monte Carlo codes FLUKA
International Nuclear Information System (INIS)
Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng
2013-01-01
FLUKA is a fully integrated particle physics Monte Carlo simulation package. Geometry models must be created before calculation; however, describing them manually is time-consuming and error-prone. This study developed an automatic modeling method that converts computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM), and its correctness has been demonstrated. (authors)
Justification of the concept of mathematical methods and models in making decisions on taxation
KORKUNA NATALIA MIKHAYLOVNA
2017-01-01
The paper presents the concept of applying mathematical methods and models to taxation decisions in Ukraine as a phased process, the outcome of which is the selection of an effective decision based on regression and optimization models.
Parameter-free methods distinguish Wnt pathway models and guide design of experiments
MacLean, Adam L.; Rosen, Zvi; Byrne, Helen M.; Harrington, Heather A.
2015-01-01
models can fit this time course. We appeal to algebraic methods (concepts from chemical reaction network theory and matroid theory) to analyze the models without recourse to specific parameter values. These approaches provide insight into aspects of Wnt
Vatcheva, Ivayla; Bernard, Olivier; de Jong, Hidde; Gouze, Jean-Luc; Mars, Nicolaas; Nebel, B.
2001-01-01
Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of
International Nuclear Information System (INIS)
Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.
2006-01-01
A comparative analysis of optimization and simulation methods, using the MESSAGE and DESAE programs as examples, is carried out for modeling nuclear power prospects and advanced fuel cycles. Test calculations are performed for open and two-component nuclear power systems and a closed fuel cycle. An auxiliary simulation-dynamic model is developed to clarify the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given.
Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper
Hock, Heinrich; Isenberg, Eric
2012-01-01
Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Forward modeling of seismic waves can become unstable when large time steps are used over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations of the symplectic FFD method for seismic wavefield modeling in isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and keeps the wavefield propagation stable for large time steps.
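The time-stepping idea can be illustrated in one dimension: a symplectic (leapfrog/Verlet) step in time combined with spectral Fourier derivatives in space for the acoustic wave equation. This is a simplified stand-in for the paper's symplectic FFD scheme, not its actual implementation:

```python
import numpy as np

# 1D acoustic wave u_tt = c^2 u_xx on a periodic domain:
# symplectic Verlet step in time, spectral derivatives in space.
N, L, c = 128, 2 * np.pi, 1.0
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # spectral wavenumbers

def laplacian(u):
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

def energy(u, p):
    """Discrete Hamiltonian: kinetic plus elastic energy."""
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return 0.5 * np.sum(p ** 2 + (c * ux) ** 2) * (L / N)

dt = 0.01
u = np.exp(-10 * (x - np.pi) ** 2)   # initial displacement pulse
p = np.zeros(N)                      # initial velocity

E0 = energy(u, p)
for _ in range(5000):                # long integration: 50 time units
    p += 0.5 * dt * c ** 2 * laplacian(u)   # half kick
    u += dt * p                             # drift
    p += 0.5 * dt * c ** 2 * laplacian(u)   # half kick
E1 = energy(u, p)
```

Because the integrator is symplectic, the discrete energy stays bounded over long integrations instead of drifting, which is the structure-preserving stability property the abstract highlights.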
International Nuclear Information System (INIS)
Slifstein, Mark; Laruelle, Marc
2001-01-01
The science of quantitative analysis of PET and SPECT neuroreceptor imaging studies has grown considerably over the past decade. A number of methods have been proposed in which receptor parameter estimation results from fitting data to a model of the underlying kinetics of ligand uptake in the brain. These approaches have come to be known collectively as model-based methods, and several have received widespread use. Here, we briefly review the most frequently used methods and examine their strengths and weaknesses. Kinetic modeling is the most direct implementation of the compartment models, but for some tracers accurate input function measurement and good compartment configuration identification can be difficult to obtain. Other methods were designed to overcome some particular vulnerability to error of classical kinetic modeling, but introduced new vulnerabilities in the process. Reference region methods obviate the need for arterial plasma measurement, but are not as robust to violations of the underlying modeling assumptions as methods using the arterial input function. Graphical methods give estimates of V_T without requiring compartment model specification, but provide a biased estimator in the presence of statistical noise. True equilibrium methods are quite robust, but their use is limited to experiments with tracers that are suitable for constant infusion. In conclusion, there is no universally 'best' method applicable to all neuroreceptor imaging studies, and careful evaluation of model-based methods is required for each radiotracer.
A Survey On Physical Methods For Deformation Modeling
Directory of Open Access Journals (Sweden)
Huda Basloom
2015-08-01
Full Text Available Much effort has been dedicated to achieving realism in the simulation of deformable objects such as cloth, hair, rubber, sea water, smoke, and human soft tissue in surgical simulation. A deformable object in these simulations should exhibit physically correct behavior, true to the behavior of real objects, when any force is applied to it, and sometimes this requires real-time simulation. No matter how complex the geometry is, real-time simulation is still required in some applications; surgery simulation is an example of this need. This situation has attracted the attention of a wide community of researchers, including computer scientists, mechanical engineers, biomechanics specialists, and computational geometers. This paper presents a review of the techniques for modeling deformable objects that have been developed over the last three decades for interactive computer graphics applications.
An approximation method for diffusion based leaching models
International Nuclear Information System (INIS)
Shukla, B.S.; Dignam, M.J.
1987-01-01
In connection with the fixation of nuclear waste in a glassy matrix equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations are solved to a satisfactory approximation for a matrix dissolving at a constant rate in a finite volume of leachant to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contribution to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)
Practical application of stereological methods in experimental kidney animal models.
Fernández García, María Teresa; Núñez Martínez, Paula; García de la Fuente, Vanessa; Sánchez Pitiot, Marta; Muñiz Salgueiro, María Del Carmen; Perillán Méndez, Carmen; Argüelles Luis, Juan; Astudillo González, Aurora
The kidneys are vital organs responsible for excretion, fluid and electrolyte balance and hormone production. The nephrons are the kidney's functional and structural units. The number, size and distribution of the nephron components contain relevant information on renal function. Stereology is a branch of morphometry that applies mathematical principles to obtain three-dimensional information from serial, parallel and equidistant two-dimensional microscopic sections. Because of the complexity of stereological studies and the lack of scientific literature on the subject, the aim of this paper is to clearly explain, through animal models, the basic concepts of stereology and how to calculate the main kidney stereological parameters that can be applied in future experimental studies. Copyright © 2016 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
Models and methods for building web recommendation systems
Stekh, Yu.; Artsibasov, V.
2012-01-01
The modern World Wide Web contains a large number of web sites, each with many pages. Web recommendation systems (recommendation systems for web pages) are typically implemented on web servers and use data obtained from collections of viewed web templates (implicit data) or from user registration data (explicit data). This article considers methods and algorithms for web recommendation systems based on data mining technology (web mining).
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method with alternative methods using the Euclidean distance between coefficient vectors and with methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers, since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
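The distance-and-grouping step can be sketched as follows. The coefficient vectors, their assumed covariance, and the simple threshold-based grouping are illustrative stand-ins (the paper clusters the Mahalanobis distance matrix with standard hierarchical methods):

```python
import numpy as np

# Hypothetical per-species logistic-regression coefficient vectors (rows) and
# an assumed common covariance of the coefficient estimates.
rng = np.random.default_rng(1)
group_a = rng.normal([2.0, -1.0, 0.5], 0.1, size=(4, 3))
group_b = rng.normal([-1.5, 1.0, -0.5], 0.1, size=(4, 3))
coefs = np.vstack([group_a, group_b])
cov_inv = np.linalg.inv(np.diag([0.2, 0.2, 0.2]))

n = coefs.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d = coefs[i] - coefs[j]
        D[i, j] = np.sqrt(d @ cov_inv @ d)   # generalized Mahalanobis distance

# Group species as connected components of the graph D[i, j] < threshold,
# a simple stand-in for hierarchical clustering of the distance matrix.
threshold = 3.0
labels = [-1] * n
group = 0
for start in range(n):
    if labels[start] == -1:
        stack = [start]
        labels[start] = group
        while stack:
            u = stack.pop()
            for v in range(n):
                if labels[v] == -1 and D[u, v] < threshold:
                    labels[v] = group
                    stack.append(v)
        group += 1
```

Weighting distances by the inverse covariance is what distinguishes the Mahalanobis approach from plain Euclidean distance: coefficients estimated with low precision contribute less to the species-to-species distance.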
Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method
Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun
2017-10-01
Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended-load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate the scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that the developed model can reliably simulate local scour.
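The sand slide correction can be sketched in one dimension: wherever the local bed slope exceeds the angle of repose, sediment is moved downslope until the slope is admissible, conserving volume. This 1D sketch illustrates the idea only and is not the paper's 2D implementation:

```python
import numpy as np

def sand_slide(bed, dx, angle_repose_deg=30.0, iterations=200):
    """Relax a 1D bed profile so that no local slope exceeds the angle of
    repose, conserving sediment volume (a simplified 1D analogue of the
    2D sand slide correction)."""
    bed = np.array(bed, dtype=float)
    max_slope = np.tan(np.radians(angle_repose_deg))
    for _ in range(iterations):
        moved = False
        for i in range(len(bed) - 1):
            slope = (bed[i + 1] - bed[i]) / dx
            if abs(slope) > max_slope:
                # Move half the excess height from the high cell to the low
                # cell; this sets the pair exactly to the limiting slope.
                excess = (abs(slope) * dx - max_slope * dx) / 2
                s = np.sign(slope)
                bed[i + 1] -= s * excess
                bed[i] += s * excess
                moved = True
        if not moved:       # every pair satisfies the angle of repose
            break
    return bed

bed0 = [0.0, 0.0, 1.0, 0.0, 0.0]   # a spike steeper than the angle of repose
bed1 = sand_slide(bed0, dx=0.5)
```

Each local fix may steepen a neighboring pair, so the sweep is repeated until no cell violates the limit; the total sediment volume is unchanged throughout.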
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii by the least-squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by least-squares linear regression. Alternatively, the Gauss-Newton method solves the non-linear least-squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for Botryococcus braunii can be estimated by either approach; however, the estimates obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since its SSE is smaller.
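Both fitting routes can be sketched with synthetic data; the substrate values, noise level, and true parameters below are illustrative assumptions, not the paper's measurements. The linearized fit (a reciprocal, Lineweaver-Burk-type transform) seeds a Gauss-Newton refinement of the original non-linear model:

```python
import numpy as np

def monod(S, mu_max, Ks):
    """Monod growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

# Synthetic growth-rate observations (illustrative, not from the paper)
rng = np.random.default_rng(2)
S = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu_obs = monod(S, 1.2, 1.5) + rng.normal(0, 0.02, S.size)

# Linearized least squares: 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
A = np.vstack([1.0 / S, np.ones_like(S)]).T
slope, intercept = np.linalg.lstsq(A, 1.0 / mu_obs, rcond=None)[0]
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Gauss-Newton on the original non-linear model, seeded by the linear fit
theta = np.array([mu_max_lin, Ks_lin])
for _ in range(50):
    mu_max, Ks = theta
    resid = mu_obs - monod(S, mu_max, Ks)
    J = np.column_stack([S / (Ks + S),                   # d mu / d mu_max
                         -mu_max * S / (Ks + S) ** 2])   # d mu / d Ks
    step = np.linalg.lstsq(J, resid, rcond=None)[0]
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break

sse_lin = np.sum((mu_obs - monod(S, mu_max_lin, Ks_lin)) ** 2)
sse_gn = np.sum((mu_obs - monod(S, *theta)) ** 2)
```

Because the reciprocal transform distorts the error structure (small rates get large weights), the Gauss-Newton fit of the untransformed model attains a smaller SSE, which is the comparison the abstract reports.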
Optimisation models and solution methods for load management
Energy Technology Data Exchange (ETDEWEB)
Gustafsson, Stig-Inge [Linkoeping Univ. (Sweden). Div. of Wood Science and Technology; Roennqvist, Mikael; Claesson, Marcus [Linkoeping Univ. (Sweden). Div. of Optimisation
2001-02-01
The electricity market in Sweden has changed during recent years. Electricity for industrial use can nowadays be purchased from a number of competing electricity suppliers. Hence, the price for each kilowatt-hour is significantly lower than just two years ago, and interest in electricity conservation measures has declined. Part of the electricity tariff is, however, almost the same as before, i.e. the demand cost, expressed in Swedish Kronor (SEK) for each kilowatt. This has put the focus on load management measures to decrease this specific cost. Saving one kWh might yield monetary savings between 0.22 and 914 SEK, and this paper shows how to save only those kWh which really save money. A load management system has been installed in a small carpentry factory; the device can turn off equipment according to a set priority and for a number of minutes each hour. The question is then: what level of electricity load is optimal in a strict mathematical sense, i.e. how many kW should be set in the load management computer in order to get the best profitability? In this paper we develop a mathematical model which can be used both as a tool to find the most profitable subscription level and as a tool to control the turn-off choices. Numerical results from a case study are presented.
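The subscription-level question can be sketched as a one-dimensional cost minimization: the demand charge grows with the subscribed level, while the cost of shed production shrinks. The load profile and tariff figures below are illustrative assumptions, not the case-study data:

```python
import numpy as np

# Hypothetical hourly load profile for one month (kW): a daily cycle
# plus noise. All tariff figures are illustrative.
rng = np.random.default_rng(3)
hours = 720
load = 40 + 15 * np.sin(np.linspace(0, 12 * np.pi, hours)) + rng.normal(0, 3, hours)

demand_cost = 50.0   # SEK per subscribed kW and month (assumed)
shed_cost = 2.0      # SEK per kWh of production lost when load is shed (assumed)

def total_cost(level):
    """Demand charge for the subscribed level plus the cost of shedding
    every kWh that exceeds it."""
    shed_kwh = np.clip(load - level, 0.0, None).sum()
    return demand_cost * level + shed_cost * shed_kwh

# Brute-force search over candidate subscription levels
levels = np.arange(20.0, 70.0, 0.5)
best = min(levels, key=total_cost)
```

The optimum lies strictly below the peak load: shaving the rare peak hours costs far less in shed production than the demand charge they would otherwise incur, which is exactly the trade-off the load management computer is set to exploit.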
Methane Feedback on Atmospheric Chemistry: Methods, Models, and Mechanisms
Holmes, Christopher D.
2018-04-01
The atmospheric methane (CH4) chemical feedback is a key process for understanding the behavior of atmospheric CH4 and its environmental impact. This work reviews how the feedback is defined and used, then examines the meteorological, chemical, and emission factors that control the feedback strength. Geographical and temporal variations in the feedback are described and explained by HOx (HOx = OH + HO2) production and partitioning. Different CH4 boundary conditions used by models, however, make no meaningful difference to the feedback calculation. The strength of the CH4 feedback depends on atmospheric composition, particularly the atmospheric CH4 burden, and is therefore not constant. Sensitivity tests show that the feedback depends very weakly on temperature, insolation, water vapor, and emissions of NO. While the feedback strength has likely remained within 10% of its present value over the industrial era and likely will over the twenty-first century, neglecting these changes biases our understanding of CH4 impacts. Most environmental consequences per kg of CH4 emissions, including its global warming potential (GWP), scale with the perturbation time, which may have grown as much as 40% over the industrial era and continues to rise.
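The feedback-factor arithmetic implied above can be sketched as follows; the sensitivity and budget lifetime are illustrative values in the commonly reported range, not numbers taken from this paper:

```python
# CH4 feedback-factor sketch. The sensitivity s = d ln(tau) / d ln(CH4)
# captures how the CH4 lifetime lengthens as the burden grows (because OH,
# the main sink, is depleted). Values here are illustrative assumptions.
s = 0.28                     # assumed lifetime sensitivity to the CH4 burden
f = 1.0 / (1.0 - s)          # feedback factor = tau_perturbation / tau_budget

tau_budget = 9.1             # budget (steady-state) lifetime in years, assumed
tau_pert = f * tau_budget    # perturbation lifetime used in GWP-type metrics
```

With these assumed values, the perturbation lifetime comes out roughly 40% longer than the budget lifetime, which illustrates why changes in the feedback strength over the industrial era feed directly into GWP and other per-kg impact metrics.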