Geometric simplification of analysis models
Energy Technology Data Exchange (ETDEWEB)
Watterberg, P.A.
1999-12-01
Analysis programs must deal with ever more complex objects as the capability to model fine detail increases, which can make them unacceptably slow. This project attempts to find heuristics for removing features from models automatically in order to reduce polygon count. The approach is not one of theoretical completeness but rather one of trying to achieve useful results with scattered practical ideas. By removing a few simple features such as screw holes, slots, chamfers, and fillets, large gains can be realized. Results varied, but a reduction in the number of polygons by a factor of 10 is not unusual.
An Integrated Simplification Approach for 3D Buildings with Sloped and Flat Roofs
Directory of Open Access Journals (Sweden)
Jinghan Xie
2016-07-01
Full Text Available Simplification of three-dimensional (3D) buildings is critical to improving the efficiency of visualizing urban environments while ensuring realistic urban scenes. Moreover, it underpins the construction of multi-scale 3D city models (3DCMs), which can be applied to study various urban issues. In this paper, we design a generic yet effective approach for simplifying 3D buildings. Instead of relying on both semantic and geometric information, our approach is based solely on geometric information, as many 3D buildings still do not include semantic information. In addition, it provides an integrated means of treating 3D buildings with either sloped or flat roofs. Two case studies, one exploring the simplification of individual 3D buildings at varying levels of complexity and the other investigating the multi-scale simplification of a cityscape, show the effectiveness of our approach.
Electric Power Distribution System Model Simplification Using Segment Substitution
Energy Technology Data Exchange (ETDEWEB)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.
2018-05-01
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
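As a toy illustration of the QSTS idea described above (repeatedly solving a static load-flow problem over a time series of loads), the sketch below uses an invented two-bus per-unit system with a fixed-point solver; the system, solver, and parameter values are ours, not the paper's:

```python
# Minimal QSTS sketch: one static load-flow solve per time step.
# The two-bus "system" and its fixed-point solver are illustrative only.

def solve_two_bus(p_load, v_source=1.0, r_line=0.01):
    """Solve the load-bus voltage of a toy two-bus load flow,
    v * (v_source - v) / r_line = p_load, in per-unit."""
    v = v_source  # flat start
    for _ in range(50):  # fixed-point iteration
        v = v_source - r_line * p_load / v
    return v

def qsts(load_profile):
    """Re-solve the load flow for each time step and collect voltages."""
    return [solve_two_bus(p) for p in load_profile]

voltages = qsts([0.2, 0.5, 0.8])  # voltage sags as the load grows
```

In a real QSTS study each step would solve a full distribution feeder model, which is exactly why model reduction such as segment substitution pays off.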
Error Analysis of Some Demand Simplifications in Hydraulic Models of Water Supply Networks
Directory of Open Access Journals (Sweden)
Joaquín Izquierdo
2013-01-01
Full Text Available Mathematical modeling of water distribution networks makes use of simplifications aimed at optimizing the development and use of the mathematical models involved. Simplified models are used systematically by water utilities, frequently with no awareness of the implications of the assumptions used. Some simplifications derive from the various levels of granularity at which a network can be considered. This is the case for some demand simplifications, specifically when consumptions associated with a line are equally allocated to the ends of the line. In this paper, we present examples of situations where this kind of simplification produces models that are very unrealistic. We also identify the main variables responsible for the errors. By performing an error analysis, we assess to what extent such a simplification is valid. Using this information, we provide guidelines that enable the user to establish whether a given simplification is acceptable or, on the contrary, supplies information that differs substantially from reality. We also develop easy-to-implement formulae that enable the allocation of inner line demand to the line ends with minimal error; finally, we assess the errors associated with the simplification and locate the points of a line where maximum discrepancies occur.
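The kind of demand simplification discussed above can be illustrated numerically. The sketch below (our own construction, not the paper's analysis) compares the head loss in a single pipe whose demand is withdrawn uniformly along its length against the common simplification of lumping half the demand at each end, assuming a quadratic friction law:

```python
# With a quadratic friction law h' = r * Q(x)**2, the head loss for
# uniformly distributed withdrawal exceeds the end-lumped estimate by
# a factor that tends to 4/3. All values are illustrative.

def head_loss_distributed(q_total, r=1.0, n_seg=10000):
    """Flow decreases linearly from q_total to 0 along the line."""
    h, dx = 0.0, 1.0 / n_seg
    for i in range(n_seg):
        x = (i + 0.5) * dx
        flow = q_total * (1.0 - x)  # remaining flow at position x
        h += r * flow**2 * dx
    return h

def head_loss_lumped(q_total, r=1.0):
    """Half the demand allocated to each end: pipe carries q_total/2."""
    return r * (q_total / 2.0) ** 2

ratio = head_loss_distributed(3.0) / head_loss_lumped(3.0)  # ~ 4/3
```

This factor is one concrete way a lumped-demand model can misrepresent the hydraulic state of a line.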
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
DEFF Research Database (Denmark)
Davidsen, Steffen; Löwe, Roland; Thrysøe, Cecilie
2017-01-01
Evaluation of pluvial flood risk is often based on computations using 1D/2D urban flood models. However, guidelines on choice of model complexity are missing, especially for one-dimensional (1D) network models. This study presents a new automatic approach for simplification of 1D hydraulic networks...... nodes eliminated connection to some areas. This promoted errors in two-dimensional (2D) flood results with changes in spatial location of flooding in the reduced 1D/2D models. Applying delayed rain inputs to compensate for changes in travel time and preserving network volume by expanding node diameters...... did not improve overall results. Investigations on the expected annual damages (EAD) showed that differences in EAD are smaller than deviations in the simulated flooded areas, suggesting that spatial changes are limited to local displacements. Probably, minor improvements of the simplification...
DEFF Research Database (Denmark)
Löwe, Roland; Davidsen, Steffen; Thrysøe, Cecilie
We present an algorithm for automated simplification of 1D pipe network models. The impact of the simplifications on the flooding simulated by coupled 1D-2D models is evaluated in an Australian case study. Significant reductions of the simulation time of the coupled model are achieved by reducing...... the 1D network model. The simplifications lead to an underestimation of flooded area because interaction points between network and surface are removed and because water is transported downstream faster. These effects can be mitigated by maintaining nodes in flood-prone areas in the simplification......
Simplification of camera models without loss of precision
Shang, Yang; Li, You; He, Yan; Wang, Weihua; Yu, Qifeng
2007-12-01
The redundancy of camera parameters and their actions in the imaging process are analyzed based on a central perspective projection model with nonlinear lens distortion. By assigning some parameter values, or relations among parameters, in advance, seven kinds of simplified camera models are presented. The validity of the simplified models is demonstrated with simulated data and engineering applications. By using the simplified camera models, videogrammetry methods and algorithms can be simplified without loss of precision: the calculation becomes faster and more stable, and the requirements on the solving conditions are reduced. These characteristics make the precision-preserving simplified camera models suitable for engineering applications.
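The idea of fixing parameter values in advance can be sketched with a generic pinhole model and one radial distortion coefficient; this is our own illustrative formulation, not one of the paper's seven simplified models:

```python
# Central perspective projection with one radial distortion term k1.
# Fixing k1 = 0 in advance yields a "simplified" distortion-free model.
# Focal lengths, principal point, and k1 below are invented values.

def project(X, Y, Z, fx, fy, cx, cy, k1=0.0):
    """Project a camera-frame 3D point to pixel coordinates."""
    x, y = X / Z, Y / Z              # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                # radial distortion factor
    return fx * d * x + cx, fy * d * y + cy

u, v = project(0.1, 0.2, 1.0, 800, 800, 320, 240)            # simplified
u_d, v_d = project(0.1, 0.2, 1.0, 800, 800, 320, 240, k1=-0.1)  # full
```

Whether such a fixed parameter is acceptable depends on how far the true distortion departs from the assumed value, which is exactly the trade-off the paper validates.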
Towards simplification of hydrologic modeling: identification of dominant processes
Directory of Open Access Journals (Sweden)
S. L. Markstrom
2016-11-01
Full Text Available The Precipitation-Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110,000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
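The study uses the Fourier amplitude sensitivity test (FAST), which is considerably more involved; as a much simpler stand-in for the same idea (variance-based, first-order sensitivity), the sketch below estimates the variance of conditional means for a toy two-parameter runoff model. The model and all parameter ranges are invented:

```python
import random

# Toy runoff model: infiltration-excess surface runoff plus a weak
# snowmelt contribution. Invented, purely for illustration.

def runoff(melt_rate, infil_cap, rain=10.0):
    surface = max(0.0, rain - infil_cap)  # infiltration excess
    return surface + 0.1 * melt_rate      # melt contributes weakly

def first_order_sensitivity(param_index, n=2000, seed=0):
    """Variance of the conditional mean when one parameter is fixed:
    a crude first-order sensitivity index (not FAST itself)."""
    rng = random.Random(seed)
    cond_means = []
    for _ in range(50):                   # fix the studied parameter
        fixed = rng.uniform(0.0, 8.0)
        total = 0.0
        for _ in range(n // 50):          # average over the other one
            other = rng.uniform(0.0, 8.0)
            args = [fixed, other] if param_index == 0 else [other, fixed]
            total += runoff(*args)
        cond_means.append(total / (n // 50))
    mean = sum(cond_means) / len(cond_means)
    return sum((m - mean) ** 2 for m in cond_means) / len(cond_means)

s_melt = first_order_sensitivity(0)
s_infil = first_order_sensitivity(1)  # infiltration dominates here
```

A modeler would conclude, as the abstract suggests, that infiltration is the dominant process for this toy output and that the melt parameter could be deprioritized.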
The Effect of Head Model Simplification on Beamformer Source Localization
Directory of Open Access Journals (Sweden)
Frank Neugebauer
2017-11-01
Full Text Available Beamformers are a widely used tool in brain analysis with magnetoencephalography (MEG) and electroencephalography (EEG). For the construction of the beamformer filters, realistic head volume conductor modeling is necessary to accurately compute the EEG and MEG leadfields, i.e., to solve the EEG and MEG forward problem. In this work, we investigate the influence of including realistic head tissue compartments in a finite element method (FEM) model on the beamformer's localization ability. Specifically, we investigate the effects of including the cerebrospinal fluid, of distinguishing gray and white matter, of segmenting the skull bone into compacta and spongiosa, and of modeling white matter anisotropy. We simulate an interictal epileptic measurement with white sensor noise. Beamformer filters are constructed with unit gain, unit array gain, and unit noise gain constraints. Beamformer source positions are determined by evaluating the power and excess sample kurtosis (g2) of the source waveforms at all source-space nodes. For both modalities, we see a strong effect of modeling the cerebrospinal fluid and the white and gray matter. Depending on the source position, these effects can each be on the order of centimeters, making their modeling necessary for successful localization. Precise skull modeling mainly affected the EEG, up to a few millimeters, while both modalities profited from modeling white matter anisotropy to a smaller extent of 5-10 mm. The unit noise gain (or neural activity index) beamformer behaves similarly to the array gain beamformer when the noise strength is sufficiently high. Variance localization seems more robust against modeling errors than kurtosis.
International Nuclear Information System (INIS)
Ericsson, Lars O.; Holmen, Johan
2010-12-01
The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geo-scientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.
Development of a CAD Model Simplification Framework for Finite Element Analysis
2012-01-01
Brian Henry Russ, Master of Science, 2012. The thesis discusses the homotopy-preserving medial axis transform by Sud et al. [38] and an extension of the common MAT method, known as Θ-MAT, for polyhedral mesh models. A part's purpose (performance enhancement, ergonomic aspect, or part attachment method) can facilitate the classification of the part's criticality for analysis.
Directory of Open Access Journals (Sweden)
Yilang Shen
2018-01-01
Full Text Available Line simplification is an important component of map generalization. In recent years, algorithms for line simplification have been widely researched, and most of them are based on vector data. However, with the increasing development of computer vision, analysing and processing information from unstructured image data is both meaningful and challenging. Therefore, in this paper, we present a new line simplification approach based on image processing (BIP), which is specifically designed for raster data. First, the key corner points on a multi-scale image feature are detected and treated as candidate points. Then, to capture the essence of the shape within a given boundary using the fewest possible segments, the minimum-perimeter polygon (MPP) is calculated and the points of the MPP are defined as the approximate feature points. Finally, the points after simplification are selected from the candidate points by comparing the distances between the candidate points and the approximate feature points. An empirical example was used to test the applicability of the proposed method. The results showed that (1) when the key corner points are detected based on a multi-scale image feature, the local features of the line can be extracted and retained and the positional accuracy of the proposed method can be maintained well; and (2) by defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas as it is aligned with people's visual habits.
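The final selection step described above can be sketched in a few lines. This is our own reading of it, with invented 2D points: for each approximate (MPP) feature point, the nearest candidate corner point is retained:

```python
# Sketch of candidate selection by distance to approximate feature
# points. Candidates and feature points below are invented coordinates.

def select_points(candidates, feature_points):
    """For each feature point, keep the nearest candidate (no repeats)."""
    selected = []
    for fx, fy in feature_points:
        best = min(candidates,
                   key=lambda c: (c[0] - fx) ** 2 + (c[1] - fy) ** 2)
        if best not in selected:
            selected.append(best)
    return selected

pts = select_points([(0, 0), (5, 1), (9, 0), (4, 8)],
                    [(0, 0), (10, 0), (5, 7)])
```

The simplified line thus keeps only candidates that correspond to the essential shape captured by the MPP.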
DEFF Research Database (Denmark)
Andersen, Søren Bøgh; Santos, Ilmar F.; Fuerst, Axel
2012-01-01
What has been investigated is how model simplifications influence the accuracy...... on the calculated forces and torque coming from the drive, where each simplification made is described and justified. To further reduce the evaluation time, it is examined how coarse the mesh can be while still predicting the results with high accuracy. From this investigation, it is shown that there are certain...... interactions. This multiphysics model will later on be used for simulation and parameter optimization of a gearless mill drive with the use of Evolution Strategies, which necessitates the reduction in computation time.
Hybrid stochastic simplifications for multiscale gene networks
Directory of Open Access Journals (Sweden)
Debussche Arnaud
2009-09-01
Full Text Available Abstract Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results, which are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes, and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multiscale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
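The discrete/continuous split at the heart of hybrid simplification can be illustrated with a toy gene model. This sketch is our own construction, not the paper's algorithm: the gene state switches ON/OFF as a discrete Markov jump process, while the protein level evolves continuously (the ODE limit of fast production and degradation):

```python
import random, math

# Toy hybrid simulation: discrete gene switching, continuous protein.
# Rates, time step, and seed are invented for illustration.

def hybrid_gene(t_end, k_on=1.0, k_off=1.0, prod=10.0, deg=1.0,
                dt=0.01, seed=1):
    rng = random.Random(seed)
    gene_on, protein, t = 1, 0.0, 0.0
    while t < t_end:
        # Discrete part: exponential switching of the gene state.
        rate = k_off if gene_on else k_on
        if rng.random() < 1.0 - math.exp(-rate * dt):
            gene_on = 1 - gene_on
        # Continuous part: Euler step for the protein concentration.
        protein += dt * (prod * gene_on - deg * protein)
        t += dt
    return protein

p = hybrid_gene(50.0)  # fluctuates between 0 and prod/deg
```

An exact jump simulation would also draw every production and degradation event discretely; treating those fast reactions as continuous is what removes the bulk of the simulated events.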
Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?
Sprintsin, M.; Chen, J. M.
2009-05-01
The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have brought the assumptions behind this approach into question, showing that the big-leaf approximation is inadequate for simulating canopy photosynthesis because of the additional leaf-internal control on carbon assimilation, the non-linear response of photosynthesis to leaf nitrogen and absorbed light, and changes in the leaf microenvironment with canopy depth. To avoid this problem, a sunlit/shaded leaf separation approach, in which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at the local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); the Lost Creek shrubland site (Wisconsin); and the Mer Bleue peatland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to the canopy level, with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf-level parameters scaled up to the canopy level by means of the Leaf Area Index. The influence of sunlit
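Why the big-leaf simplification overestimates canopy photosynthesis can be seen with a minimal sketch (our own, not BEPS): a concave light-response curve applied once to the canopy-mean light exceeds the same curve averaged over sunlit and shaded fractions, by Jensen's inequality. All parameter values are invented:

```python
# Rectangular-hyperbola light response applied to the whole canopy
# ("big-leaf") versus separately to sunlit and shaded fractions
# ("two-leaf"). Concavity makes big_leaf >= two_leaf.

def leaf_photo(par, a_max=20.0, alpha=0.05):
    """Assimilation vs absorbed PAR (rectangular hyperbola)."""
    return a_max * alpha * par / (a_max + alpha * par)

def big_leaf(par_sun, par_shade, f_sun):
    mean_par = f_sun * par_sun + (1 - f_sun) * par_shade
    return leaf_photo(mean_par)

def two_leaf(par_sun, par_shade, f_sun):
    return (f_sun * leaf_photo(par_sun)
            + (1 - f_sun) * leaf_photo(par_shade))

gpp_big = big_leaf(1500.0, 100.0, 0.4)
gpp_two = two_leaf(1500.0, 100.0, 0.4)  # smaller: sunlit leaves saturate
```

The sunlit fraction sits on the saturated part of the curve, so lumping it with shaded leaves under a mean light level inflates the canopy total.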
Schmalzl, Jörg; Loddoch, Alexander
2003-09-01
We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of the triangle edges and the sum of the angles the triangle forms with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady-state flow in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
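The quadric error metric mentioned above has a compact core: each vertex accumulates a 4x4 quadric Q as the sum of outer products of its incident triangle planes [a, b, c, d] (with ax + by + cz + d = 0), and the cost of moving or merging the vertex to a point v is v^T Q v. A minimal sketch (our own, with hand-picked planes):

```python
# Quadric error metric core: plane quadrics, accumulation, and cost.

def plane_quadric(a, b, c, d):
    """Q = p p^T for the plane p = [a, b, c, d]."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, x, y, z):
    """v^T Q v for the homogeneous point v = (x, y, z, 1)."""
    v = (x, y, z, 1.0)
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))

# A vertex lying on the planes z = 0 and x = 0: any target on the line
# x = z = 0 costs nothing; moving off those planes is penalized.
q = add_quadrics(plane_quadric(0, 0, 1, 0), plane_quadric(1, 0, 0, 0))
cost_on = quadric_error(q, 0.0, 5.0, 0.0)
cost_off = quadric_error(q, 1.0, 0.0, 2.0)
```

A simplifier repeatedly collapses the edge whose merged quadric yields the smallest such cost, which is what keeps the reduced mesh close to the original surface.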
A model to predict element redistribution in unsaturated soil: Its simplification and validation
International Nuclear Information System (INIS)
Sheppard, M.I.; Stephens, M.E.; Davis, P.A.; Wojciechowski, L.
1991-01-01
A research model has been developed to predict the long-term fate of contaminants entering unsaturated soil at the surface, through irrigation or atmospheric deposition, and/or at the water table, through groundwater. The model, called SCEMR1 (Soil Chemical Exchange and Migration of Radionuclides, Version 1), uses Darcy's law to model water movement and the soil solid/liquid partition coefficient, Kd, to model chemical exchange. SCEMR1 has been validated extensively against controlled field experiments with several soils, aeration statuses, and the effects of plants. These validation results show that the model is robust and performs well. Sensitivity analyses identified soil Kd, annual effective precipitation, soil type, and soil depth as the four most important model parameters. SCEMR1 consumes too much computer time for incorporation into a probabilistic assessment code. Therefore, we have used SCEMR1 output to derive a simple assessment model. The assessment model reflects the complexity of its parent code and provides a more realistic description of contaminant transport in soils than would a compartment model. Comparison of the performance of the SCEMR1 research model, the simple SCEMR1 assessment model, and the TERRA compartment model on a four-year soil-core experiment shows that the SCEMR1 assessment model generally provides conservative soil concentrations. (15 refs., 3 figs.)
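The role of the partition coefficient Kd can be illustrated with the standard retardation-factor picture (a textbook relation, not SCEMR1 code): sorption slows the contaminant front by R = 1 + rho_b * Kd / theta. All numbers below are illustrative:

```python
# Kd-based retardation of an advecting contaminant front.
# rho_b: bulk density (g/cm3), Kd: cm3/g, theta: volumetric water content.

def retardation(kd, bulk_density=1.5, theta=0.3):
    """Retardation factor R = 1 + rho_b * Kd / theta."""
    return 1.0 + bulk_density * kd / theta

def front_depth(darcy_velocity, t, kd, theta=0.3):
    """Depth of the contaminant front after time t (consistent units)."""
    pore_velocity = darcy_velocity / theta
    return pore_velocity * t / retardation(kd, theta=theta)

depth = front_depth(0.3, 100.0, kd=1.0)  # sorbed species moves slowly
```

This is why Kd ranked among the most sensitive parameters: it divides the migration velocity directly.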
Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid
Reniers, Dennie; Telea, Alexandru
2005-01-01
We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large
Extreme simplification and rendering of point sets using algebraic multigrid
Reniers, Dennie; Telea, Alexandru
2009-01-01
We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one
International Nuclear Information System (INIS)
Zobor, E.
1978-12-01
The approach chosen is based on hierarchical control systems theory; however, the fundamentals of other approaches, such as systems simplification and systems partitioning, are briefly summarized to introduce the problems associated with the control of large-scale systems. The concept of a hierarchical control system acting in a broad variety of operating conditions is developed, and some practical extensions to the hierarchical control system approach are given, e.g., subsystems measured and controlled at different rates, control of the partial state vector, and coordination for autoregressive models. Throughout the work, the WWR-SM research reactor of the Institute has been taken as a guiding example, and simple methods for identifying the model parameters from a reactor start-up are discussed. Using the PROHYS digital simulation program, elaborated in the course of the present research, detailed simulation studies were carried out to investigate the performance of a control system based on the concept and algorithms developed. In order to provide real application evidence, a short description is finally given of the closed-loop computer control system installed, in the framework of a project supported by the Hungarian State Office for Technical Development, at the WWR-SM research reactor, where the results obtained in the present IAEA Research Contract were successfully applied and delivered the expected high performance.
Directory of Open Access Journals (Sweden)
Lei Tang
2012-09-01
Full Text Available This paper describes some details and procedural steps of the equivalent resistance (E-R) method for simplifying the pier group of the Sutong Bridge, which is located on the tidal reach of the lower Yangtze River in Jiangsu Province. Using a two-dimensional tidal current numerical model, three different models were established: a model without bridge piers, an original bridge pier model, and a simplified bridge pier model. The differences in hydrodynamic parameters, including water level, velocity, and diversion ratio, as well as in time efficiency between these three models are discussed in detail. The results show that simplifying the pier group using the E-R method influences the water level and velocity near the piers, but has no influence on the diversion ratio of each cross-section of the Xuliujing reach located in the lower Yangtze River. Furthermore, the simplified bridge pier model takes half the calculation time that the original bridge pier model needs. Thus, it is concluded that the E-R method can be used to simplify bridge piers in tidal river section modeling reasonably and efficiently.
DEFF Research Database (Denmark)
Cerda Varela, Alejandro Javier; Fillon, Michel; Santos, Ilmar
2012-01-01
The relevance of calculating accurately the oil film temperature build up when modeling tilting-pad journal bearings is well established within the literature on the subject. This work studies the feasibility of using a thermal model for the tilting-pad journal bearing which includes a simplified...
Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.
2018-02-01
A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface and L is the Obukhov length corresponding to Ri and η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, though less severely in unstable than in stable cases.
Xing, Xuguang; Ma, Xiaoyi
2018-01-01
The maximum upward flux (Emax) is a control condition for the development of groundwater evaporation models and can be predicted with the Gardner model. A high-precision Emax prediction helps to improve irrigation practice. When using the Gardner model, it has been widely accepted to ignore parameter b (a soil-water constant) for model simplification. However, this may affect the prediction accuracy; therefore, how parameter b affects Emax requires detailed investigation. An indoor one-dimensional soil-column evaporation experiment was conducted to observe Emax in the presence of a water table at a depth of 50 cm. The study consisted of 13 treatments based on four solutes and three concentrations in groundwater: KCl, NaCl, CaCl2, and MgCl2, at concentrations of 5, 30, and 100 g/L (salty groundwater); distilled water was used as a control treatment. Results indicated that, for the experimental homogeneous loam, the average Emax for the treatments supplied by salty groundwater was larger than that supplied by distilled water. Furthermore, during prediction of the Gardner-model-based Emax, ignoring b always led to an overestimate and including b to an underestimate, compared to the observed Emax. However, the maximum upward flux calculated including b (i.e., Eb,max) had higher accuracy than that ignoring b. Moreover, the impact of ignoring b on Emax gradually weakened with increasing b. This research helps to reveal the groundwater evaporation mechanism.
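Why ignoring b overestimates the flux can be seen from a Gardner-type conductivity function of the form K(h) = a / (h^n + b): dropping b from the denominator can only increase K(h). A minimal sketch with invented parameter values (not fitted to the soils of this study):

```python
# Effect of ignoring the soil-water constant b in a Gardner-type
# unsaturated conductivity function. All values are illustrative.

def k_gardner(h, a=100.0, n=2.0, b=50.0):
    """Conductivity with b retained: K(h) = a / (h**n + b)."""
    return a / (h**n + b)

def k_simplified(h, a=100.0, n=2.0):
    """Common simplification: b ignored, K(h) = a / h**n."""
    return a / h**n

# The simplified form always overestimates K(h), consistent with the
# overestimated Emax reported in the abstract.
ratio = k_simplified(10.0) / k_gardner(10.0)  # (100 + 50) / 100 = 1.5
```

The sketch also shows the weakening effect noted in the abstract in reverse: the relative error grows with b at a given suction h, so soils with small b tolerate the simplification better.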
Impediments to predicting site response: Seismic property estimation and modeling simplifications
Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Guzina, B.B.
2009-01-01
We compare estimates of the empirical transfer function (ETF) to the plane SH-wave theoretical transfer function (TTF) within a laterally constant medium for invasive and noninvasive estimates of the seismic shear-wave slownesses at 13 Kiban-Kyoshin network stations throughout Japan. The difference between the ETF and either of the TTFs is substantially larger than the difference between the two TTFs computed from different estimates of the seismic properties. We show that the plane SH-wave TTF through a laterally homogeneous medium at vertical incidence inadequately models observed amplifications at most sites for both slowness estimates, obtained via downhole measurements and the spectral analysis of surface waves. Strategies to improve the predictions can be separated into two broad categories: improving the measurement of soil properties and improving the theory that maps the 1D soil profile onto spectral amplification. Using an example site where the 1D plane SH-wave formulation poorly predicts the ETF, we find a more satisfactory fit to the ETF by modeling the full wavefield and incorporating spatially correlated variability of the seismic properties. We conclude that our ability to model the observed site response transfer function is limited largely by the assumptions of the theoretical formulation rather than the uncertainty of the soil property estimates.
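The plane SH-wave theoretical transfer function discussed above has a well-known closed form in the simplest case of a single damped soil layer over rigid rock: TF(f) = 1 / cos(2*pi*f*H / Vs*), with complex shear velocity Vs* = Vs*(1 + i*damping). This sketch is the textbook single-layer result, not the multi-layer formulation of the paper, and the soil properties are invented:

```python
import cmath, math

# 1D SH-wave amplification for one damped layer over a rigid base.

def transfer_function(f, thickness=30.0, vs=200.0, damping=0.05):
    """|TF| at frequency f (Hz) for layer thickness (m), shear-wave
    velocity vs (m/s), and hysteretic damping ratio."""
    vs_complex = vs * complex(1.0, damping)
    return abs(1.0 / cmath.cos(2 * math.pi * f * thickness / vs_complex))

f0 = 200.0 / (4 * 30.0)       # fundamental frequency Vs/(4H), ~1.67 Hz
peak = transfer_function(f0)  # resonant amplification, well above 1
```

Comparing such a TTF against an empirical transfer function at the resonant peaks is essentially the exercise the paper performs at the KiK-net stations, with layered profiles in place of this single layer.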
DEFF Research Database (Denmark)
Andersen, Morten Thøtt; Hindhede, Dennis; Lauridsen, Jimmy
2015-01-01
As offshore wind turbines move towards deeper and more distant sites, the concept of floating foundations is a potentially technically and economically attractive alternative to the traditional fixed foundations. Unlike the well-studied monopile, the geometry of a floating foundation is complex and..., thereby, increases the difficulty in wave force determination due to limitations of the commonly used simplified methods. This paper deals with a physical model test of the hydrodynamic excitation force in surge on a fixed three-columned structure intended as a floating foundation for offshore wind...
Energy Technology Data Exchange (ETDEWEB)
Ericsson, Lars O. (Lars O. Ericsson Consulting AB (Sweden)); Holmen, Johan (Golder Associates (Sweden))
2010-12-15
The primary aim of this report is to present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. The implementation was based on geo-scientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.
Fossilization as Simplification?
Selinker, Larry
This article examines the phenomenon of fossilization in second language (SL) learning and instruction, discussing this process as a form of simplification. Fossilization occurs when particular linguistic forms become permanently established in the interlanguage of SL learners in a form that is deviant from the target language norm and that…
Extreme simplification and rendering of point sets using algebraic multigrid
Reniers, Dennie; Telea, Alexandru
2009-01-01
We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large number of primitives is needed to approximate even moderately simple shapes. However, often one needs to render a simplified version of the model using only a few primitives, thus to trade accurac...
Directory of Open Access Journals (Sweden)
Pol Coppin
2009-11-01
Full Text Available Multilayer canopy representations are the most common structural stand representations due to their simplicity. Implementation of recent advances in technology has allowed scientists to simulate geometrically explicit forest canopies. The effect of simplified representations of tree architecture (i.e., multilayer representations) of four Fagus sylvatica L. stands, each with different LAI, on the light absorption estimates was assessed in comparison with explicit 3D geometrical stands. The absorbed photosynthetic radiation at stand level was calculated. Subsequently, each geometrically explicit 3D stand was compared with three multilayer models representing horizontal, uniform, and planophile leaf angle distributions. The 3D stands were created either from in situ measured trees or from modelled trees generated with the AMAP plant growth software. The Physically Based Ray Tracer (PBRT) algorithm was used to simulate the irradiance absorbance of the detailed 3D architecture stands, while for the three multilayer representations, the probability of light interception was simulated by applying the Beer–Lambert law. The irradiance inside the canopies was characterized as direct, diffuse and scattered irradiance. The irradiance absorbance of the stands was computed for eight angular sun configurations ranging from 10° (near nadir) up to 80° sun zenith angles. Furthermore, a leaf stratification (the number and angular distribution of leaves per LAI layer inside a canopy) analysis between the 3D stands and the multilayer representations was performed, indicating the amount of irradiance each leaf absorbs along with the percentage of sunny and shadow leaves inside the canopy. The results reveal that a multilayer representation of a stand greatly overestimated the absorbed irradiance in an open canopy, while it provided a better approximation in the case of a closed canopy. Moreover, the actual stratification
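The multilayer (turbid-medium) treatment referenced above reduces to the Beer–Lambert law. A minimal sketch, with assumed textbook extinction coefficients (horizontal leaves k = 1, spherical leaf-angle distribution k = 0.5/cos θ) rather than the paper's exact parameterization:

```python
import math

def extinction_k(zenith_deg, leaf_angle="spherical"):
    """Illustrative beam extinction coefficients (assumed values):
    k = 1 for horizontal leaves, k = 0.5/cos(theta) for a spherical
    leaf-angle distribution at sun zenith angle theta."""
    if leaf_angle == "horizontal":
        return 1.0
    return 0.5 / math.cos(math.radians(zenith_deg))

def intercepted_fraction(lai, k):
    """Beer-Lambert: fraction of the incoming beam intercepted by a
    homogeneous (multilayer) canopy with leaf area index `lai`."""
    return 1.0 - math.exp(-k * lai)
```

At a 60° zenith angle the spherical k equals 1 and matches horizontal leaves; interception saturates with LAI, one reason the multilayer model behaves better for closed canopies than for open ones.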
Robust simplifications of multiscale biochemical networks
Directory of Open Access Journals (Sweden)
Zinovyev Andrei
2008-10-01
Full Text Available Abstract Background Cellular processes such as metabolism, decision making in development and differentiation, signalling, etc., can be modeled as large networks of biochemical reactions. In order to understand the functioning of these systems, there is a strong need for general model reduction techniques allowing one to simplify models without losing their main properties. In systems biology we also need to compare models or to couple them as parts of larger models. In these situations reduction to a common level of complexity is needed. Results We propose a systematic treatment of model reduction of multiscale biochemical networks. First, we consider linear kinetic models, which appear as "pseudo-monomolecular" subsystems of multiscale nonlinear reaction networks. For such linear models, we propose a reduction algorithm which is based on a generalized theory of the limiting step that we have developed in [1]. Second, for non-linear systems we develop an algorithm based on dominant solutions of quasi-stationarity equations. For oscillating systems, quasi-stationarity and averaging are combined to eliminate time scales much faster and much slower than the period of the oscillations. In all cases, we obtain robust simplifications and also identify the critical parameters of the model. The methods are demonstrated for simple examples and for a more complex model of the NF-κB pathway. Conclusion Our approach allows critical parameter identification and produces hierarchies of models. Hierarchical modeling is important in "middle-out" approaches when there is a need to zoom in and out across several levels of complexity. Critical parameter identification is an important issue in systems biology with potential applications to biological control and therapeutics. Our approach also deals naturally with the presence of multiple time scales, which is a general property of systems biology models.
Meng, Qingen; Liu, Feng; Fisher, John; Jin, Zhongmin
2013-05-01
It is important to study the lubrication mechanism of metal-on-metal hip resurfacing prostheses in order to understand their overall tribological performance and thereby minimize wear particle generation. Previous elastohydrodynamic lubrication studies of metal-on-metal hip resurfacing prostheses neglected the effects of the orientations of the cup and head. Simplified pelvic and femoral bone models were also adopted in previous studies. These simplifications may lead to unrealistic predictions. For the first time, an elastohydrodynamic lubrication model was developed and solved for a full metal-on-metal hip resurfacing arthroplasty. The effects of the orientations of the components and of realistic bone models on the lubrication performance of the metal-on-metal hip resurfacing prosthesis were investigated by comparing the full model with simplified models. It was found that the orientation of the head played a very important role in the prediction of the pressure distributions and film profiles of the metal-on-metal hip resurfacing prosthesis. The inclination of the hemispherical cup up to 45° had no appreciable effect on the lubrication performance of the metal-on-metal hip resurfacing prosthesis. Moreover, the combined effect of the material properties and structures of the bones was negligible. Future studies should focus on higher inclination angles, smaller coverage angles and microseparation related to the occurrence of edge loading.
Simplification: Theory and Application. Anthology Series 31.
Tickoo, M. L.
This collection of 14 articles looks at the issues in theory and application that arise in the use of simplification in language pedagogy. Articles include the following: (1) "Simplification in Pedagogy" (Christopher Brumfit); (2) "Simplification" (H. V. George); (3) "Fossilization as Simplification?" (Larry Selinker);…
Directory of Open Access Journals (Sweden)
Ekaterina Auer
2017-12-01
Full Text Available In this paper, we take a look at the analysis and parameter identification for control-oriented, dynamic models for the thermal subsystem of solid oxide fuel cells (SOFC) from the systematized point of view of verification and validation (V&V). First, we give a possible classification of models according to their verification degree, which depends, for example, on the kind of arithmetic used for both formulation and simulation. Typical SOFC models, consisting of several coupled differential equations for gas preheaters and the temperature distribution in the stack module, do not have analytical solutions because of spatial nonlinearity. Therefore, in the next part of the paper, we describe in detail two possible ways to simplify such models so that the underlying differential equations can be solved analytically while still being sufficiently accurate to serve as the basis for control synthesis. The simplifying assumption is to approximate the heat capacities of the gases by zero-order polynomials (or first-order polynomials, respectively) in the temperature. In the last, application-oriented part of the paper, we identify the parameters of these models as well as compare their performance and their ability to reflect reality with the corresponding characteristics of models in which the heat capacities are represented by quadratic polynomials (the usual case). For this purpose, the framework UniVerMeC (Unified Framework for Verified GeoMetric Computations) is used, which allows us to employ different kinds of arithmetics including the interval one. This latter possibility ensures a high level of reliability of the simulations and of the subsequent validation. Besides, it helps to take into account bounded uncertainty in measurements.
International Nuclear Information System (INIS)
Schuster, R.; Schuster, S.; Holzhuetter, H.-G.
1992-01-01
A method for simplifying the mathematical models describing the dynamics of tracers (e.g. ¹³C, ³¹P, ¹⁴C, as used in NMR studies or radioactive tracer experiments) in (bio-)chemical reaction systems is presented. This method is appropriate in the cases where the system includes reactions, the rates of which differ by several orders of magnitude. The basic idea is to adapt the rapid-equilibrium approximation to tracer systems. It is shown with the aid of the Perron-Frobenius theorem that for tracer systems, the conditions for applicability of this approximation are satisfied whenever some reactions are near equilibrium. It turns out that the specific enrichments of all of the labelled atoms that are connected by fast reversible reactions can be grouped together as 'pool variables'. The reduced system contains fewer parameters and can, thus, be fitted more easily to experimental data. Moreover, the method can be employed for identifying non-equilibrium and near-equilibrium reactions from experimentally measured specific enrichments of tracer. The reduction algorithm is illustrated by studying a model of the distribution of ¹³C tracers in the pentose phosphate pathway. (author)
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...
Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang
2017-05-01
Conceptual models often suffer from over-parameterization because only limited data are available for calibration. This leads to parameter non-uniqueness and equifinality, which may introduce considerable uncertainty into the simulation results. How to find an appropriate model structure, supported by the available data, to simulate a catchment is still a big challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological processes and an appropriate model structure for a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model built from the perception of the catchment, which is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these different models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the appropriate model structure for simulating the discharge of this spring is much simpler than the actual aquifer structure and the hydrological-process understanding from the fieldwork investigation. A simple linear reservoir with two different outlets is enough to simulate the spring discharge. The detailed runoff processes in the catchment are not needed in the conceptual model to simulate the spring discharge. A more complex model would require additional data to avoid serious deterioration of the model predictions.
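The "simple linear reservoir with two different outlets" conclusion above can be sketched as a toy model; all parameter names and values here are hypothetical, not calibrated values for the Guilin spring.

```python
def simulate_spring(recharge, k_base, s_thresh, k_over, s0=0.0, dt=1.0):
    """Single storage S with a permanent linear outlet Q1 = k_base*S and a
    threshold-activated upper outlet Q2 = k_over*max(S - s_thresh, 0),
    integrated with explicit Euler. Returns the discharge series Q1 + Q2."""
    s, out = s0, []
    for r in recharge:
        q1 = k_base * s
        q2 = k_over * max(s - s_thresh, 0.0)  # threshold-driven component
        s += dt * (r - q1 - q2)
        out.append(q1 + q2)
    return out
```

The upper outlet only contributes once storage exceeds the threshold, reproducing the threshold-driven behavior the calibration identified; after recharge stops, discharge recedes exponentially.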
Tsallis Entropy for Geometry Simplification
Directory of Open Access Journals (Sweden)
Miguel Chover
2011-09-01
Full Text Available This paper presents a study and a comparison of the use of different information-theoretic measures for polygonal mesh simplification. Generalized measures from Information Theory such as Havrda–Charvát–Tsallis entropy and mutual information have been applied. These measures have been used in the error metric of a surface simplification algorithm. We demonstrate that these measures are useful for simplifying three-dimensional polygonal meshes. We have also compared these metrics with the error metrics used in a geometry-based method and in an image-driven method. Quantitative results are presented in the comparison using the root-mean-square error (RMSE).
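The Havrda–Charvát–Tsallis entropy used in the error metric above has a simple closed form for a discrete distribution; a minimal sketch (the simplification algorithm itself is not reproduced here):

```python
import math

def tsallis_entropy(p, q):
    """Havrda-Charvat-Tsallis entropy of a discrete distribution p:
        H_q(p) = (1 - sum_i p_i**q) / (q - 1),
    which tends to the Shannon entropy (in nats) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For a uniform distribution over four outcomes, H_2 = (1 - 4·(1/16))/1 = 0.75, while the q → 1 limit recovers ln 4.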
2D Vector Field Simplification Based on Robustness
Skraba, Primoz
2014-03-01
Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.
Meehan, Timothy D.; Gratton, Claudio
2015-11-01
During 2007, counties across the Midwestern US with relatively high levels of landscape simplification (i.e., widespread replacement of seminatural habitats with cultivated crops) had relatively high crop-pest abundances which, in turn, were associated with relatively high insecticide application. These results suggested a positive relationship between landscape simplification and insecticide use, mediated by landscape effects on crop pests or their natural enemies. A follow-up study, in the same region but using different statistical methods, explored the relationship between landscape simplification and insecticide use between 1987 and 2007, and concluded that the relationship varied substantially in sign and strength across years. Here, we explore this relationship from 1997 through 2012, using a single dataset and two different analytical approaches. We demonstrate that, when using ordinary least squares (OLS) regression, the relationship between landscape simplification and insecticide use is, indeed, quite variable over time. However, the residuals from OLS models show strong spatial autocorrelation, indicating spatial structure in the data not accounted for by explanatory variables, and violating a standard assumption of OLS. When modeled using spatial regression techniques, relationships between landscape simplification and insecticide use were consistently positive between 1997 and 2012, and model fits were dramatically improved. We argue that spatial regression methods are more appropriate for these data, and conclude that there remains compelling correlative support for a link between landscape simplification and insecticide use in the Midwestern US. We discuss the limitations of inference from this and related studies, and suggest improved data collection campaigns for better understanding links between landscape structure, crop-pest pressure, and pest-management practices.
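The spatial-autocorrelation check that motivates the switch from OLS to spatial regression above can be sketched with Moran's I computed on model residuals. This is a minimal version with binary distance-band weights, an assumption of this sketch rather than the study's actual weight matrix.

```python
import numpy as np

def morans_i(residuals, coords, band=1.0):
    """Moran's I of `residuals` at locations `coords` (n x 2), using
    binary weights w_ij = 1 when 0 < dist(i, j) <= band:
        I = (n / sum(w)) * sum_ij w_ij z_i z_j / sum_i z_i**2,  z = x - mean(x).
    Values near 0 suggest no spatial structure; clearly positive values
    indicate clustering of like residuals, violating OLS assumptions."""
    z = np.asarray(residuals, float)
    z = z - z.mean()
    pts = np.asarray(coords, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    w = ((d > 0.0) & (d <= band)).astype(float)
    return len(z) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()
```

A smooth east-west trend on a grid yields strongly positive I, while a checkerboard pattern yields negative I.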
Equivalent Simplification Method of Micro-Grid
Cai Changchun; Cao Xiangqin
2013-01-01
The paper concentrates on an equivalent simplification method for connecting a micro-grid system to the distribution network. The equivalent simplification method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation, and wind generation are reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built and three-phase and single-phase grounded faults are per...
On Simplification of Database Integrity Constraints
DEFF Research Database (Denmark)
Christiansen, Henning; Martinenghi, Davide
2006-01-01
Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact, and the present paper is an attempt to fill this gap. On the theoretical side, a general characterization is introduced of the problem of simplification of integrity constraints and a natural definition is given of what it means for a simplification procedure to be ideal. We prove that ideality of simplification...
Simplification of integrity constraints for data integration
DEFF Research Database (Denmark)
Christiansen, Henning; Martinenghi, Davide
2004-01-01
When two or more databases are combined into a global one, integrity may be violated even when each database is consistent with its own local integrity constraints. Efficient methods for checking global integrity in data integration systems are called for: answers to queries can then be trusted, because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed, perhaps together with given a priori constraints on the combination, so that only a minimal number of tuples needs to be considered. Combination from scratch, integration of a new source, and absorption of local updates are dealt with for both the local-as-view and global-as-view approaches to data integration.
WORK SIMPLIFICATION FOR PRODUCTIVITY IMPROVEMENT A ...
African Journals Online (AJOL)
current technological findings, the state of the art in work simplification concepts, theories, techniques and tools in general, and its application and the results of implementing the techniques in metal industries like the Kaliti Metal Products Factory.
Complexity and simplification in understanding recruitment in benthic populations
Pineda, Jesús
2008-11-13
Research on complex systems and problems, entities with many dependencies, is often reductionist. The reductionist approach splits systems or problems into different components, and then addresses these components one by one. This approach has been used in the study of recruitment and population dynamics of marine benthic (bottom-dwelling) species. Another approach examines benthic population dynamics by looking at a small set of processes. This approach is statistical or model-oriented. Simplified approaches identify "macroecological" patterns or attempt to identify and model the essential, "first-order" elements of the system. The complexity of the recruitment and population dynamics problems stems from the number of processes that can potentially influence benthic populations, including (1) larval pool dynamics, (2) larval transport, (3) settlement, and (4) post-settlement biotic and abiotic processes, as well as larval production. Moreover, these processes are non-linear, some interact, and they may operate on disparate scales. This contribution discusses reductionist and simplified approaches to studying benthic recruitment and population dynamics of bottom-dwelling marine invertebrates. We first address complexity in two processes known to influence recruitment: larval transport and post-settlement survival to reproduction, and discuss the difficulty in understanding recruitment by looking at relevant processes individually and in isolation. We then address the simplified approach, which reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often results in mistaken inferences, and processes and parameters used in some models can bias our view of processes influencing recruitment. We conclude with a discussion on how to reconcile complex and simplified approaches. Although it
Minimum Entropy Rate Simplification of Stochastic Processes.
Henter, Gustav Eje; Kleijn, W Bastiaan
2016-02-23
This document contains supplemental material for the IEEE Transactions on Pattern Analysis and Machine Intelligence article "Minimum Entropy Rate Simplification of Stochastic Processes." The supplement is divided into three appendices: the first on MERS for Gaussian processes, and the remaining two on, respectively, the theory and the experimental results of MERS for Markov chains.
Identification of Nonlinear Systems: Volterra Series Simplification
Directory of Open Access Journals (Sweden)
A. Novák
2007-01-01
Full Text Available Traditional measurements of multimedia systems, e.g. the linear impulse response and transfer function, are sufficient but not faultless. These methods treat the system as purely linear and disregard the nonlinearities that are usually present in real systems. One of the ways to describe and analyze a nonlinear system is by using a Volterra series representation. However, this representation uses an enormous number of coefficients. In this work a simplification of this method is proposed and an experiment with an audio amplifier is shown.
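One common way to tame the kernel count is to keep only low orders and restrict the second-order kernel to its diagonal. The sketch below shows that restricted form as an illustration of why simplification helps; it is an assumed simplification, not necessarily the one proposed in the article.

```python
import numpy as np

def volterra2_diagonal(x, h1, h2):
    """Discrete Volterra model truncated at second order, with the
    second-order kernel reduced to its diagonal so h2[m1, m2] collapses
    to a 1-D sequence h2[m]:
        y[n] = sum_m h1[m]*x[n-m] + sum_m h2[m]*x[n-m]**2
    This keeps len(h1) + len(h2) coefficients instead of
    len(h1) + len(h2)**2 for the full second-order kernel."""
    x = np.asarray(x, float)
    y_lin = np.convolve(x, h1)[: len(x)]
    y_quad = np.convolve(x ** 2, h2)[: len(x)]
    return y_lin + y_quad
```

With h2 = 0 the model reduces exactly to ordinary linear convolution, i.e. the traditional impulse-response measurement.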
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1992-07-01
This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and proposes an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.
Modeling prosody: Different approaches
Carmichael, Lesley M.
2002-11-01
Prosody pervades all aspects of a speech signal, both in terms of raw acoustic outcomes and linguistically meaningful units, from the phoneme to the discourse unit. It is carried in the suprasegmental features of fundamental frequency, loudness, and duration. Several models have been developed to account for the way prosody organizes speech, and they vary widely in terms of their theoretical assumptions, organizational primitives, actual procedures of application to speech, and intended use (e.g., to generate speech from text vs. to model the prosodic phonology of a language). In many cases, these models overtly contradict one another with regard to their fundamental premises or their identification of the perceptible objects of linguistic prosody. These competing models are directly compared. Each model is applied to the same speech samples. This parallel analysis allows for a critical inspection of each model and its efficacy in assessing the suprasegmental behavior of the speech. The analyses illustrate how different approaches are better equipped to account for different aspects of prosody. Viewing the models and their successes from an objective perspective allows for creative possibilities in terms of combining strengths from models which might otherwise be considered fundamentally incompatible.
Structural simplification of chemical reaction networks in partial steady states.
Madelaine, Guillaume; Lhoussaine, Cédric; Niehren, Joachim; Tonello, Elisa
2016-11-01
We study the structural simplification of chemical reaction networks with partial steady state semantics assuming that the concentrations of some but not all species are constant. We present a simplification rule that can eliminate intermediate species that are in partial steady state, while preserving the dynamics of all other species. Our simplification rule can be applied to general reaction networks with some but few restrictions on the possible kinetic laws. We can also simplify reaction networks subject to conservation laws. We prove that our simplification rule is correct when applied to a module of a reaction network, as long as the partial steady state is assumed with respect to the complete network. Michaelis-Menten's simplification rule for enzymatic reactions falls out as a special case. We have implemented an algorithm that applies our simplification rules repeatedly and applied it to reaction networks from systems biology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
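Michaelis–Menten as the special case mentioned above: eliminating the intermediate ES of E + S ⇌ ES → E + P at partial steady state yields ES = E_tot·S/(K_M + S) with K_M = (k₋₁ + k₂)/k₁. A numerical check with hypothetical rate constants (explicit Euler, so dt must stay small relative to the fastest time scale):

```python
def simulate_full(s0, e_tot, k1, km1, k2, dt=1e-4, t_end=1.0):
    """Mass-action ODEs for E + S <-> ES -> E + P, explicit Euler.
    Returns (S, ES, P) at t_end; S + ES + P is conserved."""
    s, es, p = s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = e_tot - es                    # free enzyme
        v_bind = k1 * e * s - km1 * es    # net binding rate
        v_cat = k2 * es                   # catalytic rate
        s, es, p = s - dt * v_bind, es + dt * (v_bind - v_cat), p + dt * v_cat
    return s, es, p

def es_qss(s, e_tot, k1, km1, k2):
    """Partial-steady-state (Michaelis-Menten) value of ES at substrate s."""
    km = (km1 + k2) / k1
    return e_tot * s / (km + s)
```

With scarce enzyme (e_tot much smaller than K_M + S), the intermediate tracks its quasi-steady value closely, which is exactly the regime where eliminating ES preserves the dynamics of S and P.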
Sutural simplification in Physodoceratinae (Aspidoceratidae, Ammonitina
Directory of Open Access Journals (Sweden)
Checa, A.
1987-08-01
Full Text Available The structural analysis of the shell-septum interrelationship in some Jurassic ammonites allows us to conclude that the sutural simplifications that occurred throughout the phylogeny were caused by alterations in the external morphology of the shell. In the case of the Physodoceratinae, the simplification observed in the morphology of the septal suture may have a double origin. First, an increase in the size of the periumbilical tubercles may determine a shallowing of the sutural elements and a shortening of saddle and lobe frilling. In other cases, the shallowing is determined by a decrease in the whorl expansion rate, with no apparent shortening of the secondary branching being observed.
Impact of pipes networks simplification on water hammer phenomenon
Indian Academy of Sciences (India)
Simplification of water supply networks is an indispensable design step to make the original network easier to analyse. The impact of network simplification on the water hammer phenomenon is investigated. This study uses a two-loop network with different diameters, thicknesses, and roughness coefficients. The network is ...
Quantum copying and simplification of the quantum Fourier transform
Niu, Chi-Sheng
Theoretical studies of quantum computation and quantum information theory are presented in this thesis. Three topics are considered: simplification of the quantum Fourier transform in Shor's algorithm, optimal eavesdropping in the BB84 quantum cryptographic protocol, and quantum copying of one qubit. The quantum Fourier transform preceding the final measurement in Shor's algorithm is simplified by replacing a network of quantum gates with one that has fewer and simpler gates controlled by classical signals. This simplification results from an analysis of the network using the consistent history approach to quantum mechanics. The optimal amount of information which an eavesdropper can gain, for a given level of noise in the communication channel, is worked out for the BB84 quantum cryptographic protocol. The optimal eavesdropping strategy is expressed in terms of various quantum networks. A consistent history analysis of these networks using two conjugate quantum bases shows how the information gain in one basis influences the noise level in the conjugate basis. The no-cloning property of quantum systems, which is the physics behind quantum cryptography, is studied by considering copying machines that generate two imperfect copies of one qubit. The best qualities these copies can have are worked out with the help of the Bloch sphere representation for one qubit, and a quantum network is worked out for an optimal copying machine. If the copying machine does not have additional ancillary qubits, the copying process can be viewed using a 2-dimensional subspace in a product space of two qubits. A special representation of such a two-dimensional subspace makes possible a complete characterization of this type of copying. This characterization in turn leads to simplified eavesdropping strategies in the BB84 and the B92 quantum cryptographic protocols.
Miao, Xiuxiu; Gerke, Kirill M.; Sizonenko, Timofey O.
2017-07-01
Pore-network models were found useful in describing important flow and transport mechanisms and in predicting flow properties of different porous media relevant to numerous fundamental and industrial applications. Pore-networks provide a very fast computational framework and permit simulations on large volumes of pores. This is possible due to significant pore space simplifications and linear/exponential relationships between effective properties and geometrical characteristics of the pore elements. To make such relationships work, pore-network elements are usually simplified to circular, triangular, square and other basic shapes. However, such assumptions result in inaccurate prediction of transport properties. In this paper, we propose that pore-networks can be constructed without pore shape simplifications. To test this hypothesis we extracted 3292 2D pore element cross-sections from 3D X-ray microtomography images of sandstone and carbonate rock samples. Based on the circularity, convexity and elongation of each pore element we trained neural networks to predict the dimensionless hydraulic conductance. The optimal neural network provides 90% of predictions lying within the 20% error bounds compared against direct numerical simulation results. Our novel approach opens a new way to parameterize pore-networks, and we outline future improvements to create a new class of pore-network models without pore shape simplifications.
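The three shape descriptors named above (circularity, convexity, elongation) can be computed directly from a polygonal pore cross-section. A minimal sketch, not the authors' code; the bounding-box elongation below is an illustrative proxy (a production version would use the minimum-area enclosing rectangle):

```python
import math

def shoelace_area(pts):
    # unsigned polygon area via the shoelace formula
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in CCW order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def descriptors(pts):
    a, p = shoelace_area(pts), perimeter(pts)
    hull = convex_hull(pts)
    circularity = 4 * math.pi * a / p ** 2       # 1.0 for a circle
    convexity = a / shoelace_area(hull)          # 1.0 for convex shapes
    # elongation from the axis-aligned bounding box (crude proxy)
    xs, ys = [q[0] for q in pts], [q[1] for q in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    elongation = min(w, h) / max(w, h)
    return circularity, convexity, elongation

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(descriptors(square))  # circularity = pi/4, convexity = 1.0, elongation = 1.0
```

Descriptors like these would then form the input features of the regression network that predicts conductance.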
Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation
2016-01-01
River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assess the current ecological state and prioritize river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments to river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions and in 76% recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific depending on data and resource
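The contrast between the commonly assumed simplifications (linear value functions, additive aggregation) and the experts' recommendations (non-linear value functions, multiplicative aggregation) can be sketched numerically. The exponential value-function shape, the weights, and the indicator levels below are hypothetical illustrations, not the study's elicited values:

```python
import math

def value_exp(x, c):
    # exponential value function on [0, 1]; c != 0 bends the curve,
    # c -> 0 recovers the linear value function
    return (1 - math.exp(-c * x)) / (1 - math.exp(-c))

def additive(values, weights):
    # weighted arithmetic mean: objectives compensate for each other
    return sum(w * v for w, v in zip(weights, values))

def multiplicative(values, weights):
    # weighted geometric mean: one poor objective drags the score down
    return math.prod(v ** w for w, v in zip(weights, values))

attrs = [0.9, 0.9, 0.1]        # normalized indicator levels (hypothetical)
weights = [1 / 3, 1 / 3, 1 / 3]
vals = [value_exp(a, 2.0) for a in attrs]
print(round(additive(vals, weights), 3))
print(round(multiplicative(vals, weights), 3))
```

With a weighted geometric mean, a single poorly scoring objective pulls the overall value down far more than under additive aggregation, which is why the choice of aggregation rule matters for rehabilitation priorities.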
The complexities of HIPAA and administration simplification.
Mozlin, R
2000-11-01
The Health Insurance Portability and Accountability Act (HIPAA) was signed into law in 1996. Although focused on information technology issues, HIPAA will ultimately impact day-to-day operations at multiple levels within any clinical setting. Optometrists must begin to familiarize themselves with HIPAA in order to prepare to practice in a technology-enriched environment. Title II of HIPAA, entitled "Administrative Simplification," is intended to reduce the costs and administrative burden of healthcare by standardizing the electronic transmission of administrative and financial transactions. The Department of Health and Human Services is expected to publish the final rules and regulations that will govern HIPAA's implementation this year. The rules and regulations will cover three key aspects of healthcare delivery: electronic data interchange (EDI), security, and privacy. EDI will standardize the format for healthcare transactions. Health plans must accept and respond to all transactions in the EDI format. Security refers to policies and procedures that protect the accuracy and integrity of information and limit access. Privacy focuses on how the information is used and on disclosure of identifiable health information. Security and privacy regulations apply to all information that is maintained and transmitted in a digital format and require administrative, physical, and technical safeguards. HIPAA will force the healthcare industry to adopt an e-commerce paradigm and provide opportunities to improve patient care processes. Optometrists should take advantage of the opportunity to develop more efficient and profitable practices.
Material Modelling - Composite Approach
DEFF Research Database (Denmark)
Nielsen, Lauge Fuglsang
1997-01-01
This report is part of a research project on "Control of Early Age Cracking", which, in turn, is part of the major research programme "High Performance Concrete - The Contractor's Technology (HETEK)", coordinated by the Danish Road Directorate, Copenhagen, Denmark, 1997. A composite-rheological model of concrete is presented by which consistent predictions of creep, relaxation, and internal stresses can be made from known concrete composition, age at loading, and climatic conditions. No other existing "creep prediction method" offers these possibilities in one approach. A central assumption of the model in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature, where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus ...
A Data-Driven Point Cloud Simplification Framework for City-Scale Image-Based Localization.
Cheng, Wentao; Lin, Weisi; Zhang, Xinfeng; Goesele, Michael; Sun, Ming-Ting
2017-01-01
City-scale 3D point clouds reconstructed via structure-from-motion from a large collection of Internet images are widely used in the image-based localization task to estimate a 6-DOF camera pose of a query image. Due to the prohibitive memory footprint of city-scale point clouds, image-based localization is difficult to implement on devices with limited memory resources. Point cloud simplification aims to select a subset of points that achieves localization performance comparable to that of the original point cloud. In this paper, we propose a data-driven point cloud simplification framework by casting it as a weighted K-Cover problem, which mainly includes two complementary parts. First, a utility-based parameter determination method is proposed to select a reasonable parameter K for K-Cover-based approaches by evaluating the potential of a point cloud for establishing sufficient 2D-3D feature correspondences. Second, we formulate the 3D point cloud simplification problem as a weighted K-Cover problem, and propose an adaptive exponential weight function based on the visibility probability of 3D points. The experimental results on three popular datasets demonstrate that the proposed point cloud simplification framework outperforms the state-of-the-art methods for the image-based localization application with a well-predicted parameter in the K-Cover problem.
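A greedy heuristic is the standard way to approximate weighted K-Cover problems of this kind. The sketch below is a generic illustration under assumed data structures, not the authors' implementation; their adaptive weight is based on visibility probability, whereas here an exponential discount on already-covered images stands in for it:

```python
import math

def simplify_point_cloud(visibility, K, budget, alpha=1.0):
    """Greedy weighted K-Cover sketch (hypothetical, not the paper's exact algorithm).

    visibility: dict point_id -> set of image ids that observe the point.
    Goal: each image covered by at least K selected points, within `budget` points."""
    images = {img for v in visibility.values() for img in v}
    covered = {img: 0 for img in images}
    selected = []
    remaining = dict(visibility)
    while remaining and len(selected) < budget:
        def gain(pid):
            # exponential weighting: images still far from K coverage count more
            return sum(math.exp(-alpha * covered[img]) for img in remaining[pid]
                       if covered[img] < K)
        best = max(remaining, key=gain)
        if gain(best) == 0:
            break  # every image already reached K coverage
        selected.append(best)
        for img in remaining.pop(best):
            covered[img] += 1
    return selected

vis = {1: {"a", "b"}, 2: {"a"}, 3: {"b", "c"}, 4: {"c"}}
print(simplify_point_cloud(vis, K=1, budget=2))  # points 1 and 3 cover all images
```

The greedy rule picks, at each step, the point whose visible images are least covered so far, which is the usual (1 - 1/e)-style approximation strategy for set-cover variants.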
Heyman, Bob; Swain, John; Gillman, Maureen
2004-01-01
This paper explores the role of complexity and simplification in the delivery of health care for adults with learning disabilities, drawing upon qualitative data obtained in a study carried out in NE England. It is argued that the requirement to manage complex health needs with limited resources causes service providers to simplify, standardise and routinise care. Simplified service models may work well enough for the majority of clients, but can impede recognition of the needs of those whose characteristics are not congruent with an adopted model. The data were analysed in relation to the core category, identified through thematic analysis, of secondary complexity arising from organisational simplification. Organisational simplification generates secondary complexity when operational routines designed to make health complexity manageable cannot accommodate the needs of non-standard service users. Associated themes, namely the social context of services, power and control, communication skills, expertise and service inclusiveness and evaluation are explored in relation to the core category. The concept of secondary complexity resulting from organisational simplification may partly explain seemingly irrational health service provider behaviour.
Application of Stochastic Approaches to Modelling Suspension Flow in Porous Media
DEFF Research Database (Denmark)
Shapiro, Alexander; Yuan, Hao
2012-01-01
The goal of this chapter is to overview several stochastic approaches to modelling suspension flows in porous media, including the population balance approach, the continuous time random walk (CTRW) approach, and its reduction to the elliptic equation approach. Most of these approaches emerged ... briefly discussed. The population balance models growing out of the Boltzmann-Smolukhowski formalism take into account the particle and the pore size distributions. A system of integral-differential kinetic equations for the particle transport is derived and averaged. The continuous-time random walk theory considers the distribution of the residence times of particles in pores. The transport equation derived in the framework of CTRW contains a convolution integral with a memory kernel accounting for the particle flight distribution. An important simplification of the CTRW formalism, its reduction ...
The cost of policy simplification in conservation incentive programs
DEFF Research Database (Denmark)
Armsworth, Paul R.; Acs, Szvetlana; Dallimer, Martin
2012-01-01
... of biodiversity. Common policy simplifications result in a 49-100% loss in biodiversity benefits, depending on the conservation target chosen. Failure to differentiate prices for conservation improvements in space is particularly problematic. Additional implementation costs that accompany more complicated policies ...
New technique for system simplification using Cuckoo search and ESA
Indian Academy of Sciences (India)
... invariant systems. Motivated by optimization and various system simplification techniques available in the literature, the proposed technique is formulated using Cuckoo search in combination with Lévy flight and Eigen spectrum analysis. The efficacy ...
Simplification of arboreal marsupial assemblages in response to increasing urbanization.
Directory of Open Access Journals (Sweden)
Bronwyn Isaac
Full Text Available Arboreal marsupials play an essential role in ecosystem function including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.
Simplification of Arboreal Marsupial Assemblages in Response to Increasing Urbanization
Isaac, Bronwyn; White, John; Ierodiaconou, Daniel; Cooke, Raylene
2014-01-01
Arboreal marsupials play an essential role in ecosystem function including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide. PMID:24608165
Pathways of DNA unlinking: A story of stepwise simplification.
Stolz, Robert; Yoshida, Masaaki; Brasher, Reuben; Flanner, Michelle; Ishihara, Kai; Sherratt, David J; Shimokawa, Koya; Vazquez, Mariel
2017-09-29
In Escherichia coli, DNA replication yields interlinked chromosomes. Controlling topological changes associated with replication and returning the newly replicated chromosomes to an unlinked monomeric state is essential to cell survival. In the absence of the topoisomerase topoIV, the site-specific recombination complex XerCD-dif-FtsK can remove replication links by local reconnection. We previously showed mathematically that there is a unique minimal pathway of unlinking replication links by reconnection while stepwise reducing the topological complexity. However, the possibility that reconnection preserves or increases topological complexity is biologically plausible. In this case, are there other unlinking pathways? Which is the most probable? We consider these questions in an analytical and numerical study of minimal unlinking pathways. We use a Markov Chain Monte Carlo algorithm with Multiple Markov Chain sampling to model local reconnection on 491 different substrate topologies, 166 knots and 325 links, and distinguish between pathways connecting a total of 881 different topologies. We conclude that the minimal pathway of unlinking replication links that was found under more stringent assumptions is the most probable. We also present exact results on unlinking a 6-crossing replication link. These results point to a general process of topology simplification by local reconnection, with applications going beyond DNA.
Emergency planning simplification: Why ALWR designs shall support this goal
International Nuclear Information System (INIS)
Tripputi, I.
2004-01-01
Emergency Plan simplification can be achieved only if it can be proved, in a context of balanced national health protection policies, that there is reduced or no technical need for some elements of it and that public protection is assured in all considered situations regardless of protective actions outside the plant. These objectives may be technically supported if one or more of the following conditions are complied with: 1. Accidents potentially releasing large amounts of fission products can be ruled out by characteristics of the designs. 2. Plant engineered features (and the containment system in particular) are able to drastically mitigate the radioactive releases under all conceivable scenarios. 3. A realistic approach to the consequence evaluation can reduce the expected consequences to effects below any concern. Unfortunately, no single approach is either technically feasible or justified in a perspective of defense in depth, and only a mix of them may provide the necessary conditions. It appears that most or all proposed ALWR designs address the technical issues whose solutions are the bases for eliminating the need for a number of protective actions (evacuation, relocation, sheltering, iodine tablets administration, etc.) even in the case of a severe accident. Some designs are mainly oriented to preventing the need for short term protective actions; they credit simplified Emergency Plans or the capabilities of existing civil protection organizations for public relocation in the long term, if needed. Others also take into account the overall releases, to exclude or minimize public relocation and land contamination. Design targets for population individual doses and for land contamination proposed in Italy are discussed in the paper. It is also shown that these limits, while challenging, appear to be within the reach of the next generation of proposed designs currently studied in Italy. (author)
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
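The simplification-as-a-sequence-of-edge-collapses idea can be illustrated on an ordinary weighted graph. The sketch below greedily collapses the cheapest edge until a target node count remains; it is a generic stand-in, not the paper's reachability graph of Pareto extrema or its comparison measure:

```python
import heapq

def simplify_graph(nodes, edges, target):
    """Greedy sequence of edge collapses (illustrative sketch only).

    edges: dict (u, v) -> cost of collapsing v's component into u's.
    Repeatedly collapses the cheapest edge until `target` nodes remain."""
    parent = {n: n for n in nodes}
    def find(n):
        # union-find with path halving to track merged components
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    heap = [(c, u, v) for (u, v), c in edges.items()]
    heapq.heapify(heap)
    alive = set(nodes)
    while len(alive) > target and heap:
        c, u, v = heapq.heappop(heap)
        ru, rv = find(u), find(v)
        if ru == rv or ru not in alive or rv not in alive:
            continue            # stale entry: endpoints already merged
        parent[rv] = ru         # collapse v's component into u's
        alive.discard(rv)
    return alive

nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 0.1, ("b", "c"): 0.5, ("c", "d"): 0.2}
print(simplify_graph(nodes, edges, target=2))
```

In the paper's setting the collapse cost would be the comparison measure reflecting how much each operation changes the data; here it is simply a given edge weight.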
A Gaussian graphical model approach to climate networks
Energy Technology Data Exchange (ETDEWEB)
Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
A Gaussian graphical model approach to climate networks
International Nuclear Information System (INIS)
Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus
2014-01-01
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
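The central technical step, replacing Pearson correlations by partial correlations computed from the precision (inverse covariance) matrix, can be sketched in a few lines. The three-variable chain below is synthetic illustration data, not climate data; the identity used is the standard one, rho_ij = -P_ij / sqrt(P_ii * P_jj):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.standard_normal(n)
y = 0.8 * x + rng.standard_normal(n)   # y depends directly on x
z = 0.8 * y + rng.standard_normal(n)   # z depends directly on y only
data = np.stack([x, y, z])             # rows are variables

corr = np.corrcoef(data)               # marginal (Pearson) correlations
prec = np.linalg.inv(np.cov(data))     # precision matrix
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)       # partial correlations off-diagonal
np.fill_diagonal(partial, 1.0)

print(round(float(corr[0, 2]), 2))     # sizeable: x and z are marginally correlated
print(round(float(partial[0, 2]), 2))  # near zero: no direct x-z edge in the GGM
```

Thresholding `corr` would wrongly add an x-z edge (an indirect dependency), whereas thresholding `partial` recovers only the two direct edges of the chain.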
Cutting red tape: national strategies for administrative simplification
National Research Council Canada - National Science Library
Cerri, Fabienne; Hepburn, Glen; Barazzoni, Fiorenza
2006-01-01
... when the topic was new, and had a strong focus on the tools used to simplify administrative regulations. Expectations are greater today, and ad hoc simplification initiatives have in many cases been replaced by comprehensive government programmes to reduce red tape. Some instruments, such as one-stop shops, which were new then, have become widely adop...
Interphonology Variability: Sociolinguistic Factors Affecting L2 Simplification Strategies.
Lin, Yuh-Huey
2003-01-01
Investigates variability in interlanguage consonant cluster simplification strategies within the four factors--style, gender, proficiency, and interlocutor. Examined how these factors determine Chinese English-as-a-Foreign-Language (EFL) speakers' production of English word-initial consonant clusters. (Author/VWL)
New technique for system simplification using Cuckoo search and ESA
Indian Academy of Sciences (India)
Afzal Sikander
2017-08-17
Motivated by optimization and various system simplification techniques available in the literature, the proposed technique is formulated using Cuckoo search in combination with Lévy flight and Eigen spectrum analysis. The efficacy and ... Therefore, for better understanding of the system, this lower order ...
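A minimal sketch of the Cuckoo search component, with Lévy-flight steps generated by Mantegna's algorithm, applied to a toy one-dimensional objective. This is a generic illustration under assumed parameter values, not the authors' formulation, and it omits the Eigen spectrum analysis part entirely:

```python
import math, random

random.seed(1)
BETA = 1.5  # Lévy exponent

def levy_step():
    # Mantegna's algorithm for Lévy-stable step lengths
    sigma = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2) /
             (math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / BETA)

def cuckoo_search(f, lo, hi, n_nests=15, iters=200, pa=0.25):
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, x in enumerate(nests):
            # Lévy-flight move scaled by distance to the current best nest
            trial = min(hi, max(lo, x + 0.01 * levy_step() * (x - best)))
            if f(trial) < f(x):
                nests[i] = trial
        # abandon a fraction pa of the worst nests (replaced at random)
        nests.sort(key=f)
        for i in range(int(n_nests * (1 - pa)), n_nests):
            nests[i] = random.uniform(lo, hi)
        best = min(nests + [best], key=f)
    return best

best = cuckoo_search(lambda x: (x - 3) ** 2, -10, 10)
print(round(best, 2))
```

In the system-simplification setting, the objective would instead measure the mismatch between the full-order and reduced-order model responses, with the reduced model's coefficients as the search variables.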
Impact of pipes networks simplification on water hammer phenomenon
Indian Academy of Sciences (India)
The potable water distribution system is one of the most significant hydraulic engineering accomplishments. ... quality equivalence, and demands' concentration simplifications of pipe networks on the transient pressure head ... systems to help control increase and decrease in pressure due to water hammer. The Young's ...
Aspects of a Theory of Simplification, Debugging, and Coaching.
Fischer, Gerhard; And Others
This paper analyses new methods of teaching skiing in terms of a computational paradigm for learning called increasingly complex microworlds (ICM). Examining the factors that underlie the dramatic enhancement of the learning of skiing led to the focus on the processes of simplification, debugging, and coaching. These three processes are studied in…
Impact of pipes networks simplification on water hammer phenomenon
Indian Academy of Sciences (India)
Abstract. Simplification of water supply networks is an indispensable design step to make the original network easier to analyse. The impact of network simplification on the water hammer phenomenon is investigated. This study uses a two-loop network with different diameters, thicknesses, and roughness coefficients.
Impact of pipes networks simplification on water hammer phenomenon
Indian Academy of Sciences (India)
The network is fed from a boundary head reservoir and loaded by either distributed or concentrated boundary water demands. According to both hydraulic and hydraulic plus water quality equivalence, three simplification levels are performed. The effect of demands' concentration on the transient flow is checked.
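The hydraulic-equivalence step that such simplifications rely on can be illustrated for pipes in series. Assuming Hazen-Williams head loss with a shared roughness coefficient (the lengths, diameters, and C value below are illustrative, not the study's network data), the head-loss coefficients add, and an equivalent diameter follows:

```python
def hw_resistance(L, D, C):
    # Hazen-Williams head-loss coefficient K in h_f = K * Q**1.852 (SI units)
    return 10.67 * L / (C ** 1.852 * D ** 4.87)

def equivalent_series_pipe(pipes, C):
    """Replace pipes in series by one hydraulically equivalent pipe.

    pipes: list of (length_m, diameter_m). The same flow passes through all
    pipes in series, so the head-loss coefficients simply add."""
    K_eq = sum(hw_resistance(L, D, C) for L, D in pipes)
    L_eq = sum(L for L, _ in pipes)
    # solve h_f = K_eq * Q**1.852 for the diameter of a single pipe of length L_eq
    D_eq = (10.67 * L_eq / (C ** 1.852 * K_eq)) ** (1 / 4.87)
    return L_eq, D_eq

L_eq, D_eq = equivalent_series_pipe([(300, 0.20), (200, 0.15)], C=130)
print(L_eq, round(D_eq, 3))  # equivalent diameter lies between the two originals
```

Steady-state equivalence of this kind preserves head losses under the design demand, but, as the study's transient results show, it does not automatically preserve water-hammer behaviour, which also depends on wall thickness and wave speed.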
Modeling of squirrel induction machine by modified magnetic equivalent circuit approach
International Nuclear Information System (INIS)
Milimonfared, J.; Meshgin Kelk, H.
2002-01-01
In this paper a modified magnetic equivalent circuit approach is presented for steady-state and transient analysis of the squirrel cage induction motor. The effects of all spatial harmonics, stator and rotor tooth reluctance, saturation effects, rotor skewing, air gap permeances, leakage permeances of slots, and the type of winding connection are taken into account. Since there is no restriction on stator winding, rotor bar, and air gap length symmetry, this approach is able to model the induction motor under both healthy and faulty conditions. In the proposed model, by considering the actual dimensions of the stator and rotor laminations, some simplifications have been made. Hence, the number of variables in the system of algebraic equations is reduced, and this leads to better convergence of the numerical solution.
Landscape simplification reduces classical biological control and crop yield.
Grab, Heather; Danforth, Bryan; Poveda, Katja; Loeb, Greg
2018-03-01
Agricultural intensification resulting in the simplification of agricultural landscapes is known to negatively impact the delivery of key ecosystem services such as the biological control of crop pests. Both conservation and classical biological control may be influenced by the landscape context in which they are deployed; yet studies examining the role of landscape structure in the establishment and success of introduced natural enemies and their interactions with native communities are lacking. In this study, we investigated the relationship between landscape simplification, classical and conservation biological control services and importantly, the outcome of these interactions for crop yield. We showed that agricultural simplification at the landscape scale is associated with an overall reduction in parasitism rates of crop pests. Additionally, only introduced parasitoids were identified, and no native parasitoids were found in crop habitat, irrespective of agricultural landscape simplification. Pest densities in the crop were lower in landscapes with greater proportions of semi-natural habitats. Furthermore, farms with less semi-natural cover in the landscape and consequently, higher pest numbers, had lower yields than farms in less agriculturally dominated landscapes. Our study demonstrates the importance of landscape scale agricultural simplification in mediating the success of biological control programs and highlights the potential risks to native natural enemies in classical biological control programs against native insects. Our results represent an important contribution to an understanding of the landscape-mediated impacts on crop yield that will be essential to implementing effective policies that simultaneously conserve biodiversity and ecosystem services. © 2018 by the Ecological Society of America.
THE ELITISM OF LEGAL LANGUAGE AND THE NEED OF SIMPLIFICATION
Directory of Open Access Journals (Sweden)
Antonio Escandiel de Souza
2016-12-01
Full Text Available This article presents the results of the research project entitled "Simplification of legal language: a study on the view of the academic community of the University of Cruz Alta". It is a qualitative study of the simplification of legal language as a means of democratizing/pluralizing access to justice, in the view of students and teachers of the Law Course. Society has great difficulty understanding legal terms, which hinders access to justice. At the same time, the legal field has not moved far from its traditional formalities, which points to a divide: on one side stands society, with its problems of understanding, and on the other the law, with its inherent and intrinsic procedures. Society, however, should not have its access to the judiciary hampered on account of formalities arising from the law and its flowery language. Preliminary results indicate that simplification of legal language is essential to a real democratization of access to law/justice.
Stand management optimization – the role of simplifications
Directory of Open Access Journals (Sweden)
Timo Pukkala
2014-02-01
Full Text Available Background Studies on optimal stand management often make simplifications or restrict the choice of treatments. Examples of simplifications are neglecting natural regeneration that appears on a plantation site, omitting advance regeneration in simulations, or restricting thinning treatments to low thinning (thinning from below). Methods This study analyzed the impacts of such simplifications on the optimization results for Fennoscandian boreal forests. Management of pine and spruce plantations was optimized by gradually reducing the number of simplifying assumptions. Results Forced low thinning, cleaning the plantation of the natural regeneration of mixed species, and ignoring advance regeneration all had a major impact on optimization results. High thinning (thinning from above) resulted in higher NPV and longer rotation length than thinning from below. It was profitable to leave a mixed stand in the tending treatment of a young plantation. When advance regeneration was taken into account, it was profitable to increase the number of thinnings and postpone final felling. In the optimal management, both pine and spruce plantations were gradually converted into an uneven-aged mixture of spruce and birch. Conclusions The results suggest that, with current management costs and timber price levels, it may be profitable to switch to continuous cover management on medium growing sites of Fennoscandian boreal forests.
HEDR modeling approach: Revision 1
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.
Modeling Approaches in Planetary Seismology
Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick
2014-01-01
Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.
Ecosystem simplification, biodiversity loss and plant virus emergence.
Roossinck, Marilyn J; García-Arenal, Fernando
2015-02-01
Plant viruses can emerge into crops from wild plant hosts, or conversely from domestic (crop) plants into wild hosts. Changes in ecosystems, including loss of biodiversity and increases in managed croplands, can impact the emergence of plant virus disease. Although data are limited, in general the loss of biodiversity is thought to contribute to disease emergence. More in-depth studies have been done for human viruses, but studies with plant viruses suggest similar patterns, and indicate that simplification of ecosystems through increased human management may increase the emergence of viral diseases in crops. Copyright © 2015 Elsevier B.V. All rights reserved.
The minimum attention plant inherent safety through LWR simplification
International Nuclear Information System (INIS)
Turk, R.S.; Matzie, R.A.
1987-01-01
The Minimum Attention Plant (MAP) is a unique small LWR that achieves greater inherent safety, improved operability, and reduced costs through design simplification. The MAP is a self-pressurized, indirect-cycle light water reactor with full natural circulation primary coolant flow and multiple once-through steam generators located within the reactor vessel. A fundamental tenet of the MAP design is its complete reliance on existing LWR technology. This reliance on conventional technology provides an extensive experience base which gives confidence in judging the safety and performance aspects of the design.
Hybrid approach for the assessment of PSA models by means of binary decision diagrams
Energy Technology Data Exchange (ETDEWEB)
Ibanez-Llano, Cristina, E-mail: cristina.ibanez@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain); Rauzy, Antoine, E-mail: Antoine.RAUZY@3ds.co [Dassault Systemes, 10 rue Marcel Dassault CS 40501, 78946 Velizy Villacoublay Cedex (France); Melendez, Enrique, E-mail: ema@csn.e [Consejo de Seguridad Nuclear (CSN), C/Justo Dorado 11, 28040 Madrid (Spain); Nieto, Francisco, E-mail: nieto@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain)
2010-10-15
Binary decision diagrams (BDDs) are a well-known alternative to the minimal cut sets (MCS) approach for assessing reliability Boolean models. They have been applied successfully to improve the assessment of fault tree models. However, their application to large models, and in particular to the event trees coming from the PSA studies of the nuclear industry, remains to date out of reach of an exact evaluation. For many real PSA models it may not be possible to compute the BDD within a reasonable amount of time and memory without truncating or simplifying the model. This paper presents a new approach to estimate the exact probabilistic quantification results (probability/frequency) by combining the calculation of the MCS under truncation limits with the BDD approach, in order to have better control over the reduction of the model and to properly account for the success branches. The added value of this methodology is that it ensures a real confidence interval on the exact value and therefore explicit knowledge of the error bound. Moreover, it can be used to measure the acceptability of results obtained with traditional techniques. The new method was applied to a real-life PSA study, and the results confirm the applicability of the methodology and open a new viewpoint for further developments.
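The bracketing idea in the abstract above, exact results bounded using minimal cut sets, can be illustrated on a toy fault tree. This is a minimal sketch with invented event names and probabilities: the exact top-event probability (obtained here by brute-force enumeration rather than a BDD) is bounded from above by the rare-event sum over the minimal cut sets.

```python
import math
from itertools import product

# Toy fault tree (hypothetical events and probabilities): the top event
# occurs when every basic event of some minimal cut set has failed.
p = {"A": 0.01, "B": 0.02, "C": 0.05}
cut_sets = [{"A", "B"}, {"B", "C"}]

def exact_top_probability():
    """Exact P(top) by enumerating all basic-event states
    (a stand-in for an exact BDD evaluation)."""
    total = 0.0
    for states in product([True, False], repeat=len(p)):
        failed = {e for e, s in zip(p, states) if s}
        weight = math.prod(p[e] if s else 1.0 - p[e] for e, s in zip(p, states))
        if any(cs <= failed for cs in cut_sets):
            total += weight
    return total

# Rare-event approximation: the sum of cut-set probabilities (an upper bound).
upper = sum(math.prod(p[e] for e in cs) for cs in cut_sets)
exact = exact_top_probability()
print(exact, upper, exact <= upper)
```

Here the bound is tight because the cut sets rarely overlap; for large PSA models with many overlapping cut sets the gap widens, which is where BDD-based bracketing earns its keep.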
Branding approach and valuation models
Directory of Open Access Journals (Sweden)
Mamula Tatjana
2006-01-01
Full Text Available Much of the skill of marketing and branding nowadays is concerned with building equity for products whose characteristics, pricing, distribution and availability are really quite close to each other. Brands allow the consumer to shop with confidence. The real power of successful brands is that they meet the expectations of those that buy them or, to put it another way, they represent a promise kept. As such they are a contract between a seller and a buyer: if the seller keeps to its side of the bargain, the buyer will be satisfied; if not, the buyer will in future look elsewhere. Understanding consumer perceptions and associations is an important first step to understanding brand preferences and choices. In this paper, we discuss different models for measuring brand value according to several well-known approaches requested by companies, and we rely upon several empirical examples.
Landscape simplification filters species traits and drives biotic homogenization
Gámez-Virués, Sagrario; Perović, David J.; Gossner, Martin M.; Börschig, Carmen; Blüthgen, Nico; de Jong, Heike; Simons, Nadja K.; Klein, Alexandra-Maria; Krauss, Jochen; Maier, Gwen; Scherber, Christoph; Steckel, Juliane; Rothenwöhrer, Christoph; Steffan-Dewenter, Ingolf; Weiner, Christiane N.; Weisser, Wolfgang; Werner, Michael; Tscharntke, Teja; Westphal, Catrin
2015-01-01
Biodiversity loss can affect the viability of ecosystems by decreasing the ability of communities to respond to environmental change and disturbances. Agricultural intensification is a major driver of biodiversity loss and has multiple components operating at different spatial scales: from in-field management intensity to landscape-scale simplification. Here we show that landscape-level effects dominate functional community composition and can even buffer the effects of in-field management intensification on functional homogenization, and that animal communities in real-world managed landscapes show a unified response (across orders and guilds) to both landscape-scale simplification and in-field intensification. Adults and larvae with specialized feeding habits, species with shorter activity periods and relatively small body sizes are selected against in simplified landscapes with intense in-field management. Our results demonstrate that the diversity of land cover types at the landscape scale is critical for maintaining communities, which are functionally diverse, even in landscapes where in-field management intensity is high. PMID:26485325
New technique for system simplification using Cuckoo search and ESA
Indian Academy of Sciences (India)
Afzal Sikander
2017-08-17
Ecosystem models are by definition simplifications of the real ...
African Journals Online (AJOL)
Directory of Open Access Journals (Sweden)
Türkay Gökgöz
2015-10-01
Full Text Available Multi-representation databases (MRDBs are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer’s radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage.
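As a point of reference for the comparison above, the classic Douglas-Peucker algorithm used as a baseline in the study can be sketched in a few lines. The polyline and tolerance below are invented for illustration; this is the textbook recursive form, not the authors' new deviation-angle algorithm.

```python
import math

def douglas_peucker(points, tol):
    """Recursively simplify a polyline: keep the vertex farthest from the
    anchor-floater chord if its perpendicular distance exceeds tol."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # Perpendicular distance of p from the chord (point distance if degenerate)
        if seg == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / seg

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= tol:
        return [points[0], points[-1]]  # the whole span flattens to the chord
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right  # drop the duplicated split vertex

stream = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(stream, 1.0))  # -> [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Note that Douglas-Peucker keeps only extreme vertices, which is exactly why bend-aware algorithms such as Bend Simplify, or the deviation-angle approach above, can preserve sinuosity better on hydrographic features.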
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.; Kunc, Vlastimil; Norris, Robert E.; Phelps, Jay; Tucker III, Charles L.
2006-11-30
This report describes the status of the current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents’ material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented that reflects the coupled flow/orientation nature. The decoupled flow/orientation approach is also discussed which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through the fiber orientation predictions for selected LFT samples.
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given.
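The equivalence noted above, that with a finite set of alternative models the model uncertainty behaves like a discrete parameter, can be sketched as a one-step Bayes update over the model index. The model names, priors and likelihoods below are invented for illustration.

```python
# Hypothetical setup: three candidate models with prior weights, and the
# likelihood each assigns to the observed data. Bayes' rule updates the
# discrete "model index" parameter exactly like any other parameter.
priors = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}
likelihoods = {"model_A": 0.10, "model_B": 0.40, "model_C": 0.05}  # P(data | model)

evidence = sum(priors[m] * likelihoods[m] for m in priors)
posteriors = {m: priors[m] * likelihoods[m] / evidence for m in priors}
print(posteriors)  # model_B dominates despite its smaller prior
```

Any quantity of interest can then be averaged over the models with these posterior weights, which is the practical payoff of treating model choice as parameter uncertainty.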
Subthalamic stimulation: toward a simplification of the electrophysiological procedure.
Fetter, Damien; Derrey, Stephane; Lefaucheur, Romain; Borden, Alaina; Wallon, David; Chastan, Nathalie; Maltete, David
2016-06-01
The aim of the present study was to assess the consequences of a simplification of the electrophysiological procedure on the post-operative clinical outcome after subthalamic nucleus implantation in Parkinson disease. Microelectrode recordings were performed on 5 parallel trajectories in group 1 and fewer than 5 trajectories in group 2. Clinical evaluations were performed 1 month before and 6 months after surgery. After surgery, the UPDRS III score in the off-drug/on-stimulation and on-drug/on-stimulation conditions significantly improved by 66.9% and 82%, respectively, in group 1, and by 65.8% and 82.3% in group 2 (P<0.05). Meanwhile, the total number of words for fluency tasks significantly decreased in both groups (P<0.05). Motor disability improvement and medication reduction were similar in both groups. Our results suggest that the electrophysiological procedure should be simplified as the team's experience increases.
Geological heterogeneity: Goal-oriented simplification of structure and characterization needs
Savoy, Heather; Kalbacher, Thomas; Dietrich, Peter; Rubin, Yoram
2017-11-01
Geological heterogeneity, i.e. the spatial variability of discrete hydrogeological units, is investigated in an aquifer analog of glacio-fluvial sediments to determine how such a geological structure can be simplified for characterization needs. The aquifer analog consists of ten hydrofacies, whereas the scarcity of measurements in typical field studies precludes such detailed spatial models of hydraulic properties. Of particular interest is the role of connectivity of the hydrofacies structure, along with its effect on the connectivity of mass transport, in site characterization for predicting early arrival times. Transport through three realizations of the aquifer analog is modeled with numerical particle tracking to ascertain the fast flow channel through which early-arriving particles travel. Three simplification schemes of two-facies models are considered to represent the aquifer analogs, and the velocity within the fast flow channel is used to estimate the apparent hydraulic conductivity of the new facies. The facies models in which the discontinuous patches of high hydraulic conductivity are separated from the rest of the domain yield the closest match in early arrival times to the aquifer analog. Assuming a continuous high-conductivity channel connecting these patches yields early arrival times that are underestimated, but within the range of variability between the realizations. This implies that all three simplification schemes could be advised, though they pose different implications for field measurement campaigns. Overall, the results suggest that the outcome of transport connectivity, i.e. early arrival times, within realistic geological heterogeneity can be conserved even when the underlying structural connectivity is modified.
Learning Action Models: Qualitative Approach
Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite
The limits of simplification in translated isiZulu health texts | Ndlovu ...
African Journals Online (AJOL)
Simplification, defined as the practice of simplifying the language used in translation, is regarded as one of the universal features of translation. This article investigates the limitations of simplification encountered in efforts to make translated isiZulu health texts more accessible to the target readership. The focus is on public ...
modeling, observation and control, a multi-model approach
Elkhalil, Mansoura
2011-01-01
This thesis is devoted to the control of systems whose dynamics can be suitably described by a multimodel approach, starting from an investigation of performance enhancement for model reference adaptive control. Four multimodel control approaches have been proposed. The first approach is based on an output reference model control design. A successful experimental validation involving a chemical reactor has been carried out. The second approach is based on a suitable partial state model reference ...
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
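A minimal illustration of why a dynamic EROI matters for the biophysical approach described above (the numbers are invented): net energy delivered to society is gross output scaled by (1 - 1/EROI), so the deliverable fraction collapses as EROI declines toward 1 even when gross output is flat.

```python
# Hypothetical illustration: net energy = gross * (1 - 1/EROI).
# As resource quality declines and EROI falls, society's net share shrinks.
def net_energy(gross, eroi):
    """Energy left over after the energy cost of obtaining it."""
    return gross * (1.0 - 1.0 / eroi)

for eroi in (50, 20, 10, 5, 2):
    print(eroi, net_energy(100.0, eroi))
```

At EROI 50 nearly all 100 units are net; at EROI 2 half the gross output is consumed just to keep supply flowing, which is the dynamic the extended model's EROI function is meant to capture.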
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
We consider two learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning methods suited for finite identifiability of particular types of deterministic actions.
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we ...
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S.
2017-05-01
We exploit the 1 + 1 + 2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven from the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded to the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express Sachs’ optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
A variable projection approach for efficient estimation of RBF-ARX model.
Gan, Min; Li, Han-Xiong; Peng, Hui
2015-03-01
The radial basis function network-based autoregressive with exogenous inputs (RBF-ARX) models have many more linear parameters than nonlinear parameters. Taking advantage of this special structure, a variable projection algorithm is proposed to estimate the model parameters more efficiently by eliminating the linear parameters through orthogonal projection. The proposed method not only substantially reduces the dimension of the parameter space of the RBF-ARX model but also results in a better-conditioned problem. In this paper, both the full Jacobian matrix of Golub and Pereyra and Kaufman's simplification are used to test the performance of the algorithm. An example of chaotic time series modeling is presented for numerical comparison. It clearly demonstrates that the proposed approach is computationally more efficient than the previous structured nonlinear parameter optimization method and the conventional Levenberg-Marquardt algorithm without the parameters separated. Finally, the proposed method is also applied to a simulated nonlinear single-input single-output process, a time-varying nonlinear process and a real multi-input multi-output nonlinear industrial process to illustrate its usefulness.
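The variable projection idea, eliminating the linear parameters by solving a least-squares subproblem for each candidate value of the nonlinear parameters, can be sketched on a toy separable model. The exponential model, grid search and noise-free data below are assumptions for illustration, not the RBF-ARX estimator or the Golub-Pereyra Jacobian machinery itself.

```python
import numpy as np

# Hypothetical separable model y ~ a0 + a1 * exp(-lam * t): for each candidate
# nonlinear parameter lam, the linear coefficients (a0, a1) are eliminated by
# linear least squares, so the outer search runs over lam alone.
t = np.linspace(0, 4, 50)
y = 2.0 + 3.0 * np.exp(-1.5 * t)  # synthetic, noise-free data

def projected_residual(lam):
    """Return (SSE, linear coefficients) after projecting out a0, a1."""
    phi = np.column_stack([np.ones_like(t), np.exp(-lam * t)])
    coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return np.sum((y - phi @ coef) ** 2), coef

# Crude grid search over the single nonlinear parameter (a stand-in for
# Levenberg-Marquardt on the reduced problem).
lams = np.linspace(0.1, 3.0, 291)
best = min(lams, key=lambda lam: projected_residual(lam)[0])
sse, coef = projected_residual(best)
print(best, coef)
```

The reduced problem has one unknown instead of three, which is the dimension reduction and conditioning benefit the abstract describes, here in miniature.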
The femur as a musculo-skeletal construct: a free boundary condition modelling approach.
Phillips, A T M
2009-07-01
Previous finite element studies of the femur have made simplifications to varying extents with regard to the boundary conditions used during analysis. Fixed boundary conditions are generally applied to the distal femur when examining the proximal behaviour at the hip joint, while the same can be said for the proximal femur when examining the distal behaviour at the knee joint. While fixed boundary condition analyses have been validated against in vitro experiments it remains a matter of debate as to whether the numerical and experimental models are indicative of the in vivo situation. This study presents a finite element model in which the femur is treated as a complete musculo-skeletal construct, spanning between the hip and knee joints. Linear and non-linear implementations of a free boundary condition modelling approach are applied to the bone through the explicit inclusion of muscles and ligaments spanning both the hip joint and the knee joint. A non-linear force regulated, muscle strain based activation strategy was found to result in lower observed principal strains in the cortex of the femur, compared to a linear activation strategy. The non-linear implementation of the model in particular, was found to produce hip and knee joint reaction forces consistent with in vivo data from instrumented implants.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Multiple model approaches have appeal in building systems which operate robustly over a wide range of operating conditions by decomposing them into a number of simpler linear modelling or control problems, even for nonlinear modelling or control problems. This appeal has been a factor in the development of increasingly popular `local' approaches ... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning. The underlying question is `How should we partition the system - what is `local'?'. This book presents alternative ways of bringing submodels together, which lead to varying levels of performance and insight. Some are further developed for autonomous learning of parameters from data, while others have focused ...
Modeling software behavior a craftsman's approach
Jorgensen, Paul C
2009-01-01
A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models, focusing on what the software is rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. It is based on the popular and continually evolving course on requirements specification models taught by the author.
System Behavior Models: A Survey of Approaches
2016-06-01
A comparison of approaches ... identical state space results. The combined state space graph of the Petri model allowed a quick assessment of all potential states but was more cumbersome to build than the MP model.
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
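The discrete dynamic models mentioned above can be sketched as a synchronous Boolean network. The three-gene circuit below is invented for illustration; it is not one of the yeast networks or the Finite State Linear Model from the article.

```python
# Hypothetical 3-gene Boolean circuit: each gene's next state is a logical
# function of its regulators' current states, updated synchronously.
rules = {
    "geneA": lambda s: not s["geneC"],             # C represses A
    "geneB": lambda s: s["geneA"],                 # A activates B
    "geneC": lambda s: s["geneA"] and s["geneB"],  # A and B jointly activate C
}

def step(state):
    """One synchronous update of all genes from the current state."""
    return {g: bool(rule(state)) for g, rule in rules.items()}

state = {"geneA": True, "geneB": False, "geneC": False}
trajectory = [state]
for _ in range(4):
    state = step(state)
    trajectory.append(state)
print(trajectory[-1])
```

Iterating the update reveals the network's attractors, which is the kind of qualitative dynamic behavior discrete models are used to explore before committing to continuous kinetic parameters.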
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
A Non-linear Eulerian Approach for Assessment of Health-cost Externalities of Air Pollution
DEFF Research Database (Denmark)
Andersen, Mikael Skou; Frohn, Lise Marie; Nielsen, Jytte Seested
Integrated assessment models, which are used in Europe to account for the external costs of air pollution in support of policy-making and cost-benefit analysis, have resorted to simplifications of the non-linear dynamics of atmospheric science in order to cope with complexity. In this paper we explore the possible significance of such simplifications by reviewing the improvements that result from applying a state-of-the-art atmospheric model for regional transport and non-linear chemical transformations of air pollutants to the impact-pathway approach of the ExternE method. The more rigorous ...
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
USACE, Pittsburgh District (LRP) requested that the US Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory ... approaching the lock and dam. The second set of experiments considered a design, referred to as Plan B lock approach, which contained the weir field in ... conditions and model parameters. A discharge of 1.35 cfs was set as the inflow boundary condition at the upstream end of the model. The outflow boundary was ...
Hybrid approaches to physiologic modeling and prediction
Olengü, Nicholas O.; Reifman, Jaques
2005-05-01
This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
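The hybrid scheme described above, a first-principles prediction corrected by a data-driven model of its own residuals, can be sketched in miniature. The synthetic signals and AR(1) correction below are assumptions for illustration, not the study's physiological model or its neural-network variant.

```python
import numpy as np

# Hypothetical setup: a biased "first-principles" signal tracks the truth
# imperfectly; an AR(1) model fitted to its residuals supplies the correction.
t = np.arange(200, dtype=float)
truth = 37.0 + 0.5 * np.sin(0.1 * t)          # stand-in for measured core temperature
physics = 37.0 + 0.4 * np.sin(0.1 * t - 0.2)  # biased first-principles output
resid = truth - physics

# Fit resid[k] ~ a * resid[k-1] by least squares (the "black-box" part).
a = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

# Hybrid one-step prediction: physics output plus predicted residual.
hybrid = physics[1:] + a * resid[:-1]

mse_physics = np.mean((truth[1:] - physics[1:]) ** 2)
mse_hybrid = np.mean((truth[1:] - hybrid) ** 2)
print(mse_physics, mse_hybrid)
```

Because the residual is strongly autocorrelated, even this one-lag correction removes most of the systematic error, which is the mechanism behind the reported accuracy gains.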
Don’t Be Addicted: The Oft-Overlooked Dangers of Simplification
Directory of Open Access Journals (Sweden)
Michael Lissack
Full Text Available We are seldom taught that simplification has a high risk of failure. In truth, it only works up to a point, after which all that lies ahead is failure. To examine the limits of simplicity is to look at what happens when our efforts to make things fit into a sound bite, label, or keyword go awry. When simplification works, it can indeed be very effective. But simplification does not always work—so more is not necessarily better. And when simplification fails, it fails miserably. This article exposes the limitations of simplification as a design choice, explores the cognitive origins of why we often get led astray in making such a design choice, and explores how we might develop a set of practical heuristics to counter the seductiveness of simplicity itself. The goal is appropriateness and balance—what cybernetics calls requisite variety, and what many design practitioners call placing context in context. The article concludes with a heuristic to guide the practitioner on what to do when their efforts at simplification are failing.
Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields
Skraba, Primoz
2015-08-01
© 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.
Risk Modelling for Passages in Approach Channel
Directory of Open Access Journals (Sweden)
Leszek Smolarek
2013-01-01
Full Text Available Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
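A minimal sketch of the Markov-model idea can make the "period at which risk should be verified" concrete. The states and transition probabilities below are illustrative assumptions, not values from the paper:

```python
# Discrete-time Markov risk model for a ship transiting an approach channel.
# State set and transition matrix are hypothetical, for illustration only.

STATES = ["safe", "endangered", "incident"]

# P[i][j] = probability of moving from state i to state j in one time step.
P = [
    [0.95, 0.04, 0.01],   # safe
    [0.30, 0.60, 0.10],   # endangered
    [0.00, 0.00, 1.00],   # incident (absorbing)
]

def step(dist, P):
    """One transition: new_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

def first_step_exceeding(risk_limit, P, max_steps=1000):
    """First time step at which P(incident) exceeds risk_limit, starting
    from 'safe' -- i.e. when the risk level should be re-verified."""
    dist = [1.0, 0.0, 0.0]
    for t in range(1, max_steps + 1):
        dist = step(dist, P)
        if dist[2] > risk_limit:
            return t, dist[2]
    return None, 0.0

t, p_incident = first_step_exceeding(0.05, P)
print(f"risk exceeds 5% after {t} steps (P = {p_incident:.3f})")
```

The same propagation generalizes to larger state spaces (e.g. one state per passage segment) without changing the code structure.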
A pharmacokinetic model of styrene inhalation with the fugacity approach.
Paterson, S; Mackay, D
1986-03-15
The physiologically based pharmacokinetic model of J. C. Ramsey and M. E. Andersen (1984, Toxicol. Appl. Pharmacol. 73, 159-175) of styrene inhalation in rats, with extrapolation to humans, was reformulated with the chemical equilibrium criterion of fugacity instead of concentration to describe compartment partitioning. Fugacity models have been used successfully to describe environmental partitioning processes which are similar in principle to pharmacokinetic processes. The fugacity and concentration models are mathematically equivalent and produce identical results. The use of fugacity provides direct insights into the relative chemical equilibrium partitioning status of compartments, thus facilitating interpretation of experimental and model data. It can help to elucidate dominant processes of transfer, reaction and accumulation, and the direction of diffusion. Certain model simplifications become apparent in which compartments which remain close to equilibrium may be grouped. Maximum steady-state tissue concentrations for a known exposure may be calculated readily. It is suggested that pharmacokinetic fugacity models can complement conventional concentration models and may facilitate linkage to fugacity models describing environmental sources, pathways, and exposure routes.
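The equivalence between the fugacity and concentration formulations can be sketched in a few lines. The capacity values below are hypothetical, not the Ramsey-Andersen styrene parameters:

```python
# Fugacity/concentration equivalence: concentration C (mol/m^3) relates to
# fugacity f (Pa) through a fugacity capacity Z (mol/(m^3.Pa)): C = Z * f.
# Z values here are illustrative assumptions.

Z = {"blood": 0.02, "fat": 1.2, "liver": 0.06}

def concentrations_at_equilibrium(f, Z):
    """At chemical equilibrium every compartment shares one fugacity f."""
    return {tissue: z * f for tissue, z in Z.items()}

f_eq = 0.5  # Pa, assumed common equilibrium fugacity
C = concentrations_at_equilibrium(f_eq, Z)

# A tissue-tissue partition coefficient is simply a ratio of Z values,
# which is the "direct insight" the fugacity formulation provides:
K_fat_blood = Z["fat"] / Z["blood"]
assert abs(C["fat"] / C["blood"] - K_fat_blood) < 1e-12
print(C, K_fat_blood)
```

Compartments whose fugacities stay nearly equal can be grouped, which is exactly the model simplification the abstract describes.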
Equivalent Circuit Modeling of a Rotary Piezoelectric Motor
DEFF Research Database (Denmark)
El, Ghouti N.; Helbo, Jan
2000-01-01
In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about...
A heuristic approach for short-term operations planning in a catering company
DEFF Research Database (Denmark)
Farahani, Poorya; Grunow, Martin; Günther, H.O.
2009-01-01
planning in a novel iterative scheme. The production scheduling problem is solved through an MILP modeling approach which is based on a block planning formulation complemented by a heuristic simplification procedure. Our investigation was motivated by a catering company located in Denmark. The production...
NLP model and stochastic multi-start optimization approach for heat exchanger networks
International Nuclear Information System (INIS)
Núñez-Serna, Rosa I.; Zamora, Juan M.
2016-01-01
Highlights:
• An NLP model for the optimal design of heat exchanger networks is proposed.
• The NLP model is developed from a stage-wise grid diagram representation.
• A two-phase stochastic multi-start optimization methodology is utilized.
• Improved network designs are obtained with different heat load distributions.
• Structural changes and reductions in the number of heat exchangers are produced.
Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
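The core of a stochastic multi-start scheme is easy to sketch: sample random starting points, run a local search from each, and keep the best result. The toy objective and plain gradient descent below stand in for the paper's heat exchanger cost model and two-phase algorithm:

```python
import math
import random

# Stochastic multi-start local search on a toy nonconvex objective.
# The objective and the local-search method are illustrative stand-ins,
# not the paper's NLP network cost model.

def objective(x):
    return 0.1 * x * x + math.sin(3.0 * x)   # many local minima

def local_descent(x, lr=0.01, steps=2000, h=1e-6):
    """Phase 2: refine one start with numerical-gradient descent."""
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= lr * grad
    return x

def multi_start(n_starts=30, lo=-10.0, hi=10.0, seed=1):
    """Phase 1: scatter random starts; keep the best local optimum."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = local_descent(rng.uniform(lo, hi))
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

x_star, f_star = multi_start()
print(f"best local minimum found: x = {x_star:.3f}, f = {f_star:.3f}")
```

With enough starts the probability of missing the global basin shrinks geometrically, which is why multi-start is a common hedge against the suboptimal design variables the abstract mentions.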
Towards new approaches in phenological modelling
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. There is now a growing body of opinion calling for new methods in phenological modelling and for more in-depth studies of dormancy release in woody plants. This demand is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). What limits a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution to this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop such models, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly samples between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
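The temperature-sum concept the abstract traces back to Reaumur can be sketched directly. The base temperature, forcing requirement, and daily temperatures below are illustrative assumptions:

```python
# Growing degree-day (GDD) accumulation in the spirit of the classical
# temperature-sum approach. T_BASE and the requirement are hypothetical,
# species-specific values.

T_BASE = 5.0  # deg C, assumed threshold below which no forcing accrues

def daily_gdd(t_min, t_max, t_base=T_BASE):
    """Degree-days contributed by one day; negative values clip to zero."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

def day_of_forcing_fulfilment(daily_temps, requirement, t_base=T_BASE):
    """1-based day on which the accumulated temperature sum first reaches
    the forcing requirement (e.g. predicted leafing), or None."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += daily_gdd(t_min, t_max, t_base)
        if total >= requirement:
            return day
    return None

temps = [(1, 7), (2, 10), (4, 12), (6, 14), (7, 17), (8, 18)]
day = day_of_forcing_fulfilment(temps, requirement=15.0)
print(day)
```

The parameters that semi-mechanistic models optimise against observations are exactly `t_base` and `requirement` here; the mechanistic work the abstract calls for would replace them with metabolically grounded quantities.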
SLS Navigation Model-Based Design Approach
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
A Conceptual Modeling Approach for OLAP Personalization
Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan
Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would also be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, helping to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
Directory of Open Access Journals (Sweden)
Klimenta Dardan O.
2017-01-01
Full Text Available The purpose of this paper is to propose a novel approach to analytical modelling of steady-state heat transfer from the exterior of totally enclosed fan-cooled induction motors. The proposed approach is based on geometry simplification methods, an energy balance equation, modified correlations for forced convection, the Stefan-Boltzmann law, air-flow velocity profiles, and turbulence factor models. To apply the modified correlations for forced convection, the motor exterior is represented by surfaces of elementary 3-D shapes, and air-flow velocity profiles and turbulence factor models are introduced. The existing correlations for forced convection from a short horizontal cylinder and for heat transfer from straight fins (as well as inter-fin surfaces) in axial air-flows are modified by introducing the Prandtl number raised to an appropriate power. The correlations for forced convection from straight fins and inter-fin surfaces are derived from the existing ones for combined heat transfer (due to forced convection and radiation) by using the forced-convection correlations for a single flat plate. Employing the proposed analytical approach, satisfactory agreement is obtained with experimental data from other studies.
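A correlation of the general form the paper modifies, Nu = C·Re^m·Pr^n, can be sketched briefly. The constants below are the classical laminar flat-plate values, not the paper's motor-specific coefficients, and the fluid properties are illustrative:

```python
# Forced-convection correlation sketch: Nu = C * Re^m * Pr^n, with the
# Prandtl number raised to an assumed power. Coefficients are generic
# laminar flat-plate values, not those derived in the paper.

def nusselt_flat_plate(re, pr, c=0.664, m=0.5, n=1.0 / 3.0):
    """Average Nusselt number for laminar flow over a flat plate."""
    return c * re**m * pr**n

def heat_transfer_coefficient(re, pr, k_fluid, length):
    """h = Nu * k / L, in W/(m^2.K)."""
    return nusselt_flat_plate(re, pr) * k_fluid / length

# Illustrative air properties over an assumed 5 cm fin length:
h = heat_transfer_coefficient(re=2.0e4, pr=0.71, k_fluid=0.026, length=0.05)
print(f"h = {h:.1f} W/(m^2.K)")
```

Representing the motor exterior as elementary shapes means each surface gets its own (Re, L) pair fed through a correlation of this form, and the contributions are combined in the energy balance.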
Neural network approaches for noisy language modeling.
Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid
2013-11-01
Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications: the typing stream. The paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. A focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the time gap prediction model and the PNN model yield correction rates predominantly between 65% and 90% on the current testing samples and 70% of test scores above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as information analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
Directory of Open Access Journals (Sweden)
Reza Kafipour
2016-12-01
Full Text Available Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation are strategies that help transfer the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in the Persian translation of Khaled Hosseini's novel “A Thousand Splendid Suns”. The study also attempted to find the frequencies of the simplification and explicitation techniques used by the translators in translating the novel. To do so, 359 sentences out of the 6000 sentences in the original text were selected by a systematic random sampling procedure. Then the percentage and total sums of each of the strategies were calculated. The results showed that both translators used simplification and explicitation techniques significantly in their translations, whereas Saadvandian, the first translator, applied significantly more simplification techniques than Ghabrai, the second translator. However, no significant difference was found between the translators in the application of explicitation techniques. The study implies that these two translation strategies were fully familiar to the translators, as both used them significantly to make the translation more understandable to the readers.
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
· the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
· the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
· the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
· the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
· a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
· an overview of the
A multiscale modeling approach for biomolecular systems
Energy Technology Data Exchange (ETDEWEB)
Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)
2015-04-15
This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
Quasirelativistic quark model in quasipotential approach
Matveev, V A; Savrin, V I; Sissakian, A N
2002-01-01
The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons
A new approach for developing adjoint models
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
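The "model as a sequence of linear solves" abstraction can be illustrated with a toy tape. Each forward step solves a_k·x_k = c_k·x_{k-1}; the tape records the operators, and one reverse sweep solves the adjoint equations to recover the gradient. The scalar two-step model below is an illustrative stand-in for a geophysical code:

```python
# Tape-of-linear-solves sketch of discrete adjoint assembly.
# Scalars stand in for matrices; in the general case a becomes A and the
# adjoint solve uses A^T. The model itself is a toy, not a real code.

def forward(u):
    tape = []
    # step 1: solve 2 * x1 = 3 * u  ->  x1 = 1.5 * u
    a1, c1 = 2.0, 3.0
    x1 = c1 * u / a1
    tape.append((a1, c1))
    # step 2: solve 4 * x2 = 5 * x1  ->  x2 = 1.25 * x1
    a2, c2 = 4.0, 5.0
    x2 = c2 * x1 / a2
    tape.append((a2, c2))
    return x2, tape

def adjoint(tape, dJ_dx_final=1.0):
    """Reverse sweep: for each recorded solve a*x = c*x_prev, the adjoint
    equation is a * lam_prev = c * lam (transposes in the matrix case)."""
    lam = dJ_dx_final
    for a, c in reversed(tape):
        lam = c * lam / a
    return lam  # dJ/du

x2, tape = forward(u=2.0)
grad = adjoint(tape)
print(x2, grad)
```

The point of the library approach is that only the per-step operators (here the pairs on the tape) need differentiating, not the whole codebase.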
An equilibrium approach to modelling social interaction
Gallo, Ignacio
2009-07-01
The aim of this work is to put forward a statistical mechanics theory of social interaction, generalizing econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalization of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method, and factorization of correlation functions is proved almost everywhere. The exact solution of the model is provided in the thermodynamic limit by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result regarding the number of possible equilibrium states of a two-population system. The work stresses the importance of linking regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters from micro-level data. These are applied to three case studies based on census-type data: though these studies are found to be ultimately inconclusive on an empirical level, considerations are drawn that encourage further refinement of the chosen modelling approach.
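The single-population Curie-Weiss mechanism behind the "sudden shifts" can be sketched with its mean-field self-consistency equation, m = tanh(β(Jm + h)). The parameter values are illustrative; the paper treats a multi-population generalization:

```python
import math

# Mean-field (Curie-Weiss) self-consistency by fixed-point iteration.
# beta is inverse temperature, J the coupling, h an external field;
# all values here are illustrative assumptions.

def equilibrium_magnetisation(beta, J=1.0, h=0.0, m0=0.5, iters=200):
    """Iterate m <- tanh(beta * (J*m + h)) to a fixed point."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (J * m + h))
    return m

# Below the critical temperature (beta*J > 1) a nonzero magnetisation
# appears -- the analogue of an abrupt aggregate behavioural shift;
# above it, only m = 0 survives.
m_cold = equilibrium_magnetisation(beta=2.0)
m_hot = equilibrium_magnetisation(beta=0.5)
print(m_cold, m_hot)
```

In the social-interaction reading, m is the aggregate choice imbalance and β measures how strongly individuals imitate one another.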
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH
Directory of Open Access Journals (Sweden)
Andrei OGREZEANU
2015-06-01
Full Text Available The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), and the Theory of Planned Behavior (TPB). The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding conceptual clutter, an approach that goes back to basics in the development of independent variable types is proposed, emphasizing: (1) the logic of classification, and (2) the psychological mechanisms behind variable types. Once developed, these types are then populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then,several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting Includes unique material on fluid flow near structured surfaces and phase change phenomena Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws, and (3) modeling of boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.
Methodology in Bi- and Multilingual Studies: From Simplification to Complexity
Aronin, Larissa; Jessner, Ulrike
2014-01-01
Research methodology is determined by theoretical approaches. This article discusses methods of multilingualism research in connection with theoretical developments in linguistics, psycholinguistics, sociolinguistics, and education. Taking a brief glance at the past, the article starts with a discussion of an issue underlying the choice of…
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
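The mechanistic Poisson TCP model mentioned among the approaches has a compact closed form: TCP = exp(-N₀·exp(-αD)), the probability that no clonogenic cell survives dose D. The clonogen number and radiosensitivity below are generic illustrative values, not fitted parameters from the study:

```python
import math

# Mechanistic Poisson TCP sketch: TCP = exp(-N0 * exp(-alpha * D)).
# N0 (initial clonogen number) and alpha (radiosensitivity, 1/Gy) are
# illustrative assumptions.

def poisson_tcp(dose, n0=1e7, alpha=0.35):
    """Probability that zero clonogens survive a uniform dose (Gy)."""
    surviving = n0 * math.exp(-alpha * dose)
    return math.exp(-surviving)

for d in (40, 50, 60, 70):
    print(f"D = {d} Gy -> TCP = {poisson_tcp(d):.3f}")
```

The characteristic sigmoid dose-response emerges directly: TCP is near zero at low dose, then rises steeply over a narrow dose window.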
Crime Modeling using Spatial Regression Approach
Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.
2018-01-01
Criminal activity in Indonesia increases in both variety and quantity every year. Cases such as murder, rape, assault, vandalism, theft, fraud, and fencing make people feel unsafe. The societal risk of exposure to crime is measured by the number of cases reported to the police; the more reports made to the police, the higher the crime in the region. This research models criminality in South Sulawesi, Indonesia, with the society's exposure to crime risk as the dependent variable. Modelling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables are population density, the number of poor inhabitants, GDP per capita, unemployment, and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error, in South Sulawesi.
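The SAR model's defining feature, the spatial lag y = ρWy + Xβ + ε, has reduced-form mean (I - ρW)⁻¹Xβ, which can be sketched with a Neumann series (valid for |ρ| < 1 with a row-standardised W). The 3-region weight matrix and values below are illustrative, not the South Sulawesi data:

```python
# SAR reduced-form sketch: E[y] = (I - rho*W)^{-1} X beta, approximated by
# sum_k (rho*W)^k X beta. W, rho, and X beta are illustrative assumptions.

W = [[0.0, 0.5, 0.5],     # row-standardised: region 1 neighbours 2 and 3
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def sar_mean(xb, rho, W, terms=200):
    """Expected y under the SAR model via the Neumann expansion."""
    y = list(xb)
    term = list(xb)
    for _ in range(terms):
        term = [rho * t for t in matvec(W, term)]
        y = [a + b for a, b in zip(y, term)]
    return y

xb = [1.0, 2.0, 3.0]          # X @ beta for three regions
y = sar_mean(xb, rho=0.4, W=W)
print(y)
```

Each region's expected value is pulled toward its neighbours', which is the spillover the SAR specification tests for; a ρ not significantly different from zero (as found here) means no such dependence.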
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration of its application on a social media maturity data-set. Specifically, we employ Necessary Condition Analysis (NCA) to identify maturity stage boundaries as necessary conditions and Qualitative Comparative Analysis (QCA) to arrive at multiple configurations that can be equally effective in progressing to higher maturity stages.
Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.
Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J
2016-08-01
To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.
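The relative-efficiency figures in this abstract compare test information between score versions. Under a standard dichotomous 2PL IRT model (a simplifying assumption on our part; the study itself uses longitudinal IRT), item information is a²P(1−P) and relative efficiency is the ratio of summed test information:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at latent severity theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def relative_efficiency(theta, items_short, items_full):
    """Test-information ratio of a shortened score versus the full score."""
    i_short = sum(info_2pl(theta, a, b) for a, b in items_short)
    i_full = sum(info_2pl(theta, a, b) for a, b in items_full)
    return i_short / i_full

# Hypothetical (discrimination, location) parameters for four scored areas.
full = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (0.6, 1.0)]
short = full[:3]                 # omit the least informative scored area
re = relative_efficiency(0.0, short, full)
```

Dropping an area with low information yields a relative efficiency near 1, which is the logic behind omitting the carpal joints in the study.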
MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES
Directory of Open Access Journals (Sweden)
H. Sadeq
2016-06-01
Full Text Available In this research, different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g., WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and many measurements are difficult or costly to obtain; the lack of data can then be compensated for by introducing a priori estimates. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
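For two Gaussian height estimates of the same cell, Bayesian merging with an optional prior reduces to the standard precision-weighted closed form. The sketch below is generic (it is not the authors' exact model, and the numbers are illustrative):

```python
def fuse_heights(h1, var1, h2, var2, prior_mean=None, prior_var=None):
    """Precision-weighted (Gaussian/Bayesian) fusion of two height estimates.

    Optionally folds in a prior on the height (e.g. from a roof-smoothness
    assumption, as the abstract describes). Returns (posterior mean, variance).
    """
    num = h1 / var1 + h2 / var2
    den = 1.0 / var1 + 1.0 / var2
    if prior_mean is not None:
        num += prior_mean / prior_var
        den += 1.0 / prior_var
    return num / den, 1.0 / den

# Two DSM estimates of one cell, equal uncertainty: the fusion is their mean,
# and the posterior variance is halved.
h, v = fuse_heights(10.2, 0.25, 10.6, 0.25)
```

With unequal variances the more precise DSM dominates, which is how merging can improve on either input surface.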
Metabolic networks: a signal-oriented approach to cellular models.
Lengeler, J W
2000-01-01
Complete genomes, far advanced proteomes, and even 'metabolomes' are available for at least a few organisms, e.g., Escherichia coli. Systematic functional analyses of such complete data sets will produce a wealth of information and promise an understanding of the dynamics of complex biological networks and perhaps even of entire living organisms. Such complete and holistic descriptions of biological systems, however, will increasingly require a quantitative analysis and the help of mathematical models for simulating whole systems. In particular, new procedures are required that allow a meaningful reduction of the information derived from complex systems that will consequently be used in the modeling process. In this review the biological elements of such a modeling procedure will be described. In a first step, complex living systems must be structured into well-defined and clearly delimited functional units, the elements of which have a common physiological goal, belong to a single genetic unit, and respond to the signals of a signal transduction system that senses changes in physiological states of the organism. These functional units occur at each level of complexity and more complex units originate by grouping several lower level elements into a single, more complex unit. To each complexity level corresponds a global regulator that is epistatic over lower level regulators. After its structuring into modules (functional units), a biological system is converted in a second step into mathematical submodels that by progressive combination can also be assembled into more aggregated model structures. Such a simplification of a cell (an organism) reduces its complexity to a level amenable to present modeling capacities. The universal biochemistry, however, promises a set of rules valid for modeling biological systems, from unicellular microorganisms and cells, to multicellular organisms and to populations.
A Modeling Approach for Marine Observatory
Directory of Open Access Journals (Sweden)
Charbel Geryes Aoun
2015-02-01
Full Text Available The infrastructure of a Marine Observatory (MO) is an UnderWater Sensor Network (UW-SN) that performs collaborative monitoring tasks over a given area. This observation should take into consideration the environmental constraints, since it may require specific tools, materials and devices (cables, servers, etc.). The logical and physical components used in these observatories provide data exchanged between the various devices of the environment (Smart Sensor, Data Fusion). These components provide new functionalities or services over the long running period of the network. In this paper, we present our approach to extending modeling languages to include new domain-specific concepts and constraints. Thus, we propose a meta-model that is used to generate a new design tool (ArchiMO). We illustrate our proposal with an example from the MO domain on object localization with several acoustic sensors. Additionally, we generate the corresponding simulation code for a standard network simulator using our self-developed domain-specific model compiler. Our approach helps to reduce the complexity and time of the design activity of a Marine Observatory. It provides a way to share the different viewpoints of the designers in the MO domain and to obtain simulation results to estimate the network capabilities.
PET imaging for receptor occupancy: meditations on calculation and simplification.
Zhang, Yumin; Fox, Gerard B
2012-03-01
This invited mini-review briefly summarizes procedures and challenges of measuring receptor occupancy with positron emission tomography. Instead of describing the detailed analytic procedures of in vivo ligand-receptor imaging, the authors provide a pragmatic approach, along with personal perspectives, for conducting positron emission tomography imaging for receptor occupancy, and systematically elucidate the mathematics of receptor occupancy calculations in practical ways that can be understood with elementary algebra. The authors also share insights regarding positron emission tomography imaging for receptor occupancy to facilitate applications for the development of drugs targeting receptors in the central nervous system.
Utilizing 'hot words' in ParaConc to verify lexical simplification ...
African Journals Online (AJOL)
Lexical simplification strategies investigated are: using a superordinate or more general word, using a general word with extended meaning and using more familiar or common synonyms. The analysis gives the reader an idea about how some general words are used to translate technical language. It also displays that 'hot ...
Klein, Harriet B.; Liu-Shea, May
2009-01-01
Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…
2011-10-18
... D; Docket No. R-1433] RIN No. 7100 AD 83 Reserve Requirements of Depository Institutions: Reserves Simplification and Private Sector Adjustment Factor AGENCY: Board of Governors of the Federal Reserve System... public comment on proposed amendments to Regulation D, Reserve Requirements of Depository Institutions...
Agricultural landscape simplification and insecticide use in the Midwestern United States.
Meehan, Timothy D; Werling, Ben P; Landis, Douglas A; Gratton, Claudio
2011-07-12
Agronomic intensification has transformed many agricultural landscapes into expansive monocultures with little natural habitat. A pervasive concern is that such landscape simplification results in an increase in insect pest pressure, and thus an increased need for insecticides. We tested this hypothesis across a range of cropping systems in the Midwestern United States, using remotely sensed land cover data, data from a national census of farm management practices, and data from a regional crop pest monitoring network. We found that, independent of several other factors, the proportion of harvested cropland treated with insecticides increased with the proportion and patch size of cropland and decreased with the proportion of seminatural habitat in a county. We also found a positive relationship between the proportion of harvested cropland treated with insecticides and crop pest abundance, and a positive relationship between crop pest abundance and the proportion of cropland in a county. These results provide broad correlative support for the hypothesized link between landscape simplification, pest pressure, and insecticide use. Using regression coefficients from our analysis, we estimate that, across the seven-state region in 2007, landscape simplification was associated with insecticide application to 1.4 million hectares and an increase in direct costs totaling between $34 and $103 million. Both the direct and indirect environmental costs of landscape simplification should be considered in the design of land use policy that balances multiple ecosystem goods and services.
A nationwide modelling approach to decommissioning - 16182
International Nuclear Information System (INIS)
Kelly, Bernard; Lowe, Andy; Mort, Paul
2009-01-01
In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings, whole sites and then on up to a national scale. We describe the national vision for GIA, which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world - this is Level 1 of GIA; 2) the construction of an Operational Research (OR) model based on Level 1 to allow rapid 'what if...' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA in Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity, durations, etc. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)
A multiscale approach for modeling atherosclerosis progression.
Exarchos, Konstantinos P; Carpegianni, Clara; Rigas, Georgios; Exarchos, Themis P; Vozzi, Federico; Sakellarios, Antonis; Marraccini, Paolo; Naka, Katerina; Michalis, Lambros; Parodi, Oberdan; Fotiadis, Dimitrios I
2015-03-01
Progression of atherosclerotic process constitutes a serious and quite common condition due to accumulation of fatty materials in the arterial wall, consequently posing serious cardiovascular complications. In this paper, we assemble and analyze a multitude of heterogeneous data in order to model the progression of atherosclerosis (ATS) in coronary vessels. The patient's medical record, biochemical analytes, monocyte information, adhesion molecules, and therapy-related data comprise the input for the subsequent analysis. As indicator of coronary lesion progression, two consecutive coronary computed tomography angiographies have been evaluated in the same patient. To this end, a set of 39 patients is studied using a twofold approach, namely, baseline analysis and temporal analysis. The former approach employs baseline information in order to predict the future state of the patient (in terms of progression of ATS). The latter is based on an approach encompassing dynamic Bayesian networks whereby snapshots of the patient's status over the follow-up are analyzed in order to model the evolvement of ATS, taking into account the temporal dimension of the disease. The quantitative assessment of our work has resulted in 93.3% accuracy for the case of baseline analysis, and 83% overall accuracy for the temporal analysis, in terms of modeling and predicting the evolvement of ATS. It should be noted that the application of the SMOTE algorithm for handling class imbalance and the subsequent evaluation procedure might have introduced an overestimation of the performance metrics, due to the employment of synthesized instances. The most prominent features found to play a substantial role in the progression of the disease are: diabetes, cholesterol and cholesterol/HDL. Among novel markers, the CD11b marker of leukocyte integrin complex is associated with coronary plaque progression.
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, for students to easily understand and follow the material. There is a strong incentive in science and engineering to
Model approach brings multi-level success.
Howell, Mark
2012-08-01
In an article that first appeared in the US magazine Medical Construction & Design, Mark Howell, senior vice-president of Skanska USA Building, based in Seattle, describes the design and construction of a new nine-storey, 350,000 ft² extension to the Good Samaritan Hospital in Puyallup, Washington state. He explains how the use of an Integrated Project Delivery (IPD) approach by the key players, and extensive use of building information modelling (BIM), combined to deliver a healthcare facility that he believes should meet the needs of patients, families, and the clinical care team 'well into the future'.
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and the algorithm for developing an integrative model for it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.
Centella asiatica attenuates Aβ-induced neurodegenerative spine loss and dendritic simplification.
Gray, Nora E; Zweig, Jonathan A; Murchison, Charles; Caruso, Maya; Matthews, Donald G; Kawamoto, Colleen; Harris, Christopher J; Quinn, Joseph F; Soumyanath, Amala
2017-04-12
The medicinal plant Centella asiatica has long been used to improve memory and cognitive function. We have previously shown that a water extract from the plant (CAW) is neuroprotective against the deleterious cognitive effects of amyloid-β (Aβ) exposure in a mouse model of Alzheimer's disease, and improves learning and memory in healthy aged mice as well. This study explores the physiological underpinnings of those effects by examining how CAW, as well as chemical compounds found within the extract, modulate synaptic health in Aβ-exposed neurons. Hippocampal neurons from amyloid precursor protein over-expressing Tg2576 mice and their wild-type (WT) littermates were used to investigate the effect of CAW and various compounds found within the extract on Aβ-induced dendritic simplification and synaptic loss. CAW enhanced arborization and spine densities in WT neurons and prevented the diminished outgrowth of dendrites and loss of spines caused by Aβ exposure in Tg2576 neurons. Triterpene compounds present in CAW were found to similarly improve arborization although they did not affect spine density. In contrast caffeoylquinic acid (CQA) compounds from CAW were able to modulate both of these endpoints, although there was specificity as to which CQAs mediated which effect. These data suggest that CAW, and several of the compounds found therein, can improve dendritic arborization and synaptic differentiation in the context of Aβ exposure which may underlie the cognitive improvement observed in response to the extract in vivo. Additionally, since CAW, and its constituent compounds, also improved these endpoints in WT neurons, these results may point to a broader therapeutic utility of the extract beyond Alzheimer's disease. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Nara Somaratne
2015-02-01
Full Text Available The article “Karst aquifer recharge: Comments on ‘Characteristics of Point Recharge in Karst Aquifers’, by Adrian D. Werner, 2014, Water 6, doi:10.3390/w6123727” misrepresents some parts of Somaratne [1]. The description of the Uley South Quaternary Limestone (QL) as unconsolidated or poorly consolidated aeolianite sediments with the presence of well-mixed groundwater in Uley South [2] appears unsubstantiated. Examination of 98 lithological descriptions with corresponding drillers’ logs shows only two wells containing bands of unconsolidated sediments. In the Uley South basin, about 70% of salinity profiles obtained by electrical conductivity (EC) logging from monitoring wells show stratification. The central and north-central areas of the basin receive leakage from the Tertiary Sand (TS) aquifer, thereby influencing QL groundwater characteristics such as chemistry, age and isotope composition. The presence of conduit pathways is evident in salinity profiles taken away from TS-water-affected areas. Aquifer parameters derived from pumping tests show strong heterogeneity, a typical characteristic of karst aquifers. Uley South QL aquifer recharge is derived from three sources: diffuse recharge, point recharge from sinkholes and continuous leakage of TS water. This limits the application of recharge estimation methods such as the conventional chloride mass balance (CMB), as the basic premise of the CMB is violated. The conventional CMB is not suitable for accounting for the chloride mass balance in groundwater systems displaying an extreme range of chloride concentrations and complex mixing [3]. Oversimplification of karst aquifer systems to suit the application of the conventional CMB, or the 1-D unsaturated modelling described in Werner [2], is not a suitable use of these recharge estimation methods.
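For reference, the conventional CMB that the comment argues is inapplicable here estimates recharge from precipitation and chloride concentrations. In its simplest steady-state form it is R = P × Cl_P / Cl_GW (the values below are illustrative, not from the basin in question):

```python
def cmb_recharge(precip_mm, cl_precip, cl_groundwater):
    """Conventional chloride mass balance: R = P * Cl_P / Cl_GW (mm/yr).

    Valid only where chloride enters the aquifer solely via rainfall and the
    groundwater is well mixed -- the premise the comment argues is violated
    by point recharge and inter-aquifer leakage in karst systems.
    """
    return precip_mm * cl_precip / cl_groundwater

# 500 mm/yr rainfall at 10 mg/L chloride; groundwater at 200 mg/L chloride.
r = cmb_recharge(500.0, 10.0, 200.0)   # recharge in mm/yr
```

Any extra chloride source (such as TS leakage) inflates Cl_GW and biases this estimate low, which is the comment's central objection.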
Simulations of Heat Transport Phenomena in a Three-Dimensional Model of Knitted Fabric
Directory of Open Access Journals (Sweden)
Puszkarz A.K.
2016-09-01
Full Text Available The main goal of the current work is to analyse the three-dimensional approach for modelling knitted fabric structures for future analysis of physical properties and thermal phenomena. The introduced model assumes some simplification of morphology. First, fibres in knitted fabrics are described as monofilaments characterized by isotropic thermal properties. The current form of the considered knitted fabric is determined by morphological properties of the used monofilament and simplification of the stitch shape. This simplification was based on a particular technology for the knitting process that introduces both geometric parameters and physical material properties. Detailed descriptions of heat transfer phenomena can also be considered. A sensitivity analysis of the temperature field with respect to selected structural parameters was also performed.
Directory of Open Access Journals (Sweden)
Vitória M
2011-07-01
Full Text Available Jean B Nachega1–3, Michael J Mugavero4, Michele Zeier2, Marco Vitória5, Joel E Gallant3,6; 1Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; 2Department of Medicine and Centre for Infectious Diseases (CID), Stellenbosch University, Faculty of Health Sciences, Cape Town, South Africa; 3Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; 4Division of Infectious Diseases, Department of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA; 5HIV Department, World Health Organization, Geneva, Switzerland; 6Department of Medicine, Division of Infectious Diseases, Johns Hopkins University School of Medicine, Baltimore, MD, USA. Abstract: Since the advent of highly active antiretroviral therapy (HAART), the treatment of human immunodeficiency virus (HIV) infection has become more potent and better tolerated. While the current treatment regimens still have limitations, they are more effective, more convenient, and less toxic than regimens used in the early HAART era, and new agents, formulations and strategies continue to be developed. Simplification of therapy is an option for many patients currently being treated with antiretroviral therapy (ART). The main goals are to reduce pill burden, improve quality of life and enhance medication adherence, while minimizing short- and long-term toxicities, reducing the risk of virologic failure and maximizing cost-effectiveness. ART simplification strategies that are currently used or are under study include the use of once-daily regimens, less toxic drugs, fixed-dose coformulations and induction-maintenance approaches. Improved adherence and persistence have been observed with the adoption of some of these strategies. The role of regimen simplification has implications not only for individual patients, but also for health care policy. With increased interest in ART regimen simplification, it is critical to
A new approach to modeling aviation accidents
Rao, Arjun Harsha
views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (that contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or a set of rules, that: (1) orders the hazardous states in each accident; and, (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and, (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernable from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes for accidents. Next, I investigate the causes of inflight loss of control using both a conventional approach and using the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC. For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents)--this finding was not directly discernable from conventional analyses. Finally, I investigate the causes for improper autorotations using both a conventional approach and the state-based approach. 
The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with "24520
Fagerlund, F.; Niemi, A.
2007-01-01
The subsurface spreading behaviour of gasoline, as well as several other common soil- and groundwater pollutants (e.g. diesel, creosote), is complicated by the fact that it is a mixture of hundreds of different constituents, behaving differently with respect to e.g. dissolution, volatilisation, adsorption and biodegradation. Especially for scenarios where the non-aqueous phase liquid (NAPL) phase is highly mobile, such as for sudden spills in connection with accidents, it is necessary to simultaneously analyse the migration of the NAPL and its individual components in order to assess risks and environmental impacts. Although a few fully coupled, multi-phase, multi-constituent models exist, such models are highly complex and may be time consuming to use. A new, somewhat simplified methodology for modelling the subsurface migration of gasoline while taking its multi-constituent nature into account is therefore introduced here. Constituents with similar properties are grouped together into eight fractions. The migration of each fraction in the aqueous and gaseous phases as well as adsorption is modelled separately using a single-constituent multi-phase flow model, while the movement of the free-phase gasoline is essentially the same for all fractions. The modelling is done stepwise to allow updating of the free-phase gasoline composition at certain time intervals. The output is the concentration of the eight different fractions in the aqueous, gaseous, free gasoline and solid phases with time. The approach is evaluated by comparing it to a fully coupled multi-phase, multi-constituent numerical simulator in the modelling of a typical accident-type spill scenario, based on a tanker accident in northern Sweden. Here the PCFF method produces results similar to those of the more sophisticated, fully coupled model. The benefit of the method is that it is easy to use and can be applied to any single-constituent multi-phase numerical simulator, which in turn may have
Directory of Open Access Journals (Sweden)
Yan Li
2016-12-01
Full Text Available Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes an algorithm for extracting and simplifying building facade pieces based on morphological filtering of point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm with high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering with point cloud projection that can obtain building facade features in an image space. Third, this study designs an inverse transformation of the point cloud projection to convert building facade features from the image space to 3D space. A building facade feature detection algorithm with restricted facade planes is implemented to reconstruct façade pieces for street view service. The results of building facade extraction experiments with large volumes of MLS point clouds show that the proposed approach is suitable for various types of building facade extraction. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is the same level as the spatial resolution (0.5 m) of the point cloud.
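The projection step, reducing a 3D point cloud to a raster image, can be illustrated with a minimal gridding sketch that keeps the highest point per cell (the actual algorithm additionally uses POS orientation parameters and a fixed image geometry; the function and values below are ours):

```python
def rasterize(points, cell=0.5):
    """Project 3D points onto an (x, y) grid, keeping the highest z per cell.

    A minimal stand-in for a point-cloud-to-raster projection: each cell of
    the resulting sparse "image" stores the maximum height observed in it.
    """
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))   # cell indices at 0.5 m pitch
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid

# Two points fall in cell (0, 0); the higher one survives.
pts = [(0.1, 0.1, 2.0), (0.2, 0.3, 5.0), (1.0, 1.0, 3.0)]
img = rasterize(pts)
```

Morphological filtering (opening/closing) is then applied to such a raster to isolate facade features before projecting them back to 3D.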
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Full Text Available To build an intercultural society, awareness must be assumed in all social spheres, among which education plays a transcendental role. Education must promote spaces to form people with the virtues and capacities that allow them to live together in multicultural contexts and social diversities (sometimes unequal) in an increasingly globalized and interconnected world, and to foster the development of feelings of shared civic belonging to neighborhood, city, region and country, giving them concern for, and critical judgement of, marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while at the same time wanting to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
Systems Approaches to Modeling Chronic Mucosal Inflammation
Gao, Boning; Choudhary, Sanjeev; Wood, Thomas G.; Carmical, Joseph R.; Boldogh, Istvan; Mitra, Sankar; Minna, John D.; Brasier, Allan R.
2013-01-01
The respiratory mucosa is a major coordinator of the inflammatory response in chronic airway diseases, including asthma and chronic obstructive pulmonary disease (COPD). Signals produced by the chronic inflammatory process induce epithelial mesenchymal transition (EMT) that dramatically alters the epithelial cell phenotype. While the effects of EMT on epigenetic reprogramming and the activation of transcriptional networks are known, its effects on the innate inflammatory response are underexplored. We used a multiplex gene expression profiling platform to investigate the perturbations of the innate pathways induced by TGFβ in a primary airway epithelial cell model of EMT. EMT had dramatic effects on the induction of the innate pathway and the coupling interval of the canonical and noncanonical NF-κB pathways. Simulation experiments demonstrate that rapid, coordinated cap-independent translation of TRAF-1 and NF-κB2 is required to reduce the noncanonical pathway coupling interval. Experiments using amantadine confirmed the prediction that TRAF-1 and NF-κB2/p100 production is mediated by an IRES-dependent mechanism. These data indicate that the epigenetic changes produced by EMT induce dynamic state changes of the innate signaling pathway. Further applications of systems approaches will provide understanding of this complex phenotype through deterministic modeling and multidimensional (genomic and proteomic) profiling. PMID:24228254
ECOMOD - An ecological approach to radioecological modelling
International Nuclear Information System (INIS)
Sazykina, Tatiana G.
2000-01-01
A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations.
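The coupling the abstract describes, ecological biomass dynamics driving radionuclide accumulation, can be illustrated with a toy two-equation sketch. All rates and the Euler time step below are invented for illustration; ECOMOD's actual equations and parameter values are not given in the abstract.

```python
# Toy sketch: logistic algal biomass growth coupled to radionuclide
# uptake proportional to biomass, with a first-order loss term.
GROWTH, CAPACITY = 0.5, 10.0   # illustrative logistic parameters
UPTAKE, LOSS = 0.2, 0.05       # illustrative uptake and loss rates

def step(biomass, activity, water_conc, dt=0.01):
    """One forward-Euler step of the coupled biomass/activity equations."""
    db = GROWTH * biomass * (1 - biomass / CAPACITY)      # ecology
    da = UPTAKE * water_conc * biomass - LOSS * activity  # radionuclide
    return biomass + dt * db, activity + dt * da

b, a = 0.1, 0.0                 # small inoculum, no initial activity
for _ in range(5000):           # integrate 50 time units
    b, a = step(b, a, water_conc=1.0)
```

The point of the coupling is visible in the result: the activity in biota approaches the quasi-equilibrium `UPTAKE * water_conc * CAPACITY / LOSS` only after the biomass itself has equilibrated.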
Axial turbomachine modelling with a quasi-2-D approach. Application to gas cooled reactor transients
International Nuclear Information System (INIS)
Nicolas Tauveron; Manuel Saez; Pascal Ferrand; Francis Leboeuf
2005-01-01
Full text of publication follows: Within the framework of the international forum GenIV, CEA has selected two innovative concepts of High Temperature gas cooled Reactor. The first has a fast neutron spectrum, a robust refractory fuel and a direct cycle conversion. The second is a very high temperature reactor with a thermal neutron spectrum. Both concepts make use of technology derived from High Temperature Gas Reactors. Thermal-hydraulic performance is a key issue for the design. For transient conditions and decay heat removal situations, the thermal-hydraulic performance must remain as high as possible. In this context, all transient situations and incidental and accidental scenarios must be evaluated by a validated system code able to correctly describe, in particular, the thermal-hydraulics of the whole plant. With this type of reactor, special emphasis must be placed on turbomachinery modelling. A first step was to compute an HTGR concept using the steady-state characteristics of each element of the turbomachinery with the computer code CATHARE. In a hypothetical transient event (a 10-inch cold duct break of the primary loop, which causes a rapid depressurization and a decrease of the core mass flow rate), the results seem of great interest (as forced convection was maintained by the compressors during the entire transient) but are not sufficiently justified within a 0D modelling of the turbomachinery. A more precise description of the turbomachinery has been developed, based on a quasi-two-dimensional approach. Although this type of flow analysis is a simplification of a complex three-dimensional system, it is able to describe the behaviour of a compressor or a turbine with a better understanding than models based on component characteristics. This approach consists in solving the 2D radially averaged Navier-Stokes equations under the hypothesis of circumferentially uniform flow. The assumption of quasi-steady behaviour is made: source terms for the lift and
British tax credit simplification, the intra-household distribution of income and family consumption
Fisher, Paul
2014-01-01
The UK Government enacted simplification of its tax credit system in 2003. An interesting consequence of the reform is that tax credit payments were split between partners in couples, causing a rare 'wallet to purse' transfer. This paper presents evidence on the effects of the reform on family spending, using quasi-likelihood techniques, for a sample of low income couples with children. In areas of child goods, evidence of important spending increases is found, whereas spending decreases are...
Agricultural landscape simplification and insecticide use in the Midwestern United States
Meehan, Timothy D.; Werling, Ben P.; Landis, Douglas A.; Gratton, Claudio
2011-01-01
Agronomic intensification has transformed many agricultural landscapes into expansive monocultures with little natural habitat. A pervasive concern is that such landscape simplification results in an increase in insect pest pressure, and thus an increased need for insecticides. We tested this hypothesis across a range of cropping systems in the Midwestern United States, using remotely sensed land cover data, data from a national census of farm management practices, and data from a regional cr...
Higham, Timothy E.; Birn-Jeffery, Aleksandra V.; Collins, Clint E.; Hulsey, C. Darrin; Russell, Anthony P.
2015-01-01
Innovations permit the diversification of lineages, but they may also impose functional constraints on behaviors such as locomotion. Thus, it is not surprising that secondary simplification of novel locomotory traits has occurred several times among vertebrates and could potentially lead to exceptional divergence when constraints are relaxed. For example, the gecko adhesive system is a remarkable innovation that permits locomotion on surfaces unavailable to other animals, but has been lost or simplified in species that have reverted to a terrestrial lifestyle. We examined the functional and morphological consequences of this adaptive simplification in the Pachydactylus radiation of geckos, which exhibits multiple unambiguous losses or bouts of simplification of the adhesive system. We found that the rates of morphological and 3D locomotor kinematic evolution are elevated in those species that have simplified or lost adhesive capabilities. This finding suggests that the constraints associated with adhesion have been circumvented, permitting these species to either run faster or burrow. The association between a terrestrial lifestyle and the loss/reduction of adhesion suggests a direct link between morphology, biomechanics, and ecology. PMID:25548182
Study of the behaviour of trace elements in estuaries: experimental approaches and modeling
International Nuclear Information System (INIS)
Dange, Catherine
2002-01-01
studies of the biogeochemistry of Cd, Co and Cs in the estuarine environment and the knowledge obtained in the field. Experiments performed both in the laboratory and in situ were necessary to check the validity of the model's assumptions and to evaluate model parameters that cannot be measured directly, such as the sorption properties of natural particles. Radiotracers (109Cd, 57Co, 134Cs) were used to determine the key physico-chemical processes and environmental variables that control the speciation and fate of Cd, Co and Cs. This approach, based on spiking with various radionuclides, allowed us to evaluate the affinity constants of particles from the four estuaries for the studied metals (global intrinsic complexation and exchange constants), as well as the exchangeable particulate fraction, estimated by comparing the measured distribution coefficients of natural metals with those of their radioactive equivalents. Other parameters needed to build the model (specific surface area, concentration of active surface sites, mean intrinsic acid-base constants, ...) were independently estimated by various experimental approaches applied in the laboratory to particle samples taken throughout the estuaries (electrochemical measurements, nitrogen adsorption using the BET method, ...). The validation results indicate that, in spite of its simplifications, the model reproduces in a satisfactory way the dissolved/particulate distributions measured for Cd, Co and Cs. For predictive purposes, this type of model must be coupled with a hydro-sedimentary transport model. (author)
Risk communication: a mental models approach
National Research Council Canada - National Science Library
Morgan, M. Granger (Millett Granger)
2002-01-01
... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...
Hahl, Sayuri K.; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
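The ODE/CME contrast discussed in this abstract can be made concrete with the simplest possible expression model, a birth-death process, comparing the deterministic mass-action ODE against a stochastic simulation (the Gillespie algorithm, the standard way to sample CME trajectories). This is a sketch under stated assumptions: the paper's autoregulatory scheme is richer, and the rates `K` and `G` here are invented.

```python
import random

K, G = 10.0, 0.1  # illustrative production and degradation rates

def ode_trajectory(x0, dt=0.01, steps=10000):
    """Deterministic mass-action ODE dx/dt = K - G*x, forward Euler."""
    x = x0
    for _ in range(steps):
        x += dt * (K - G * x)
    return x

def ssa_trajectory(x0, t_end=500.0, seed=1):
    """Gillespie SSA for the same two reactions on integer copy numbers."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        a_birth, a_death = K, G * x     # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)   # waiting time to next reaction
        if rng.random() < a_birth / a_total:
            x += 1
        else:
            x -= 1
    return x

x_det = ode_trajectory(0.0)   # converges toward the fixed point K/G = 100
x_sto = ssa_trajectory(0)     # fluctuates around the same value
```

For this linear scheme the deterministic fixed point and the stationary CME mean coincide, which is exactly the "thermodynamic limit" correspondence the abstract describes; the discrepancies it reports arise once nonlinear reactions and large stoichiometric coefficients enter.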
International Nuclear Information System (INIS)
Atanassov, Krassimir; Szmidt, Eulalia; Kacprzyk, Janusz; Atanassova, Vassia
2017-01-01
A new multiagent multicriteria decision making procedure is proposed that considerably extends the existing methods by making it possible to intelligently reduce the set of criteria to be accounted for. The method employs elements of the novel Intercriteria Analysis method. The use of new tools, notably the intuitionistic fuzzy pairs and intuitionistic fuzzy index matrices provides additional information about the problem, addressed in the decision making procedure. Key words: decision making, multiagent systems, multicriteria decision making, intercriteria analysis, intuitionistic fuzzy estimation
A Discrete Monetary Economic Growth Model with the MIU Approach
Directory of Open Access Journals (Sweden)
Wei-Bin Zhang
2008-01-01
Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model, but we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving, which the Solow model lacks, and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.
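The discrete capital accumulation shared by the growth models named above can be sketched in a few lines. Note the hedge: the paper's contribution is to derive saving endogenously from MIU preferences, whereas the sketch below uses a fixed saving rate `S` as a stand-in; all parameter values are illustrative.

```python
# Illustrative discrete Solow-form dynamics: k_{t+1} = s*k_t^a + (1-d)*k_t.
ALPHA, S, DELTA = 0.3, 0.25, 0.05  # capital share, saving rate, depreciation

def next_capital(k):
    """One period of capital accumulation with Cobb-Douglas output k**ALPHA."""
    return S * k ** ALPHA + (1 - DELTA) * k

def steady_state_capital():
    """Closed form of the fixed point: k* = (s/delta)**(1/(1-alpha))."""
    return (S / DELTA) ** (1 / (1 - ALPHA))

k = 1.0
for _ in range(2000):   # iterate until (numerically) at the steady state
    k = next_capital(k)
```

In the MIU version, `S` would itself be an outcome of household optimization over consumption and real money balances rather than a constant.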
Mathematical Modelling Approach in Mathematics Education
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
…by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model exists. … provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size. Briefly summarized, the major topics discussed in the thesis include: - An introduction of the representation of functional datasets by pairs of neuronal activity patterns … - Model visualization and interpretation techniques. The simplicity of this task for linear models contrasts with the difficulties involved when dealing with nonlinear models. Finally, a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis…
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Relaxed memory models: an operational approach
Boudol, Gérard; Petri, Gustavo
2009-01-01
International audience; Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
... of mine backfill material needs special attention as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry.
Investigation on the optimal simplified model of BIW structure using FEM
Directory of Open Access Journals (Sweden)
Mohammad Hassan Shojaeefard
Full Text Available Abstract At the conceptual phases of designing a vehicle, engineers need simplified models to examine structural and functional characteristics and apply custom modifications to achieve the best vehicle design. Using a detailed finite-element (FE) model of the vehicle at early steps can be very useful; however, it is excessively time-consuming and expensive. This leads engineers to use trade-off simplified models of the body-in-white (BIW), composed of only the most decisive structural elements, which do not require extensive prior knowledge of the vehicle dimensions and constitutive materials. However, the extent and type of simplification remain ambiguous: during the simplification procedure, one faces the question of which approach and which body elements should be considered for simplification to optimize cost and time while providing acceptable accuracy. Although different approaches for optimizing the design timeframe and achieving optimal BIW designs have been proposed in the literature, a comparison between different simplification methods, and the consequent identification of the best models, which is the main focus of this research, has not yet been done. In this paper, an industrial sedan vehicle is simplified through four different simplified FE models, each of which examines the validity of the extent of simplification from a different point of view. Bending and torsional stiffness are obtained for all models, considering boundary conditions similar to the experimental tests. The acquired values are then compared to the target values from experimental tests to validate the FE modeling. Finally, the results are examined and, taking efficacy and accuracy into account, the best trade-off simplified model is presented.
An Analysis of Simplification Strategies in a Reading Textbook of Japanese as a Foreign Language
Directory of Open Access Journals (Sweden)
Kristina HMELJAK SANGAWA
2016-06-01
Full Text Available Reading is one of the bases of second language learning, and it can be most effective when the linguistic difficulty of the text matches the reader's level of language proficiency. The present paper reviews previous research on the readability and simplification of Japanese texts, and presents an analysis of a collection of simplified texts for learners of Japanese as a foreign language. The simplified texts are compared to their original versions to uncover different strategies used to make the texts more accessible to learners. The list of strategies thus obtained can serve as useful guidelines for assessing, selecting, and devising texts for learners of Japanese as a foreign language.
Directory of Open Access Journals (Sweden)
Nejc Sarabon
2010-12-01
Full Text Available The aim of this study was to define the most appropriate gait measurement protocols to be used in our future studies in the Mobility in Ageing project. A group of young healthy volunteers took part in the study. Each subject carried out a 10-metre walking test at five different speeds (preferred, very slow, very fast, slow, and fast. Each walking speed was repeated three times, making a total of 15 trials which were carried out in a random order. Each trial was simultaneously analysed by three observers using three different technical approaches: a stop watch, photo cells and electronic kinematic dress. In analysing the repeatability of the trials, the results showed that of the five self-selected walking speeds, three of them (preferred, very fast, and very slow had a significantly higher repeatability of the average walking velocity, step length and cadence than the other two speeds. Additionally, the data showed that one of the three technical methods for gait assessment has better metric characteristics than the other two. In conclusion, based on repeatability, technical and organizational simplification, this study helped us to successfully define a simple and reliable walking test to be used in the main study of the project.
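The three gait parameters whose repeatability the study compares, average walking velocity, cadence, and step length, follow directly from the walkway length, the trial time, and the step count. A minimal sketch, with made-up input values (the study's own data are not given in the abstract):

```python
# Basic gait parameters for a fixed-distance walking test.
def gait_parameters(distance_m, time_s, n_steps):
    """Return (velocity in m/s, cadence in steps/min, mean step length in m)."""
    velocity = distance_m / time_s
    cadence = n_steps / time_s * 60.0
    step_length = distance_m / n_steps
    return velocity, cadence, step_length

# Illustrative trial: 10 m walkway covered in 8 s with 16 steps.
v, cad, step = gait_parameters(10.0, 8.0, 16)
```

Repeatability across repeated trials would then be assessed on these derived quantities, which is what distinguishes the three timing methods in the study.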
Models Portability: Some Considerations about Transdisciplinary Approaches
Giuliani, Alessandro
Some critical issues about the relative portability of models and solutions across disciplinary barriers are discussed. The risks linked to the use of models and theories coming from different disciplines are highlighted, with a particular emphasis on biology. A metaphorical use of conceptual tools coming from other fields is suggested, together with the inescapable need to judge the relative merits of a model on the basis of the amount of facts it explains in its particular domain of application. Some examples of metaphorical modeling coming from biochemistry and psychobiology are briefly discussed in order to clarify the above positions.
Nonlinear Modeling of the PEMFC Based On NNARX Approach
Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo
2015-01-01
Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system whose structure is hard to estimate correctly with traditional linear modeling approaches. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...
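The defining feature of an NNARX model is its regressor: the network's input at time t is a vector of lagged outputs and lagged exogenous inputs, and the MLP learns the nonlinear map from that vector to the next output. The sketch below builds only the regressor; the model orders `na`, `nb` and the sample values are assumptions, not values from the paper, and the MLP itself is omitted.

```python
# Hypothetical sketch of the ARX regressor fed to the neural network.
def narx_regressor(y, u, t, na=2, nb=2):
    """Regressor phi(t) = [y(t-1)..y(t-na), u(t-1)..u(t-nb)]."""
    return [y[t - i] for i in range(1, na + 1)] + \
           [u[t - j] for j in range(1, nb + 1)]

y = [0.0, 0.1, 0.3, 0.6]   # past outputs, e.g. cell voltage (illustrative)
u = [1.0, 1.2, 1.1, 0.9]   # past exogenous inputs, e.g. load current (illustrative)
phi = narx_regressor(y, u, t=3)
```

Training then amounts to fitting the MLP so that `y[t] ≈ f(phi)` over recorded PEMFC data; the prediction is one-step-ahead unless the model is iterated on its own outputs.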
A visual approach for modeling spatiotemporal relations
R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares
2008-01-01
Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this reason, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification effort. In this paper we present a visual approach for
DIVERSE APPROACHES TO MODELLING THE ASSIMILATIVE ...
African Journals Online (AJOL)
This study evaluated the assimilative capacity of Ikpoba River using different approaches namely: homogeneous differential equation, ANOVA/Duncan Multiple rage test, first and second order differential equations, correlation analysis, Eigen values and eigenvectors, multiple linear regression, bootstrapping and far-field ...
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
in a computational fluid dynamics (CFD) model. The anaerobic growth of a budding yeast population in a continuously run microbioreactor was used as an example. The proposed integrated model describes the fluid flow, the local cell size and cell cycle position distributions, as well as the local concentrations of glucose...
A simplified approach to feedwater train modeling
International Nuclear Information System (INIS)
Ollat, X.; Smoak, R.A.
1990-01-01
This paper presents a method to simplify feedwater train models for power plants. A simple set of algebraic equations, based on mass and energy balances, is developed to replace complex representations of the components under certain assumptions. The method was tested and used to model the low-pressure heaters of the Sequoyah Nuclear Plant in a larger simulation.
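The kind of algebraic mass/energy balance the abstract refers to can be sketched for a single open (mixing) feedwater heater: extraction steam mixes with feedwater, and steady-state mass and energy conservation fix the outlet state. The function name and all numeric values below are illustrative assumptions, not plant data from the paper.

```python
# Steady-state balances for a hypothetical open feedwater heater:
#   mass:   m_out = m_fw + m_ext
#   energy: m_out * h_out = m_fw * h_fw + m_ext * h_ext
def mixing_heater_outlet(m_fw, h_fw, m_ext, h_ext):
    """Return outlet mass flow (kg/s) and mixed enthalpy (kJ/kg)."""
    m_out = m_fw + m_ext
    h_out = (m_fw * h_fw + m_ext * h_ext) / m_out
    return m_out, h_out

# 450 kg/s feedwater at 500 kJ/kg mixed with 50 kg/s steam at 2700 kJ/kg.
m_out, h_out = mixing_heater_outlet(450.0, 500.0, 50.0, 2700.0)
```

Chaining a few such component equations in place of detailed heat exchanger models is the essence of the simplification the paper describes.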
Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier
2016-03-01
Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
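The multinomial step described in this abstract, partitioning contaminating cells among strain classes with different pathogenic potential, can be sketched as a small Monte Carlo routine. All probabilities below are invented for illustration; the paper fits its distributions to field and questionnaire data.

```python
import random

# Hypothetical strain-class prevalences and per-class probability that a
# strain carries the enterotoxin A gene (illustrative values only).
STRAIN_PROBS = [0.5, 0.3, 0.2]
TOX_PROB = [0.8, 0.2, 0.0]

def toxigenic_cells(n_cells, seed=42):
    """Assign each cell to a strain class, then count enterotoxigenic cells."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_cells):
        strain = rng.choices(range(3), weights=STRAIN_PROBS)[0]
        if rng.random() < TOX_PROB[strain]:
            count += 1
    return count

# Expected toxigenic fraction: 0.5*0.8 + 0.3*0.2 + 0.2*0.0 = 0.46.
n_tox = toxigenic_cells(10000)
```

Only the toxigenic subpopulation would then feed the enterotoxin production module, which is how the multinomial layer propagates strain-level uncertainty into the final exposure estimate.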
The workshop on ecosystems modelling approaches for South ...
African Journals Online (AJOL)
roles played by models in the OMP approach, and raises questions about the costs of the data collection. (in particular) needed to apply a multispecies modelling approach in South African fisheries management. It then summarizes the deliberations of workshops held by the Scientific Committees of two international ma-.
Galluzzi, Claudia; Bureca, Ivana; Guariglia, Cecilia; Romani, Cristina
2015-05-01
Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors that simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties - such as a predominance of errors on consonants rather than vowels - but not with other measures - such as rate of words reproduced correctly or rates of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they are different in different patients. Instead, different patterns: (1) provide evidence for separate impairments and the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis since they only apply in patients with associated articulatory difficulties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Chemotaxis: A Multi-Scale Modeling Approach
Bhowmik, Arpan
We are attempting to build a working simulation of population-level self-organization in Dictyostelium discoideum cells by combining existing models for chemo-attractant production and detection with phenomenological motility models. Our goal is to create a computationally viable model framework within which a population of cells can self-generate chemo-attractant waves and self-organize based on the directional cues of those waves. The work is a direct continuation of our previous work published in Physical Biology, titled ``Excitable waves and direction-sensing in Dictyostelium Discoideum: steps towards a chemotaxis model''. This is a work in progress; no official draft/paper exists yet.
An Integrated Approach to Modeling Evacuation Behavior
2011-02-01
A spate of recent hurricanes and other natural disasters has drawn a lot of attention to the evacuation decisions of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
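A term-time forced epidemic model with switching parameters, the class of model this volume investigates, can be sketched as an SIR system whose transmission rate switches between a high (school term) and a low (vacation) value. The parameter values and the forward-Euler integration below are illustrative assumptions, not taken from the book.

```python
# Switched SIR sketch: beta(t) jumps between two values on a fixed schedule.
def term_time_beta(t, period=20.0, hi=0.5, lo=0.2):
    """High transmission in the first half of each period, low in the second."""
    return hi if (t % period) < period / 2 else lo

def switched_sir(beta_schedule, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, t_end=100.0):
    """Forward-Euler SIR with a time-switched transmission rate."""
    s, i, t = s0, i0, 0.0
    while t < t_end:
        beta = beta_schedule(t)
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        t += dt
    return s, i

s_end, i_end = switched_sir(term_time_beta)
```

The eradication/persistence conditions the book derives amount to statements about such trajectories, e.g. whether the infected fraction decays for all admissible switching schedules.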
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal
A consortium approach to glass furnace modeling.
Energy Technology Data Exchange (ETDEWEB)
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
Fractal approach to computer-analytical modelling of tree crown
International Nuclear Information System (INIS)
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
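A fractal measure of the kind invoked above is commonly estimated by box counting. The sketch below shows the generic procedure for a 2D point set; the representation of a crown as points, the test data, and the box sizes are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2D point set:
    count occupied boxes N(s) for each box size s, then fit
    log N(s) ~ -D log s."""
    pts = np.asarray(points, dtype=float)
    mn = pts.min(axis=0)
    span = (pts.max(axis=0) - mn).max() + 1e-12
    pts = (pts - mn) / span                    # normalize into the unit square
    counts = []
    for s in sizes:
        boxes = np.floor(pts / s).astype(int)  # box index of each point
        counts.append(len({tuple(b) for b in boxes}))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a straight line should have dimension close to 1
line = np.c_[np.linspace(0, 1, 2000), np.linspace(0, 1, 2000)]
sizes = [1/4, 1/8, 1/16, 1/32]
print(round(box_count_dimension(line, sizes), 1))
```

For a digitized crown silhouette the same routine would be applied to the foliage pixels, with the fitted exponent serving as the crown's fractal measure.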
A new model for quantum games based on the Marinatto–Weber approach
International Nuclear Information System (INIS)
Frąckiewicz, Piotr
2013-01-01
The Marinatto–Weber approach to quantum games is a straightforward way to apply the power of quantum mechanics to classical game theory. In the simplest case, the quantum scheme is that players manipulate their own qubits of a two-qubit state either with the identity operator or the Pauli operator σ_x. However, such a simplification of the scheme raises doubt as to whether it could really reflect a quantum game. In this paper we put forward examples which may constitute arguments against the present form of the Marinatto–Weber scheme. Next, we modify the scheme to eliminate the undesirable properties of the protocol by extending the players’ strategy sets. (paper)
Phytoplankton as Particles - A New Approach to Modeling Algal Blooms
2013-07-01
ERDC/EL TR-13-13, Civil Works Basic Research Program: Phytoplankton as Particles – A New Approach to Modeling Algal Blooms. Carl F. Cerco and Mark R. Noel, Environmental Laboratory, U.S. Army Engineer Research... Phytoplankton blooms can be modeled by treating phytoplankton as discrete particles capable of self-induced transport via buoyancy regulation or other
Contribution of a companion modelling approach
African Journals Online (AJOL)
2009-09-16
This paper describes the role of participatory modelling and simulation as a way to provide a meaningful framework to enable actors to understand the interdependencies in peri-urban catchment management. A role-playing game, connecting the quantitative and qualitative dynamics of the resources with ...
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
Muhammad Zaka Emad
2017-07-24
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by ...
Energy and development : A modelling approach
van Ruijven, B.J.|info:eu-repo/dai/nl/304834521
2008-01-01
Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
Muhammad Zaka Emad
2017-07-24
... pulse is applied as a stress history on the CRF stope. Blast wave data obtained from the on-site monitoring are very complex and require processing before being interpreted and used in numerical models. Generally, mining companies hire geophysics experts for the interpretation of such data. The blast wave ...
A new approach to model mixed hydrates
Czech Academy of Sciences Publication Activity Database
Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.
2018-01-01
Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords: gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983
Energy and Development. A Modelling Approach
International Nuclear Information System (INIS)
Van Ruijven, B.J.
2008-01-01
Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore possible future developments of the global energy system and identify policies to prevent potential problems. Such estimations of future energy use in developing countries are very uncertain. Crucial factors in the future energy use of these regions are electrification, urbanisation and income distribution, issues that are generally not included in present-day global energy models. Model simulations in this thesis show that current insight into developments in low-income regions leads to a wide range of expected 2030 energy use in the residential and transport sectors. This is mainly caused by the many different model calibration options that result from the limited data availability for model development and calibration. We developed a method to identify the impact of model calibration uncertainty on future projections. We developed a new model for residential energy use in India, in collaboration with the Indian Institute of Science. Experiments with this model show that the impact of electrification and income distribution is less univocal than often assumed. The use of fuelwood, with related health risks, can decrease rapidly if the income of poor groups increases. However, there is a trade-off in terms of CO2 emissions because these groups gain access to electricity and the ownership of appliances increases. Another issue is the potential role of new technologies in developing countries: will they use the opportunities of leapfrogging? We explored the potential role of hydrogen, an energy carrier that might play a central role in a sustainable energy system. We found that hydrogen only plays a role before 2050 under very optimistic assumptions. Regional energy
Energy and Development. A Modelling Approach
Energy Technology Data Exchange (ETDEWEB)
Van Ruijven, B.J.
2008-12-17
Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore possible future developments of the global energy system and identify policies to prevent potential problems. Such estimations of future energy use in developing countries are very uncertain. Crucial factors in the future energy use of these regions are electrification, urbanisation and income distribution, issues that are generally not included in present-day global energy models. Model simulations in this thesis show that current insight into developments in low-income regions leads to a wide range of expected 2030 energy use in the residential and transport sectors. This is mainly caused by the many different model calibration options that result from the limited data availability for model development and calibration. We developed a method to identify the impact of model calibration uncertainty on future projections. We developed a new model for residential energy use in India, in collaboration with the Indian Institute of Science. Experiments with this model show that the impact of electrification and income distribution is less univocal than often assumed. The use of fuelwood, with related health risks, can decrease rapidly if the income of poor groups increases. However, there is a trade-off in terms of CO2 emissions because these groups gain access to electricity and the ownership of appliances increases. Another issue is the potential role of new technologies in developing countries: will they use the opportunities of leapfrogging? We explored the potential role of hydrogen, an energy carrier that might play a central role in a sustainable energy system. We found that hydrogen only plays a role before 2050 under very optimistic assumptions. Regional energy
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Modelling thermal plume impacts - Kalpakkam approach
International Nuclear Information System (INIS)
Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.
2002-01-01
A good understanding of temperature patterns in the receiving waters is essential for knowing the heat dissipation from thermal plumes originating at coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site are determined and analysed. It is observed that the seasonal current reversal in the near-shore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)
Modelling approach for photochemical pollution studies
International Nuclear Information System (INIS)
Silibello, C.; Catenacci, G.; Calori, G.; Crapanzano, G.; Pirovano, G.
1996-01-01
The comprehension of the relationships between primary pollutant emissions and secondary pollutant concentration and deposition is necessary to design policies and strategies for the maintenance of a healthy environment. The use of mathematical models is a powerful tool to assess the effect of emissions and of the physical and chemical transformations of pollutants on air quality. A photochemical model, Calgrid, developed by CARB (California Air Resources Board), has been used to test the effect of different meteorological and air quality scenarios on ozone concentration levels. This way we can evaluate the influence of these conditions and determine the most important chemical species and reactions in the atmosphere. Ozone levels are strongly related to reactive hydrocarbon concentrations and to the solar radiation flux
Colour texture segmentation using modelling approach
Czech Academy of Sciences Publication Activity Database
Haindl, Michal; Mikeš, Stanislav
2005-01-01
Roč. 3687, č. - (2005), s. 484-491 ISSN 0302-9743. [International Conference on Advances in Pattern Recognition /3./. Bath, 22.08.2005-25.08.2005] R&D Projects: GA MŠk 1M0572; GA AV ČR 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : colour texture segmentation * image models * segmentation benchmark Subject RIV: BD - Theory of Information
Tumour resistance to cisplatin: a modelling approach
International Nuclear Information System (INIS)
Marcu, L; Bezak, E; Olver, I; Doorn, T van
2005-01-01
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure
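The cisplatin resistance factor defined above is simply a ratio of surviving cell counts at equal treatment time. The toy illustration below assumes plain exponential kill kinetics with invented rate constants; the paper's Monte Carlo tumour model is far richer, so this only demonstrates the CRF definition itself.

```python
import math

def surviving_cells(n0, kill_rate, t):
    """Exponential-kill toy model: cells surviving after treatment time t.
    (Illustrative assumption, not the paper's Monte Carlo tumour model.)"""
    return n0 * math.exp(-kill_rate * t)

def cisplatin_resistance_factor(n0, k_sensitive, k_resistant, t):
    """CRF = surviving resistant cells / surviving sensitive cells,
    evaluated after the same treatment time (definition from the abstract)."""
    return (surviving_cells(n0, k_resistant, t) /
            surviving_cells(n0, k_sensitive, t))

# Resistant cells killed at half the rate of sensitive cells (invented rates)
print(round(cisplatin_resistance_factor(1e6, 0.4, 0.2, 10.0), 2))  # 7.39
```

With these rates the CRF equals exp((k_sensitive - k_resistant) * t) = exp(2), so resistance compounds multiplicatively with treatment time, consistent with the cumulative character of drug resistance described in the abstract.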
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model using interpretive structural modelling (ISM) for risks affecting offshore outsourcing is developed and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall help managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
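In MICMAC analysis, a factor's driving power is the row sum of the final (transitively closed) reachability matrix and its dependence is the column sum. The sketch below uses an invented five-risk reachability matrix loosely labeled after the risk categories in the abstract; the entries are illustrative, not the paper's data.

```python
import numpy as np

# Illustrative, transitively closed reachability matrix for five risks
# (1 = row risk drives / leads to column risk, incl. self-reachability).
risks = ["political", "cultural", "regulatory", "opportunistic", "structural"]
R = np.array([
    [1, 1, 1, 1, 1],   # political risk reaches every other risk
    [0, 1, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1],
])

driving_power = R.sum(axis=1)   # how many risks each risk influences
dependence = R.sum(axis=0)      # how many risks influence it
for name, dp, dep in zip(risks, driving_power, dependence):
    print(f"{name:13s} driving={dp} dependence={dep}")
```

High driving power with low dependence marks a "driver" risk (here, the political risk row), matching the paper's finding that such risks belong at the base of the ISM hierarchy.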
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Full Text Available
Several research efforts have been coordinated by the academic program of Agricultural Business Management at the University De La Salle (Bogotá D.C.) toward the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and considers the concept of sustainable food and agriculture development, including food security and nutrition, in an integrated interdisciplinary context with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.
Smeared crack modelling approach for corrosion-induced concrete damage
DEFF Research Database (Denmark)
Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik
2017-01-01
In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as the formation and propagation of micro- and macrocracks were ...
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Jackiw-Pi model: A superfield approach
Gupta, Saurabh
2014-12-01
We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_(a)b corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of any non-Abelian 1-form gauge theory, emerges naturally within this formalism and plays an instrumental role in providing the proof of the absolute anticommutativity of s_(a)b.
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
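A multinomial processing tree of the general kind described above can be written down directly: each response probability is a sum over branch probabilities. The tree structure and parameter values below are simplified assumptions for illustration, not the paper's exact Moral Categorization Task model.

```python
def predicted_wrong_response(I, U, B, prime_wrong):
    """P('morally wrong' response) for a neutral target action under a
    simplified processing tree (assumed structure): with probability I the
    intentional judgment of the target drives the response; failing that,
    with probability U the prime's moral status drives it; otherwise a
    response bias B toward 'wrong' applies."""
    target_is_wrong = 0.0                                # neutral target action
    p = I * target_is_wrong                              # Intentional Judgment
    p += (1 - I) * U * (1.0 if prime_wrong else 0.0)     # Unintentional Judgment
    p += (1 - I) * (1 - U) * B                           # Response Bias
    return p

# With weak intentional control, morally wrong primes pull judgments
# of a neutral action toward 'wrong' (parameter values are invented)
p_wrong_prime = predicted_wrong_response(I=0.3, U=0.5, B=0.4, prime_wrong=True)
p_neutral_prime = predicted_wrong_response(I=0.3, U=0.5, B=0.4, prime_wrong=False)
print(p_wrong_prime > p_neutral_prime)  # True
```

Fitting such a model means choosing I, U, B to maximize the multinomial likelihood of observed response counts across prime and target conditions, which is how the component processes are dissociated empirically.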
Keyring models: An approach to steerability
Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun
2018-02-01
If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.
Functional RG approach to the Potts model
Ben Alì Zinati, Riccardo; Codello, Alessandro
2018-01-01
The critical behavior of the (n+1)-state Potts model in d dimensions is studied with functional renormalization group techniques. We devise a general method to derive β-functions for continuous values of d and n, and we write the flow equation for the effective potential (LPA') when n is fixed. We calculate several critical exponents, which are found to be in good agreement with Monte Carlo simulations and ε-expansion results available in the literature. In particular, we focus on Percolation (n → 0) and Spanning Forest (n → −1), which are the only non-trivial universality classes in d = 4, 5 and where our methods converge faster.
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
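The search described above, a genetic algorithm over candidate reaction sets, can be sketched generically. The bitmask encoding and the toy fitness below (bit overlap with a fixed "desired" set) stand in for scoring a reaction network's simulated dynamics; they are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(1)

TARGET = 0b1011011101      # toy "desired reaction set" encoded as a bitmask
N_BITS = 10                # one bit per candidate reversible reaction

def fitness(genome):
    """Toy fitness: how many reactions match the desired set. A stand-in
    for scoring the dynamical behavior of the encoded reaction network."""
    return sum(1 for i in range(N_BITS)
               if (genome >> i) & 1 == (TARGET >> i) & 1)

def mutate(genome, rate=0.1):
    for i in range(N_BITS):
        if random.random() < rate:
            genome ^= 1 << i   # flip: add/remove this reaction
    return genome

def crossover(a, b):
    point = random.randrange(1, N_BITS)
    mask = (1 << point) - 1
    return (a & mask) | (b & ~mask)

pop = [random.getrandbits(N_BITS) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # elitist truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=fitness)
print(fitness(best))
```

In the actual study the fitness of a genome would come from integrating the encoded reaction network's kinetics and comparing the resulting dynamics to the desired protocell behavior.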
Fusion modeling approach for novel plasma sources
International Nuclear Information System (INIS)
Melazzi, D; Manente, M; Pavarin, D; Cardinali, A
2012-01-01
The physics involved in the coupling, propagation and absorption of RF helicon waves (electronic whistler) in low temperature Helicon plasma sources is investigated by solving the 3D Maxwell-Vlasov model equations using a WKB asymptotic expansion. The reduced set of equations is formally Hamiltonian and allows for the reconstruction of the wave front of the propagating wave, monitoring along the calculation that the WKB expansion remains satisfied. This method can be fruitfully employed in a new investigation of the power deposition mechanisms involved in common Helicon low temperature plasma sources when a general confinement magnetic field configuration is allowed, unveiling new physical insight in the wave propagation and absorption phenomena and stimulating further research for the design of innovative and more efficient low temperature plasma sources. A brief overview of this methodology and its capabilities has been presented in this paper.
Carbonate rock depositional models: A microfacies approach
Energy Technology Data Exchange (ETDEWEB)
Carozzi, A.V.
1988-01-01
Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.
Wind Turbine Control: Robust Model Based Approach
DEFF Research Database (Denmark)
Mirzaei, Mahmood
. This is because, on the one hand, control methods can decrease the cost of energy by keeping the turbine close to its maximum efficiency. On the other hand, they can reduce structural fatigue and therefore increase the lifetime of the wind turbine. The power produced by a wind turbine is proportional...... to the square of its rotor radius; therefore it seems reasonable to increase the size of the wind turbine in order to capture more power. However, as the size increases, the mass of the blades increases with the cube of the rotor size. This means that, in order to keep the structural feasibility and mass of the whole structure...... reasonable, the ratio of mass to size should be reduced. This trend results in more flexible structures. Control of the flexible structure of a wind turbine in a wind field of a stochastic nature is very challenging. In this thesis we examine a number of robust model based methods for wind turbine...
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, development of, and validation process for risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
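As a minimal illustration of the "statistical approach" side of the comparison, the sketch below fits a one-predictor logistic risk model by plain gradient descent on synthetic exposure/outcome data (the data, variable names, and hyperparameters are invented for illustration; the reviewed studies used real clinical data and richer models):

```python
import math, random

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Gradient-descent logistic regression: a bare-bones statistical risk model."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted risk
            gw += (p - y) * x                          # gradient of log-loss
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict_risk(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Synthetic data: outcome probability rises with exposure level x.
rng = random.Random(0)
xs = [rng.uniform(0, 4) for _ in range(200)]
ys = [1 if rng.random() < 1 / (1 + math.exp(-(1.5 * x - 3))) else 0 for x in xs]

w, b = fit_logistic(xs, ys)
print(predict_risk(w, b, 0.5) < predict_risk(w, b, 3.5))  # risk increases with exposure
```

The artificial neural network alternative discussed in the paper generalizes this by stacking such units with nonlinear hidden layers; validation would then compare the two on held-out data.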
Reduction of sources of error and simplification of the Carbon-14 urea breath test
International Nuclear Information System (INIS)
Bellon, M.S.
1997-01-01
Full text: Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and identification of error sources in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months); compatibility of the scintillation cocktail and absorption solution (with particular regard to photoluminescence and chemiluminescence); reduction in chemical quenching (moisture reduction); understanding of the counting hardware and its relevance; and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially
Directory of Open Access Journals (Sweden)
Restelli U
2017-03-01
Full Text Available Umberto Restelli,1,2 Massimiliano Fabbiani,3 Simona Di Giambenedetto,3 Carmela Nappi,4 Davide Croce,1,2 1Centre for Research on Health Economics, Social and Health Care Management (CREMS), LIUC – Università Cattaneo, Castellanza, Italy; 2School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa; 3Institute of Clinical Infectious Diseases, Catholic University of Sacred Heart, 4Health Economics, Bristol-Myers Squibb S.r.l., Rome, Italy Background: This analysis aimed at evaluating the impact of a therapeutic strategy of treatment simplification to atazanavir (ATV) + ritonavir (r) + lamivudine (3TC) in virologically suppressed patients receiving ATV+r+2 nucleoside reverse transcriptase inhibitors (NRTIs) on the budget of the Italian National Health Service (NHS). Methods: A budget impact model with a 5-year time horizon was developed based on the clinical data of the Atlas-M trial at 48 weeks (in terms of percentage of patients experiencing virologic failure and adverse events), from the Italian NHS perspective. A scenario in which the simplification strategy was not considered was compared with three scenarios in which, among a target population of 1,892 patients, different simplification strategies were taken into consideration in terms of the percentage of patients simplified on a yearly basis among those eligible for simplification. The costs considered were direct medical costs related to antiretroviral drugs, adverse events management, and monitoring activities. Results: The percentage of patients of the target population receiving ATV+r+3TC varies among the scenarios and is between 18.7% and 46.9% in year 1, increasing up to 56.3% and 84.4% in year 5. The antiretroviral treatment simplification strategy considered would lead to lower costs for the Italian NHS over a 5-year time horizon of between –28.7 million € and –16.0 million €, with a reduction of costs between –22.1% (–3.6 million €) and
A dual model approach to ground water recovery trench design
International Nuclear Information System (INIS)
Clodfelter, C.L.; Crouch, M.S.
1992-01-01
The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes
Dainese, Matteo; Isaac, Nick J B; Powney, Gary D; Bommarco, Riccardo; Öckinger, Erik; Kuussaari, Mikko; Pöyry, Juha; Benton, Tim G; Gabriel, Doreen; Hodgson, Jenny A; Kunin, William E; Lindborg, Regina; Sait, Steven M; Marini, Lorenzo
2017-08-01
Land-use change is one of the primary drivers of species loss, yet little is known about its effect on other components of biodiversity that may be at risk. Here, we ask whether, and to what extent, landscape simplification, measured as the percentage of arable land in the landscape, disrupts the functional and phylogenetic association between primary producers and consumers. Across seven European regions, we inferred the potential associations (functional and phylogenetic) between host plants and butterflies in 561 seminatural grasslands. Local plant diversity showed a strong bottom-up effect on butterfly diversity in the most complex landscapes, but this effect disappeared in simple landscapes. The functional associations between plants and butterflies are, therefore, the result of processes that act not only locally but also depend on the surrounding landscape context. Similarly, landscape simplification reduced the phylogenetic congruence among host plants and butterflies, indicating that closely related butterflies become more generalist in the resources used. These processes occurred without any detectable change in species richness of plants or butterflies along the gradient of arable land. The structural properties of ecosystems are experiencing substantial erosion, with potentially pervasive effects on ecosystem functions and future evolutionary trajectories. Loss of interacting species might trigger cascading extinction events and reduce the stability of trophic interactions, as well as influence the longer term resilience of ecosystem functions. This underscores a growing realization that species richness is a crude and insensitive metric and that both functional and phylogenetic associations, measured across multiple trophic levels, are likely to provide additional and deeper insights into the resilience of ecosystems and the functions they provide. © 2017 John Wiley & Sons Ltd.
Simple queueing approach to segregation dynamics in Schelling model
Sobkowicz, Pawel
2007-01-01
A simple queueing approach for the segregation of agents in a modified one-dimensional Schelling segregation model is presented. The goal is to arrive at a simple formula for the number of unhappy agents remaining after the segregation.
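The quantity the abstract targets, the number of unhappy agents left once segregation settles, can be sketched with a direct one-dimensional Schelling-style simulation (the happiness rule, swap dynamics, and parameters below are illustrative assumptions, not the paper's queueing formulation):

```python
import random

def unhappy(agents):
    """Indices of agents with no same-type neighbour on the line."""
    n = len(agents)
    bad = []
    for i, a in enumerate(agents):
        neigh = [agents[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if a not in neigh:
            bad.append(i)
    return bad

def segregate(agents, rng, max_steps=10000):
    """Swap a random unhappy agent with a random agent, keeping the swap
    only if it does not increase the total number of unhappy agents."""
    agents = list(agents)
    for _ in range(max_steps):
        bad = unhappy(agents)
        if not bad:
            break
        i = rng.choice(bad)
        j = rng.randrange(len(agents))
        before = len(bad)
        agents[i], agents[j] = agents[j], agents[i]
        if len(unhappy(agents)) > before:
            agents[i], agents[j] = agents[j], agents[i]  # undo bad swap
    return agents

rng = random.Random(2)
line = [rng.choice("AB") for _ in range(40)]
after = segregate(line, rng)
print(len(unhappy(line)), len(unhappy(after)))  # unhappiness does not increase
```

A closed-form queueing formula, as pursued in the paper, would predict the second number without running the dynamics.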
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have been recently considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on other components. The data used in this study consists of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all the five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.
A systemic approach for modeling soil functions
Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute
2018-03-01
The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling of reductionist yet observable indicators for soil functions with detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Towards Translating Graph Transformation Approaches by Model Transformations
Hermann, F.; Kastenberg, H.; Modica, T.; Karsai, G.; Taentzer, G.
2006-01-01
Recently, many researchers have been working on semantics-preserving model transformation. In the field of graph transformation one can think of translating graph grammars written in one approach to a behaviourally equivalent graph grammar in another approach. In this paper we translate graph grammars
An Almost Integration-free Approach to Ordered Response Models
van Praag, B.M.S.; Ferrer-i-Carbonell, A.
2006-01-01
In this paper we propose an alternative approach to the estimation of ordered response models. We show that the Probit method may be replaced by a simple OLS approach, called P(robit)OLS, without any loss of efficiency. This method can be generalized to the analysis of panel data. For large-scale
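One way to read the P(robit)OLS idea is that ordered categories are first cardinalised to the conditional means of a standard normal between frequency-implied thresholds, after which ordinary OLS applies. The sketch below computes such scores from category counts; it is a hedged reconstruction of that cardinalisation step, and the exact treatment in the paper may differ:

```python
from statistics import NormalDist

def pols_scores(counts):
    """Map ordered categories to conditional means of a standard normal:
    thresholds come from cumulative sample frequencies, and
    E[Z | a < Z < b] = (pdf(a) - pdf(b)) / (cdf(b) - cdf(a))."""
    nd = NormalDist()
    total = sum(counts)
    cum = 0
    cuts = [float("-inf")]
    for c in counts[:-1]:
        cum += c
        cuts.append(nd.inv_cdf(cum / total))
    cuts.append(float("inf"))
    scores = []
    for k in range(len(counts)):
        a, b = cuts[k], cuts[k + 1]
        pa = 0.0 if a == float("-inf") else nd.pdf(a)
        pb = 0.0 if b == float("inf") else nd.pdf(b)
        ca = 0.0 if a == float("-inf") else nd.cdf(a)
        cb = 1.0 if b == float("inf") else nd.cdf(b)
        scores.append((pa - pb) / (cb - ca))
    return scores

# Three ordered response categories observed with frequencies 30/50/20:
scores = pols_scores([30, 50, 20])
print(scores)  # increasing scores with frequency-weighted mean zero
```

The ordered response is then replaced by these scores and regressed on the covariates by OLS, which is what makes the approach "almost integration-free".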
Optimizing technology investments: a broad mission model approach
Shishko, R.
2003-01-01
A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.
A generalized quarter car modelling approach with frame flexibility ...
Indian Academy of Sciences (India)
... mass distribution and damping. Here we propose a generalized quarter-car modelling approach, incorporating both the frame as well as other-wheel ground contacts. Our approach is linear, uses Laplace transforms, involves vertical motions of key points of interest and has intermediate complexity with improved realism.
Numerical approaches to expansion process modeling
Directory of Open Access Journals (Sweden)
G. V. Alekseev
2017-01-01
Full Text Available Forage production is currently undergoing a period of intensive renovation and introduction of the most advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, fluidized-bed explosion, infrared treatment of cereals and legumes followed by flattening, and one-time or two-time granulation of purified whole grain without humidification in matrix presses followed by grinding of the granules are used more and more often. These methods require special apparatuses, machines and auxiliary equipment, created on the basis of different methods of compiled mathematical models. In roasting, simulating the heat fields arising in the working chamber provides conditions for the decomposition of a portion of the starch to monosaccharides, which makes the grain sweetish; however, due to protein denaturation, the digestibility of the protein and the availability of amino acids decrease somewhat. Grain is roasted mainly for young animals in order to teach them to eat food at an early age, stimulate the secretory activity of digestion, and better develop the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various types of fungi, which largely avoids possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine and lentils are also used in feeding animals. These feeds are preliminarily ground, and then cooked for 1 hour or steamed for 30–40 minutes in the feed mill. Such processing of feeds allows inactivating their anti-nutrients, which otherwise reduce the effectiveness of their use. After processing, legumes are used as protein supplements in an amount of 25–30% of the total nutritional value of the diet. However, only grain of good quality should be cooked or steamed. A poor-quality grain that has been stored for a long time and damaged by pathogenic microflora is subject to
Graphical approach to model reduction for nonlinear biochemical networks.
Holland, David O; Krainak, Nicholas C; Saucerman, Jeffrey J
2011-01-01
Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.
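The phase-loop idea can be sketched with a two-variable fast/slow toy system (the system, rates, and bucketing heuristic below are invented; the paper applies the method to a 25-variable signaling model). A fast variable that equilibrates quickly traces a nearly closed loop against the slow variable, signalling that it can be "concentration-clamped" to an algebraic function:

```python
import math

def full_model(k_fast, dt=0.001, steps=6283):
    """Slow variable x = sin(t); fast variable y chases f(x) = x**2 at rate k_fast."""
    xs, ys, y = [], [], 0.0
    for i in range(steps):                 # one full period of the slow drive
        x = math.sin(i * dt)
        y += k_fast * (x * x - y) * dt     # forward-Euler step of the fast ODE
        xs.append(x)
        ys.append(y)
    return xs, ys

def loop_width(xs, ys):
    """Spread of y values observed at similar x: a crude hysteresis measure."""
    buckets = {}
    for x, y in zip(xs, ys):
        buckets.setdefault(round(x, 1), []).append(y)
    return max(max(v) - min(v) for v in buckets.values())

xs, ys = full_model(k_fast=200.0)
# With fast kinetics the loop is far narrower than for a slow y, so the ODE
# for y can be replaced by the algebraic clamp y = x**2:
err = max(abs(y - x * x) for x, y in zip(xs, ys))
print(loop_width(xs, ys), err)
```

Comparing loop widths across rate constants is the visual analogue of the timescale separation that Jacobian-based decompositions compute algebraically.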
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means for validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Avoiding simplification strategies by introducing multi-objectiveness in real world problems
Rietveld, C.J.C.; Hendrix, G.P.; Berkers, F.T.H.M.; Croes, N.N.; Smit, S.K.
2010-01-01
In business analysis, models are sometimes oversimplified. We pragmatically approach many problems with a single financial objective and include monetary values for non-monetary variables. We enforce constraints which may not be as strict in reality. Based on a case in distributed energy production,
Automatic simplification of systems of reaction-diffusion equations by a posteriori analysis.
Maybank, Philip J; Whiteley, Jonathan P
2014-02-01
Many mathematical models in biology and physiology are represented by systems of nonlinear differential equations. In recent years these models have become increasingly complex in order to explain the enormous volume of data now available. A key role of modellers is to determine which components of the model have the greatest effect on a given observed behaviour. An approach for automatically fulfilling this role, based on a posteriori analysis, has recently been developed for nonlinear initial value ordinary differential equations [J.P. Whiteley, Model reduction using a posteriori analysis, Math. Biosci. 225 (2010) 44-52]. In this paper we extend this model reduction technique for application to both steady-state and time-dependent nonlinear reaction-diffusion systems. Exemplar problems drawn from biology are used to demonstrate the applicability of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.
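The underlying idea, simplify a component, re-solve, and check the effect on the observed behaviour of interest, can be sketched as follows. This is a toy forward-Euler illustration of the workflow only, not the a posteriori error estimator of Whiteley's method; the system and tolerances are invented:

```python
def solve(rhs, y0, dt=0.01, steps=1000):
    """Forward-Euler solver for the autonomous system dy/dt = rhs(y)."""
    y = list(y0)
    for _ in range(steps):
        dy = rhs(y)
        y = [yi + dt * di for yi, di in zip(y, dy)]
    return y

# Full model: species b decays fast and feeds back only weakly into a.
def full(y):
    a, b = y
    return [-0.5 * a + 0.01 * b, -10.0 * b]

# Candidate reduction: drop the weak b -> a coupling entirely.
def reduced(y):
    a, b = y
    return [-0.5 * a, -10.0 * b]

full_a = solve(full, [1.0, 1.0])[0]
red_a = solve(reduced, [1.0, 1.0])[0]
error = abs(full_a - red_a)
print(error)  # small: the coupling term has little effect on the observed output
```

An automatic procedure, as in the paper, would rank all candidate simplifications by such error measures against a tolerance rather than testing one by hand.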
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Reusable Component Model Development Approach for Parallel and Distributed Simulation
Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng
2014-01-01
Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind with simulation platforms closely. As a result, they are difficult to be reused across different simulation platforms and applications. To address the problem, this paper first proposed a reusable component model framework. Based on this framework, then our reusable model development approach is elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that the model developed using our approach has good reusability and it is easy to be used in different simulation platforms and applications. PMID:24729751
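A standard service interface of the kind described might look like the following sketch. The interface is condensed to three lifecycle calls for illustration (the paper specifies six), and the radar component and its configuration keys are invented:

```python
from abc import ABC, abstractmethod

class SimComponent(ABC):
    """Standard service interface: every component exposes the same
    lifecycle calls, so an engine never needs component-specific code."""
    @abstractmethod
    def initialize(self, config): ...
    @abstractmethod
    def step(self, t, inputs): ...
    @abstractmethod
    def finalize(self): ...

class Radar(SimComponent):
    def initialize(self, config):
        self.range_km = config["range_km"]
        self.detections = []
    def step(self, t, inputs):
        # inputs: list of (target_name, distance_km) pairs
        seen = [name for name, d in inputs if d <= self.range_km]
        self.detections.append((t, seen))
        return seen
    def finalize(self):
        return self.detections

def run(components, scenario):
    """A generic engine drives any component through the shared interface."""
    for c in components:
        c.initialize({"range_km": 100})
    for t, targets in scenario:
        for c in components:
            c.step(t, targets)
    return [c.finalize() for c in components]

log = run([Radar()], [(0, [("a", 50), ("b", 150)]), (1, [("a", 120)])])
print(log)  # → [[(0, ['a']), (1, [])]]
```

Because the engine touches only `SimComponent` methods, the same `Radar` could be dropped into a different platform that speaks the same interface, which is the reusability the paper targets.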
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won
2012-09-01
Summary: A wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally-distinct approaches and spatial scales of parameterization to modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km2 area), a basin-scale approach (using the PRMS model; 29,700 km2 area), and a site-specific approach (the GSFLOW model; 4700 km2 area) forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.
Interoperable transactions in business models: A structured approach
Weigand, H.; Verharen, E.; Dignum, F.P.M.
1996-01-01
Recent database research has given much attention to the specification of "flexible" transactions that can be used in interoperable systems. Starting from a quite different angle, Business Process Modelling has approached the area of communication modelling as well (the Language/Action
A Model-Driven Approach to e-Course Management
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
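The basic building block of a trend surface model, a low-order polynomial in the spatial coordinates fitted by least squares, can be sketched on synthetic data (coordinates, response, and coefficients below are invented; the actual model was built from 446 permanent sample plots and additional nonspatial effects):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic plot locations and a response with a large-scale spatial trend
# plus noise (a stand-in for recruitment, growth, or mortality rates).
x = rng.uniform(0, 10, 200)
y = rng.uniform(0, 10, 200)
z = 2.0 + 0.5 * x - 0.3 * y + rng.normal(0, 0.1, 200)

# First-order trend surface: z ~ b0 + b1*x + b2*y, fitted by least squares.
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coef, 2))  # recovers roughly [2.0, 0.5, -0.3]
```

"Controlling" the trend surface, as the title suggests, then amounts to constraining such spatial terms while nonspatial covariates absorb the remaining variation.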
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
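Local sensitivity analysis as described can be sketched with normalised finite-difference sensitivities around a nominal parameter set. The toy production/degradation model and its parameter values are invented for illustration:

```python
def model(params):
    """Toy steady-state output of a biological model: production over degradation."""
    k_prod, k_deg, k_leak = params
    return (k_prod + k_leak) / k_deg

def local_sensitivity(f, params, rel_step=1e-6):
    """Normalised local sensitivities S_i = (p_i / f) * df/dp_i,
    estimated by forward finite differences at the nominal point."""
    base = f(params)
    sens = []
    for i, p in enumerate(params):
        dp = p * rel_step
        bumped = list(params)
        bumped[i] = p + dp
        sens.append((p / base) * (f(bumped) - base) / dp)
    return sens

s = local_sensitivity(model, [2.0, 0.5, 0.1])
print([round(v, 3) for v in s])  # ≈ [0.952, -1.0, 0.048]
```

The near-zero score for the leak rate flags it as a candidate for model reduction, while the degradation rate (sensitivity -1) is a key factor, exactly the kind of ranking the review describes local methods providing; global methods would instead vary all parameters over wide ranges.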
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
…of the IEC 62559 use case template, as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the originally envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling…
Child human model development: a hybrid validation approach
Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.
2008-01-01
The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of
Product Trial Processing (PTP): a model approach from ...
African Journals Online (AJOL)
Product Trial Processing (PTP): a model approach from the consumer's perspective. ... Global Journal of Social Sciences ... Among the constructs used in the model of the consumer's processing of product trial are: experiential and non-experiential attributes, perceived validity of product trial, consumer perceived expertise, ...
A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS
WEDEL, M; DESARBO, WS
1995-01-01
A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of
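A minimal sketch of the mixture-likelihood idea: EM simultaneously estimates posterior membership probabilities and component parameters. The example uses a two-component Poisson mixture (intercept-only GLMs); the data and initialization are invented for illustration:

```python
import math

def em_poisson_mixture(y, iters=200):
    """EM for a two-component Poisson mixture: estimates the mixing weight pi,
    the rates lam[0] and lam[1], and posterior membership probabilities."""
    pi, lam = 0.5, [min(y) + 0.5, max(y) + 0.5]
    for _ in range(iters):
        # E-step: posterior probability each y_i came from component 1
        # (the y_i! term of the Poisson pmf cancels in the ratio).
        resp = []
        for yi in y:
            p1 = pi * math.exp(-lam[0]) * lam[0] ** yi
            p2 = (1 - pi) * math.exp(-lam[1]) * lam[1] ** yi
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate weight and rates from the responsibilities.
        n1 = sum(resp)
        pi = n1 / len(y)
        lam[0] = sum(r * yi for r, yi in zip(resp, y)) / n1
        lam[1] = sum((1 - r) * yi for r, yi in zip(resp, y)) / (len(y) - n1)
    return pi, lam, resp

y = [0, 1, 1, 0, 2, 9, 10, 8, 11, 9]
pi, lam, resp = em_poisson_mixture(y)
```

The full paper generalizes this to arbitrary GLM families with covariates; the structure (E-step responsibilities, M-step weighted fits) is the same.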
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
Data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for decision making process. However, the development of data warehouse is a difficult and complex process especially in its conceptual design (multidimensional modeling). Thus, there have been various approaches proposed to overcome the difficulty. This study surveys and compares the approaches of multidimensional modeling and highlights the issues, trend and solution proposed to date. The contribution is on the state of the art of the multidimensional modeling design.
Numerical linked-cluster approach to quantum lattice models.
Rigol, Marcos; Bryant, Tyler; Singh, Rajiv R P
2006-11-03
We present a novel algorithm that allows one to obtain temperature dependent properties of quantum lattice models in the thermodynamic limit from exact diagonalization of small clusters. Our numerical linked-cluster approach provides a systematic framework to assess finite-size effects and is valid for any quantum lattice model. Unlike high temperature expansions, which have a finite radius of convergence in inverse temperature, these calculations are accurate at all temperatures provided the range of correlations is finite. We illustrate the power of our approach studying spin models on kagomé, triangular, and square lattices.
Metamodelling Approach and Software Tools for Physical Modelling and Simulation
Directory of Open Access Journals (Sweden)
Vitaliy Mezhuyev
2015-02-01
Full Text Available In computer science, the metamodelling approach has become increasingly popular for the development of software systems. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. This analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects that allows building the spatial structure of physical models and setting a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
An integrated modeling approach to age invariant face recognition
Alvi, Fahad Bashir; Pears, Russel
2015-03-01
This research study proposes a novel method for face recognition based on anthropometric features, using an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, with regression analysis applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent Rank-1 identification rate.
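The k-nearest-neighbor modeling step can be illustrated in miniature. The feature vectors and ages below are hypothetical stand-ins for the paper's anthropometric features, not its data:

```python
def knn_predict(train_x, train_y, query, k=2):
    """Predict by averaging the targets of the k nearest neighbours
    (squared Euclidean distance on feature vectors)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

# Hypothetical normalized face-ratio features -> known ages
train_x = [(0.30, 0.55), (0.32, 0.57), (0.50, 0.40), (0.52, 0.42), (0.31, 0.56)]
train_y = [8, 10, 40, 45, 9]
age = knn_predict(train_x, train_y, (0.51, 0.41), k=2)
```

A global model would fit one such predictor over the whole database, while a personalized model restricts the neighbourhood to samples of the same individual.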
Modeling Approaches and Systems Related to Structured Modeling.
1987-02-01
Lasdon and Maturana provide surveys of several modern systems, e.g., CAMPS (Lucas and Mitra), a Computer Assisted Mathematical Programming System. MATURANA, S., "Comparative Analysis of Mathematical Modeling Systems," informal note, Graduate School of Management, UCLA, February
Soil moisture simulations using two different modelling approaches
Czech Academy of Sciences Publication Activity Database
Šípek, Václav; Tesař, Miroslav
2013-01-01
Roč. 64, 3-4 (2013), s. 99-103 ISSN 0006-5471 R&D Projects: GA AV ČR IAA300600901; GA ČR GA205/08/1174 Institutional research plan: CEZ:AV0Z20600510 Keywords : soil moisture modelling * SWIM model * box modelling approach Subject RIV: DA - Hydrology ; Limnology http://www.boku.ac.at/diebodenkultur/volltexte/sondernummern/band-64/heft-3-4/sipek.pdf
A generic approach to haptic modeling of textile artifacts
Shidanshidi, H.; Naghdy, F.; Naghdy, G.; Wood Conroy, D.
2009-08-01
Haptic Modeling of textile has attracted significant interest over the last decade. In spite of extensive research, no generic system has been proposed. The previous work mainly assumes that textile has a 2D planar structure. They also require time-consuming measurement of textile properties in construction of the mechanical model. A novel approach for haptic modeling of textile is proposed to overcome the existing shortcomings. The method is generic, assumes a 3D structure for the textile, and deploys computational intelligence to estimate the mechanical properties of textile. The approach is designed primarily for display of textile artifacts in museums. The haptic model is constructed by superimposing the mechanical model of textile over its geometrical model. Digital image processing is applied to the still image of textile to identify its pattern and structure through a fuzzy rule-base algorithm. The 3D geometric model of the artifact is automatically generated in VRML based on the identified pattern and structure obtained from the textile image. Selected mechanical properties of the textile are estimated by an artificial neural network; deploying the textile geometric characteristics and yarn properties as inputs. The estimated mechanical properties are then deployed in the construction of the textile mechanical model. The proposed system is introduced and the developed algorithms are described. The validation of method indicates the feasibility of the approach and its superiority to other haptic modeling algorithms.
Impacts of Modelling Simplifications on Predicted Dispersion of Human Expiratory Droplets
DEFF Research Database (Denmark)
Liu, Li; Nielsen, Peter Vilhelm; Xu, Chunwen
2016-01-01
…simplifying the room air condition into an isothermal condition, or neglecting the body plume of the manikin. The microenvironment is also changed completely by simplifying the shape of the human grid into a robot shape. The trajectories of both the exhalation airflows and droplet nuclei are then significantly different from those obtained with a detailed shape of human body and mouth. The exhalation airflows are compared and validated against measurement results of human subjects. The flow field between two manikins is found to be significantly influenced by their exhalation airflows. Mono-dispersed droplets with an initial diameter of 10 μm are released from one breath of a manikin. All droplets…
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
Renard, F M
1996-01-01
We calculate, using a Z-peak subtracted representation of four-fermion processes previously illustrated for the case of electron-positron annihilation into charged lepton-antilepton, the corresponding expressions of the new physics contributions for the case of final quark-antiquark states, allowing the possibility of both universal and non universal effects. Some examples of models of the first and of the second type are considered for several c.m. energy values, showing that remarkable simplifications are often introduced by our approach. In particular, for the case of a dimension-six lagrangian with anomalous gauge couplings, the same reduced number of parameters that would affect the observables of final leptonic states are essentially retained when one moves to final hadronic states. This leads to great simplifications in the elaboration of constraints and, as a gratifying byproduct, to the possibility of making the signal from these models clearly distinguishable from those from other (both universal an...
Moreau, P.; Raimbault, T.; Durand, P.; Gascuel-Odoux, C.; Salmon-Monviola, J.; Masson, V.; Cordier, M. O.
2010-05-01
To meet the objectives of the Water Framework Directive in terms of nitrate pollution of surface water, numerous mitigation options have been proposed. To support stakeholders' decisions prior to the implementation of regulations, scenario analysis by models can be used as a prospective approach. The work developed an extensive virtual experiment design from an initial basic requirement of catchment managers. Specific objectives were (1) to test the ability of a distributed model (TNT2) to simulate hydrology and hydrochemistry on a watershed with a high diversity of production systems, (2) to analyse a large set of scenarios and their effects on water quality and (3) to propose an effective mode of communication between research scientists and catchment managers. The focus of the scenario, in accord with catchment managers' requirements, is put on winter catch crops (CC). 5 conditions of implantation in rotations, 3 CC durations and 2 CC harvest modes were tested. CC is favoured by managers because of its simplicity to implement on fields and its relatively low influence on farm strategy. Calibration and validation periods were run from 1998 to 2007 and the scenario simulation period from 2007 to 2020. Results have been provided, for each scenario, by compartment (soil, atmosphere, plant uptake, water) but especially in the form of a nitrogen mass balance at the catchment scale. The scenarios were ranked by integrating positive and negative effects of each measure. This 3-step process: translation of a simple stakeholder question into an extensive set of scenarios (complexification), modeling and data analysis, and restitution to catchment managers in a simple integrative form (simplification), gives an operational tool for decision support. In terms of water quality, the best improvements in nitrate concentrations at the outlet reached a decrease of 0.8 mg L-1 compared to a "business as usual" scenario and were achieved by exporting the CC residue, by extending CC
Wave Resource Characterization Using an Unstructured Grid Modeling Approach
Directory of Open Access Journals (Sweden)
Wei-Cheng Wu
2018-03-01
Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skills in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10² km).
Intelligent Transportation and Evacuation Planning A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...
A review of function modeling: Approaches and applications
Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.
2008-01-01
This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
…Their developments, however, are largely due to experiment-based trial and error approaches, and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage, and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on "known" unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based…
METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT
Directory of Open Access Journals (Sweden)
Gorbenkova Elena Vladimirovna
2017-10-01
Full Text Available Subject: the paper describes the research results on validation of a rural settlement developmental model. The basic methods and approaches for solving the problem of assessment of the urban and rural settlement development efficiency are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements as well as the authors-developed method for assessing the level of agro-towns development, the systems/factors that are necessary for a rural settlement sustainable development are identified. Results: we created the rural development model which consists of five major systems that include critical factors essential for achieving a sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical system and social system (supra-structure. The methodological approaches for creating an evaluation model of rural settlements development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning of the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving the analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of
An algebraic approach to modeling in software engineering
International Nuclear Information System (INIS)
Loegel, C.J.; Ravishankar, C.V.
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe ''computer science'' objects like abstract data types, but in practice software errors are often caused because ''real-world'' objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form
Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2017-01-01
Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5% percentile of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.
Injury prevention risk communication: A mental models approach
DEFF Research Database (Denmark)
Austin, Laurel Cecelia; Fischhoff, Baruch
2012-01-01
Individuals' decisions and behaviour can play a critical role in determining both the probability and severity of injury. Behavioural decision research studies people's decision-making processes in terms comparable to scientific models of optimal choices, providing a basis for focusing interventions on the most critical opportunities to reduce risks. That research often seeks to identify the 'mental models' that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people… and uses examples to discuss how the approach can be used to develop scientifically validated, context-sensitive injury risk communications…
Assessing risk factors for dental caries: a statistical modeling approach.
Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella
2015-01-01
The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most of the cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which require inferences to be adjusted for 'optimism'.
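The model-selection strategy the authors advocate (scoring candidate models with a standard criterion) can be sketched with AIC over two toy candidates. The data and models below are invented for illustration, not the paper's:

```python
import math

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2*(k + 1),
    # where k counts mean parameters and +1 accounts for the error variance.
    return n * math.log(rss / n) + 2 * (k + 1)

def rss_intercept(y):
    m = sum(y) / len(y)
    return sum((yi - m) ** 2 for yi in y)

def rss_linear(x, y):
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

# Hypothetical data: a caries index vs. a single risk factor score
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]

candidates = {
    "intercept-only": aic(rss_intercept(y), len(y), k=1),
    "linear": aic(rss_linear(x, y), len(y), k=2),
}
best = min(candidates, key=candidates.get)
```

With millions of candidate models, as in the paper, the same scoring is combined with a search strategy and an adjustment for selection 'optimism'.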
A modeling approach to hospital location for effective marketing.
Cokelez, S; Peacock, E
1993-01-01
This paper develops a mixed integer linear programming model for locating health care facilities. The parameters of the objective function of this model are based on factor rating analysis and grid method. Subjective and objective factors representative of the real life situations are incorporated into the model in a unique way permitting a trade-off analysis of certain factors pertinent to the location of hospitals. This results in a unified approach and a single model whose credibility is further enhanced by inclusion of geographical and demographical factors.
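The paper's MILP formulation is not reproduced here; a brute-force p-median search conveys the underlying trade-off, with hypothetical travel times and demands:

```python
from itertools import combinations

def p_median(dist, demand, p):
    """Choose p facility sites minimizing total demand-weighted distance
    (exhaustive search; a stand-in for a MILP location model)."""
    sites = range(len(dist))
    best, best_cost = None, float("inf")
    for chosen in combinations(sites, p):
        cost = sum(w * min(dist[i][j] for j in chosen)
                   for i, w in enumerate(demand))
        if cost < best_cost:
            best, best_cost = chosen, cost
    return best, best_cost

# Hypothetical 4 zones; dist[i][j] = travel time from zone i to a site in zone j
dist = [[0, 4, 7, 9],
        [4, 0, 3, 6],
        [7, 3, 0, 2],
        [9, 6, 2, 0]]
demand = [50, 30, 20, 40]  # e.g. patient population per zone
sites, cost = p_median(dist, demand, p=2)
```

The paper's model additionally folds factor-rating scores (subjective and objective location factors) into the objective; here only the demand-weighted distance term is shown.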
Mathematical and computer modeling of electro-optic systems using a generic modeling approach
Smith, M.I.; Murray-Smith, D.J.; Hickman, D.
2007-01-01
The conventional approach to modelling electro-optic sensor systems is to develop separate models for individual systems or classes of system, depending on the detector technology employed in the sensor and the application. However, this ignores commonality in design and in components of these systems. A generic approach is presented for modelling a variety of sensor systems operating in the infrared waveband that also allows systems to be modelled with different levels of detail and at diffe...
Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling
Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.
2016-01-01
The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the genera...
Earthquake response analysis of RC bridges using simplified modeling approaches
Lee, Do Hyung; Kim, Dookie; Park, Taehyo
2009-07-01
In this paper, simplified modeling approaches describing the hysteretic behavior of reinforced concrete bridge piers are proposed. For this purpose, flexure-axial and shear-axial interaction models are developed and implemented into a nonlinear finite element analysis program. Comparative verifications for reinforced concrete columns prove that the analytical predictions obtained with the new formulations show good correlation with experimental results under various levels of axial forces and section types. In addition, analytical correlation studies for the inelastic earthquake response of reinforced concrete bridge structures are also carried out using the simplified modeling approaches. Relatively good agreement is observed in the results between the current modeling approach and the elaborated fiber models. It is thus encouraging that the present developments and approaches are capable of identifying the contribution of deformation mechanisms correctly. Subsequently, the present developments can be used as a simple yet effective tool for the deformation capacity evaluation of reinforced concrete columns in general and reinforced concrete bridge piers in particular.
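As a hedged illustration of simplified hysteretic modeling, the sketch below implements an elastic-perfectly-plastic spring, far simpler than the paper's flexure-axial and shear-axial interaction models, but showing the same state-update structure:

```python
def epp_force(strains, k=100.0, fy=5.0):
    """Elastic-perfectly-plastic hysteresis: the force increments by
    k * (change in strain) elastically and is clipped at +/- fy."""
    f, prev, out = 0.0, 0.0, []
    for e in strains:
        f = max(-fy, min(fy, f + k * (e - prev)))
        prev = e
        out.append(f)
    return out

# Hypothetical load path: push to yield, then reverse
path = [0.0, 0.04, 0.08, 0.04, 0.0, -0.04]
forces = epp_force(path)
```

On the reversal the response unloads elastically from the yield plateau, tracing the hysteresis loop that dissipates energy under earthquake loading.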
Bianchi VI0 and III models: self-similar approach
International Nuclear Information System (INIS)
Belinchon, Jose Antonio
2009-01-01
We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other models studied, we find that the behaviours of G and Λ are related. If G behaves as a growing function of time then Λ is a positive decreasing function of time, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.
Modeling and control approach to a distinctive quadrotor helicopter.
Wu, Jun; Peng, Hui; Chen, Qing; Peng, Xiaoyan
2014-01-01
The quadrotor helicopter referenced in this paper has a unique configuration. It is more complex than commonly used quadrotors because of its inaccurate parameters, non-ideal symmetric structure and unknown nonlinear dynamics. A novel method is presented to handle its modeling and control problems: a MIMO RBF neural-net-based state-dependent ARX (RBF-ARX) model is adopted to represent its nonlinear dynamics, and a MIMO RBF-ARX model-based global LQR controller is then proposed to stabilize the quadrotor's attitude. Comparison with a physical model-based LQR controller and an ARX model-set-based gain-scheduling LQR controller confirms the superiority of the MIMO RBF-ARX model-based control approach. This successful application verifies the validity of the MIMO RBF-ARX modeling method for a quadrotor helicopter with complex nonlinearity. © 2013 Published by ISA. All rights reserved.
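The RBF-ARX model itself cannot be reconstructed from an abstract, but the LQR design step it feeds can be illustrated on a toy linear system (a double integrator standing in for one attitude axis; the matrices below are illustrative assumptions, not the quadrotor's identified model):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# toy double-integrator model: state = [angle, angular rate]
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# closed-loop eigenvalues should lie inside the unit circle
eig = np.linalg.eigvals(A - B @ K)
```

A gain-scheduled or state-dependent variant, as in the paper, would recompute A and B from the RBF-ARX model at each operating point and reuse the same Riccati machinery.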
Software sensors based on the grey-box modelling approach
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.; Strube, Rune
1996-01-01
In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey...
Environmental Radiation Effects on Mammals A Dynamical Modeling Approach
Smirnova, Olga A
2010-01-01
This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...
Modelling dynamic ecosystems : venturing beyond boundaries with the Ecopath approach
Coll, Marta; Akoglu, E.; Arreguin-Sanchez, F.; Fulton, E. A.; Gascuel, D.; Heymans, J. J.; Libralato, S.; Mackinson, S.; Palomera, I.; Piroddi, C.; Shannon, L. J.; Steenbeek, J.; Villasante, S.; Christensen, V.
2015-01-01
Thirty years of progress using the Ecopath with Ecosim (EwE) approach in different fields such as ecosystem impacts of fishing and climate change, emergent ecosystem dynamics, ecosystem-based management, and marine conservation and spatial planning were showcased November 2014 at the conference "Ecopath 30 years-modelling dynamic ecosystems: beyond boundaries with EwE". Exciting new developments include temporal-spatial and end-to-end modelling, as well as novel applications to environmental ...
Regularization of quantum gravity in the matrix model approach
International Nuclear Information System (INIS)
Ueda, Haruhiko
1991-02-01
We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2)Tr φ² + (g₄/N)Tr φ⁴ + (g′/N⁴)(Tr φ⁴)² and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a promising new methodology for giving an on-line state description of sewer systems. … in a SCADA system because the most important information on the specific system is provided on-line…
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
DETC2015-46982 [DRAFT], CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. A conservative output PDF and a probability of failure are selected from the predicted output PDFs at a user-specified conservativeness level for validation; the conservative probability of failure so obtained must be maintained at that conservativeness level.
Macho, Siegfried; Ledermann, Thomas
2011-01-01
The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…
Comparative flood damage model assessment: towards a European approach
Jongman, B.; Kreibich, H.; Apel, H.; Barredo, J. I.; Bates, P. D.; Feyen, L.; Gericke, A.; Neal, J.; Aerts, J. C. J. H.; Ward, P. J.
2012-12-01
There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth-damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
A Final Approach Trajectory Model for Current Operations
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
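The contrast between a polynomial-interpolation trajectory model and the dead-reckoning baseline can be sketched on a synthetic decelerating final-approach track (the track and all numbers below are invented for illustration, not actual arrival traffic data):

```python
import numpy as np

# synthetic decelerating final-approach track (illustrative, not real data)
t = np.linspace(0.0, 120.0, 13)           # time, s
x = 10.0 - 0.07 * t + 1e-4 * t ** 2       # along-track distance to threshold, nmi

fit_n = 8                                  # samples used for the fit
coeffs = np.polyfit(t[:fit_n], x[:fit_n], deg=2)
pred_poly = np.polyval(coeffs, t[fit_n:])  # polynomial prediction ahead

# dead-reckoning baseline: constant ground speed from the last two samples
v = (x[fit_n - 1] - x[fit_n - 2]) / (t[fit_n - 1] - t[fit_n - 2])
pred_dr = x[fit_n - 1] + v * (t[fit_n:] - t[fit_n - 1])

err_poly = np.abs(pred_poly - x[fit_n:]).mean()
err_dr = np.abs(pred_dr - x[fit_n:]).mean()
```

Because the synthetic track decelerates, the constant-speed extrapolation accumulates error with look-ahead time while the low-order polynomial captures the trend, which mirrors the qualitative result reported in the abstract.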
The Generalised Ecosystem Modelling Approach in Radiological Assessment
Energy Technology Data Exchange (ETDEWEB)
Klos, Richard
2008-03-15
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are estimated from binary systems; one binary interaction parameter per system. No additional mixing rules are needed for cross-associating systems, but combining rules are required, e.g. the Elliott rule or the so-called CR-1 rule. There is a very large class of mixtures, e.g. water or glycols with aromatic… interaction parameters are often used for solvating systems; one for the physical part (kij) and one for the association part (βcross). This limits the predictive capabilities and possibilities of generalization of the model. In this work we present an approach to reduce the number of adjustable parameters...
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
… Their developments, however, are largely due to experiment-based trial and error approaches and, while they do not require validation, they can be time consuming and resource intensive. Also, one may ask: can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply… for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on "known" unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based synthesis method must employ models at lower levels of aggregation and, through combination rules for phenomena, generate (synthesize) new intensified unit operations. An efficient solution procedure for the synthesis problem is needed to tackle the potentially large number of options that would be obtained...
A Two Step Face Alignment Approach Using Statistical Models
Directory of Open Access Journals (Sweden)
Ying Cui
2012-10-01
Full Text Available Although face alignment using the Active Appearance Model (AAM) is relatively stable, it is known to be sensitive to initial values and not robust under inconstant circumstances. In order to strengthen the performance of AAM for face alignment, a two-step approach combining AAM and the Active Shape Model (ASM) is proposed. In the first step, AAM is used to locate the inner landmarks of the face. In the second step, the extended ASM is used to locate the outer landmarks of the face under the constraint of the inner landmarks estimated by AAM. The two kinds of landmarks are then combined to form the whole set of facial landmarks. The proposed approach is compared with the basic AAM and the progressive AAM methods. Experimental results show that the proposed approach gives a much more effective performance.
A review of function modeling : Approaches and applications
Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.
2008-01-01
This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research
The Bipolar Approach: A Model for Interdisciplinary Art History Courses.
Calabrese, John A.
1993-01-01
Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)
Model-independent approach for dark matter phenomenology ...
Indian Academy of Sciences (India)
We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...
A Behavioral Decision Making Modeling Approach Towards Hedging Services
Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.
2003-01-01
This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by
Comparing State SAT Scores Using a Mixture Modeling Approach
Kim, YoungKoung Rachel
2009-01-01
Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…
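The mixture-modeling idea behind such subpopulation analyses can be sketched with a two-component Gaussian mixture fit by EM; the data here are synthetic scores drawn for illustration, not actual SAT results:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Fit a 1D Gaussian mixture by expectation-maximization."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # deterministic init
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: component responsibilities for each point
        d = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

# two synthetic score subpopulations (invented means and spreads)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(450, 30, 500), rng.normal(580, 30, 500)])
w, mu, sig = em_gmm_1d(x)
```

With well-separated subpopulations the recovered component means land near the generating values, which is the mechanism a mixture approach uses to make state populations comparable.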
Export of microplastics from land to sea. A modelling approach
Siegfried, Max; Koelmans, A.A.; Besseling, E.; Kroeze, C.
2017-01-01
Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea.
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
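A minimal sketch of the plain Monte Carlo building block such papers extend is an Euler scheme for a local volatility diffusion; with a constant volatility function it should reproduce the Black-Scholes price (parameters below are illustrative):

```python
import numpy as np

def mc_local_vol_call(s0, strike, T, r, sigma_loc,
                      n_paths=100_000, n_steps=100, seed=42):
    """Euler-Maruyama Monte Carlo for dS = r S dt + sigma_loc(t, S) S dW,
    pricing a European call by discounted average payoff."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    for i in range(n_steps):
        z = rng.standard_normal(n_paths)
        s = s * (1.0 + r * dt + sigma_loc(i * dt, s) * np.sqrt(dt) * z)
        s = np.maximum(s, 1e-12)   # keep paths positive after Euler step
    payoff = np.maximum(s - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

# constant-vol special case: S0=K=100, r=5%, sigma=20%, T=1y
# (Black-Scholes value is about 10.45)
price = mc_local_vol_call(100.0, 100.0, 1.0, 0.05, lambda t, s: 0.2)
```

The hybrid and stochastic local volatility extensions discussed in the paper replace the deterministic `sigma_loc` with a calibrated, state-dependent surface plus a stochastic factor; that calibration step is well beyond this sketch.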
Hidden Markov model-based approach for generation of Pitman ...
Indian Academy of Sciences (India)
Speech is one of the most basic means of human communication. … human beings is carried out with the aid of communication and has facilitated the development …
Using artificial neural network approach for modelling rainfall–runoff ...
Indian Academy of Sciences (India)
Journal of Earth System Science, Volume 122, Issue 2. Using artificial neural network approach for modelling … Nevertheless, water level and flow records are essential in hydrological analysis for designing related water works of flood management. Due to the complexity of the hydrological process, ...
An Approach to Quality Estimation in Model-Based Development
DEFF Research Database (Denmark)
Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter
2004-01-01
We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...
Hidden Markov model-based approach for generation of Pitman ...
Indian Academy of Sciences (India)
In this paper, an approach for feature extraction using Mel frequency cepstral coefficients (MFCC) and classification using hidden Markov models (HMM) for generating strokes comprising consonants and vowels (CV) in the process of production of Pitman shorthand language from spoken English is proposed. The…
Pruning Chinese trees : an experimental and modelling approach
Zeng, Bo
2001-01-01
Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.
Non-frontal Model Based Approach to Forensic Face Recognition
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
2012-01-01
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance
Reconciliation with oneself and with others: From approach to model
Directory of Open Access Journals (Sweden)
Nikolić-Ristanović Vesna
2010-01-01
Full Text Available The paper presents the approach to dealing with war and its consequences developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and of the process through which the ZAIP approach to dealing with the past was developed is given. Then the approach itself is described in detail, with its most important specificities identified. In the conclusion, next steps are suggested, aimed at developing a model of reconciliation grounded in the ZAIP approach and appropriate to the social context of Serbia and its surroundings.
EXTENDED MODEL OF COMPETITIVITY THROUGH APPLICATION OF NEW APPROACH DIRECTIVES
Directory of Open Access Journals (Sweden)
Slavko Arsovski
2009-03-01
Full Text Available The basic subject of this work is a model of the impact of the new approach directives on product quality and safety and on the competitiveness of our companies. The work rests on hypotheses drawn from experts' experience, given that the relevant infrastructure for applying the new approach directives has not been examined until now: it is not known which products or industries in Serbia are covered by the new approach directives and the CE mark, nor what the effects of using the CE mark are. The work should indicate existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increasing profit by meeting the requirements of the new approach directives.
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
Modeling electricity spot and futures price dependence: A multifrequency approach
Malo, Pekka
2009-11-01
Electricity prices are known to exhibit multifractal properties. We accommodate this finding by investigating multifractal models for electricity prices. In this paper we propose a flexible Copula-MSM (Markov Switching Multifractal) approach for modeling spot and weekly futures price dynamics. By using a conditional copula function, the framework allows us to separately model the dependence structure, while enabling use of multifractal stochastic volatility models to characterize fluctuations in marginal returns. An empirical experiment is carried out using data from Nord Pool. A study of volatility forecasting performance for electricity spot prices reveals that multifractal techniques are a competitive alternative to GARCH models. We also demonstrate how the Copula-MSM model can be employed for finding optimal portfolios which minimize the Conditional Value-at-Risk.
Multiphysics modeling using COMSOL a first principles approach
Pryor, Roger W
2011-01-01
Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
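The weighted additive aggregation step described above can be sketched as follows; the alternatives, weights, and consequence values are invented for illustration and are not the H. banksii workshop numbers:

```python
import numpy as np

# hypothetical consequence table: rows = management alternatives,
# columns = objectives (ecological condition, visitor access, cost),
# normalised to 0-1 where 1 is best
consequences = np.array([
    [0.2, 1.0, 1.0],   # do nothing
    [0.5, 0.8, 0.7],   # signage
    [0.8, 0.4, 0.4],   # seasonal closure
    [0.9, 0.1, 0.1],   # full closure
])
weights = np.array([0.5, 0.3, 0.2])   # illustrative elicited weights

scores = consequences @ weights        # weighted additive decision scores
best = int(scores.argmax())            # preferred alternative under these weights
```

In the participatory setting, each participant supplies their own consequence estimates under each ecological scenario; spread in the resulting decision scores is what expresses the uncertainty handed to decision makers.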
A simplified GIS approach to modeling global leaf water isoscapes.
Directory of Open Access Journals (Sweden)
Jason B West
Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation, and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
A Genetic Algorithm Approach for Modeling a Grounding Electrode
Mishra, Arbind Kumar; Nagaoka, Naoto; Ametani, Akihiro
This paper has proposed a genetic algorithm based approach to determine a grounding electrode model circuit composed of resistances, inductances and capacitances. The proposed methodology determines the model circuit parameters, based on a general ladder circuit, directly from a measured result. Transient voltages of some electrodes were measured when applying a step-like current. An EMTP simulation of a transient voltage on the grounding electrode has been carried out by adopting the proposed model circuits. The accuracy of the proposed method has been confirmed to be high in comparison with the measured transient voltage.
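The parameter identification described above can be sketched with a toy genetic algorithm. In the snippet below everything is illustrative: a single R-L stage stands in for the paper's general ladder circuit, and the "measured" transient, the true parameter values, and the GA settings (tournament selection, blend crossover, multiplicative mutation, elitism) are all invented assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_response(t, R, L):
    """Step response of a single R-L stage (a stand-in for the full
    ladder circuit) to a unit step current injection."""
    return R * (1.0 - np.exp(-R * t / L))

# Synthetic "measured" transient; true parameters (25 ohm, 4 uH) are invented.
t = np.linspace(1e-8, 2e-6, 80)
measured = step_response(t, 25.0, 4e-6) + rng.normal(0.0, 0.05, t.size)

def fitness(ind):
    R, L = ind
    return -np.mean((step_response(t, R, L) - measured) ** 2)  # higher is better

# GA over (R, L): binary-tournament selection, blend crossover,
# multiplicative Gaussian mutation, and one-individual elitism.
pop = np.column_stack([rng.uniform(1.0, 100.0, 60), rng.uniform(1e-7, 1e-5, 60)])
for _ in range(150):
    fit = np.array([fitness(p) for p in pop])
    duel = rng.integers(0, len(pop), (len(pop), 2))
    parents = pop[np.where(fit[duel[:, 0]] > fit[duel[:, 1]], duel[:, 0], duel[:, 1])]
    alpha = rng.uniform(0.0, 1.0, (len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * parents[::-1]   # blend crossover
    children = children * rng.normal(1.0, 0.02, children.shape)  # mutation
    children[0] = pop[np.argmax(fit)]                            # elitism
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)
```

With these settings the evolved (R, L) pair should land near the values used to generate the synthetic transient.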
Barbagallo , Gabriele
2017-01-01
The systematic use of a so-called Cauchy theory often leads to an oversimplification of reality, because some features of the microstructure are implicitly neglected in these approaches. Materials have microstructures on fairly large scales (micron, millimeter, centimeter) whose effects influence the macroscopic behavior. The Cauchy model is insufficient to describe their specific global behavior, linked for example to the concentration of forces or deformations, or to deformation mod...
Data-driven approach to dynamic visual attention modelling
Culibrk, Dubravko; Sladojevic, Srdjan; Riche, Nicolas; Mancas, Matei; Crnojevic, Vladimir
2012-06-01
Visual attention deployment mechanisms allow the Human Visual System to cope with an overwhelming amount of visual data by dedicating most of the processing power to objects of interest. The ability to automatically detect areas of the visual scene that will be attended to by humans is of interest for a large number of applications, from video coding and video quality assessment to scene understanding. Due to this fact, visual saliency (bottom-up attention) models have generated significant scientific interest in recent years. Most recent work in this area concerns dynamic models of attention that deal with moving stimuli (videos) instead of the traditionally used still images. Visual saliency models are usually evaluated against ground-truth eye-tracking data collected from human subjects. However, there are precious few recently published approaches that try to learn saliency from eye-tracking data and, to the best of our knowledge, no approaches that try to do so when dynamic saliency is concerned. The paper attempts to fill this gap and describes an approach to data-driven dynamic saliency model learning. A framework is proposed that enables the use of eye-tracking data to train an arbitrary machine learning algorithm, using arbitrary features derived from the scene. We evaluate the methodology using features from a state-of-the-art dynamic saliency model and show how simple machine learning algorithms can be trained to distinguish between visually salient and non-salient parts of the scene.
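The core of such a framework — arbitrary per-patch features plus eye-tracking-derived labels fed to an arbitrary learner — can be sketched in a few lines. Everything below is a synthetic stand-in: the feature names, the ground-truth weights, and the choice of plain logistic regression are assumptions for illustration, not the paper's actual features or classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each scene patch gets a feature vector (e.g. motion energy, contrast --
# hypothetical names) and a binary "attended" label that would normally
# come from eye-tracking fixation maps; here both are simulated.
n = 500
X = rng.normal(size=(n, 2))
w_true = np.array([2.0, -1.5])                      # invented ground truth
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

# Any off-the-shelf learner can be plugged in; logistic regression trained
# by batch gradient descent keeps the sketch dependency-free.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

accuracy = float(np.mean(((X @ w) > 0) == (y > 0.5)))
print(w, accuracy)
```

The learned weights recover the signs of the generating weights, and patch-level accuracy is well above chance.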
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
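The PCE idea for one of the models named above can be sketched directly: write the terminal value of a Geometric Brownian Motion as a function of a single Gaussian ξ and project it onto probabilists' Hermite polynomials He_n (the orthogonal basis for a normal ξ). The parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

# Terminal value of a GBM as a function of xi ~ N(0,1); values illustrative.
S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
f = lambda xi: S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * xi)

# PCE coefficients c_n = E[f(xi) He_n(xi)] / n!, using probabilists' Hermite
# polynomials He_n (norm n! w.r.t. N(0,1)) and Gauss-Hermite_e quadrature.
x, w = hermegauss(40)                     # nodes/weights for weight exp(-x^2/2)
def coeff(n):
    basis = np.zeros(n + 1); basis[n] = 1.0          # selects He_n
    return (w @ (f(x) * hermeval(x, basis))) / np.sqrt(2 * np.pi) / factorial(n)

c = [coeff(n) for n in range(8)]

# c_0 is the PCE mean (exactly S0*e^{mu*T} for GBM); the PCE variance
# sum_{n>=1} n! c_n^2 reproduces the exact GBM variance.
pce_mean = c[0]
pce_var = sum(factorial(n) * c[n] ** 2 for n in range(1, 8))
print(pce_mean, pce_var)
```

An eight-term expansion already matches the closed-form GBM moments to high accuracy, which is the efficiency advantage over plain Monte Carlo that the abstract alludes to.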
Common modelling approaches for training simulators for nuclear power plants
International Nuclear Information System (INIS)
1990-02-01
Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, the criteria for fidelity and verification, and was therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs
Estimating a DIF decomposition model using a random-weights linear logistic test model approach.
Paek, Insu; Fukuhara, Hirotaka
2015-09-01
A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.
Plenary lecture: innovative modeling approaches applicable to risk assessments.
Oscar, T P
2011-06-01
Proper identification of safe and unsafe food at the processing plant is important for maximizing the public health benefit of food by ensuring both its consumption and safety. Risk assessment is a holistic approach to food safety that consists of four steps: 1) hazard identification; 2) exposure assessment; 3) hazard characterization; and 4) risk characterization. Risk assessments are modeled by mapping the risk pathway as a series of unit operations and associated pathogen events and then using probability distributions and a random sampling method to simulate the rare, random, variable and uncertain nature of pathogen events in the risk pathway. To model pathogen events, a rare event modeling approach is used that links a discrete distribution for incidence of the pathogen event with a continuous distribution for extent of the pathogen event. When applied to risk assessment, rare event modeling leads to the conclusion that the most highly contaminated food at the processing plant does not necessarily pose the highest risk to public health because of differences in post-processing risk factors among distribution channels and consumer populations. Predictive microbiology models for individual pathogen events can be integrated with risk assessment models using the rare event modeling method. Published by Elsevier Ltd.
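The rare event modeling method described above — a discrete distribution for the *incidence* of a pathogen event linked to a continuous distribution for its *extent* — can be sketched with a small Monte Carlo simulation. All parameter values (prevalence, contamination levels) are invented for illustration, not taken from any actual risk assessment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete incidence: does the pathogen event occur for this serving?
n_servings = 100_000
prevalence = 0.02                                    # assumed P(contaminated)
contaminated = rng.random(n_servings) < prevalence

# Continuous extent: how much contamination, when it occurs (log10 CFU).
log10_cfu = rng.normal(1.0, 0.8, n_servings)         # assumed distribution

# Per-serving pathogen load: zero for the (common) uncontaminated servings.
load = np.where(contaminated, 10.0 ** log10_cfu, 0.0)

frac_contaminated = contaminated.mean()
mean_load_when_present = load[contaminated].mean()
print(frac_contaminated, mean_load_when_present)
```

Linking the two distributions this way is what lets the simulation reproduce the rare, variable nature of contamination: most simulated servings carry zero load, while the contaminated minority spans orders of magnitude.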
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.; Navarro Jimenez, Maria I.; Merks, Roeland M. H.; Blom, Joke G.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
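Variance-based global sensitivity analysis of a "black-box" model is typically done with Sobol indices. The sketch below is not the paper's CPM workflow: it uses an invented two-parameter toy model whose first-order indices are known in closed form, and a Saltelli-style pick-freeze Monte Carlo estimator, so the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "black-box" model with two uniform(0,1) parameters, chosen so the
# first-order Sobol indices are known analytically (S1=0.9375, S2=0.0625).
def model(x):
    return 4.0 * x[:, 0] + x[:, 1] ** 2

# Saltelli-style pick-freeze estimator of the first-order indices:
# V_i = E[ y(B) * ( y(A with column i from B) - y(A) ) ].
n = 100_000
A = rng.uniform(0.0, 1.0, (n, 2))
B = rng.uniform(0.0, 1.0, (n, 2))
yA, yB = model(A), model(B)
var_y = yA.var()

S = []
for i in range(2):
    AB = A.copy()
    AB[:, i] = B[:, i]               # resample only parameter i
    S.append(float(np.mean(yB * (model(AB) - yA)) / var_y))

print(S)
```

The estimated ranking (parameter 1 dominant, parameter 2 nearly negligible) is the kind of output the abstract uses to identify dominant parameters and candidates for model reduction.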
A fuzzy approach for modelling radionuclide in lake system
International Nuclear Information System (INIS)
Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.
2013-01-01
Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line as per the standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance of discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule based approach was adopted to develop a model which could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the ³H concentration at the discharge point. The output was the ³H concentration at Ratania Regulator. By the same methodology, a hundred data points were generated for the validation of the model and compared against the predicted output of the fuzzy rule based approach. These data points were generated by multiple regression analysis of the original data. The root mean square error of the model came out to be 1.95, showing good agreement of the fuzzy model with the natural ecosystem. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, which means it has imitated the natural ecosystem well
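A fuzzy rule based predictor of this shape — two crisp inputs, a small rule base, one crisp output — can be sketched as a zero-order Sugeno system. The rule base, membership ranges, and consequent values below are entirely hypothetical illustrations; the paper's three hundred calibrated rules are not reproduced.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical rule base: inputs are lake outflow (m3/s) and tritium
# concentration at the discharge point (Bq/L); each rule's consequent is
# a crisp tritium level at the downstream regulator. Values are invented.
def predict(flow, conc):
    rules = [
        # (flow membership, conc membership, consequent Bq/L)
        (tri(flow, 0, 10, 30),  tri(conc, 0, 20, 60),    5.0),   # low / low
        (tri(flow, 10, 30, 60), tri(conc, 20, 60, 120),  25.0),  # med / med
        (tri(flow, 30, 60, 90), tri(conc, 60, 120, 200), 70.0),  # high / high
    ]
    w = np.array([min(mf, mc) for mf, mc, _ in rules])   # rule firing strengths
    z = np.array([out for _, _, out in rules])
    return float((w @ z) / w.sum()) if w.sum() > 0 else 0.0

print(predict(20.0, 40.0))  # → 15.0 (rules 1 and 2 fire equally)
```

The weighted average of rule consequents gives smooth interpolation between rules, which is what lets a sparse heuristic rule base stand in for a data-hungry dispersion model.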
Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach
Klein, Christian; Thieme, Christoph; Priesack, Eckart
2015-04-01
Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3×3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid the shortcoming of grid-cell resolution is the so-called mosaic approach. This approach is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the near-surface energy flux exchanges of the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system can calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach to extend weather and climate models on the regional and global scale.
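The flux mixing at the heart of the mosaic approach reduces to a footprint-weighted average over surface patches. The sketch below uses invented flux values and footprint fractions for the two crops named in the abstract; it is not Expert-N 5.0 output.

```python
# Mosaic mixing: the flux "seen" at a measurement point is the sum of the
# fluxes simulated for each surface patch, weighted by the fraction of the
# footprint each patch occupies. All numbers are illustrative.
patches = {
    "winter_wheat": {"H": 120.0, "LE": 250.0},  # sensible/latent heat, W/m2
    "potato":       {"H":  90.0, "LE": 310.0},
}
footprint = {"winter_wheat": 0.6, "potato": 0.4}  # fractions must sum to 1

mixed = {
    comp: sum(footprint[p] * patches[p][comp] for p in patches)
    for comp in ("H", "LE")
}
print(mixed)
```

Because the footprint fractions change with wind direction and stability, the weights (and hence the mixed fluxes) are recomputed per time step in the coupled system described above.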
Bayesian approach to errors-in-variables in regression models
Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad
2017-05-01
In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
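The "misleading statistical inferences" the abstract warns about can be made concrete with the classical attenuation bias of a mismeasured covariate. The sketch below is a moment-based illustration with invented numbers, not the Bayesian EIV formulation of the paper (which instead places priors on the unobserved exposure and samples by MCMC); the correction shown assumes the reliability ratio is known.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic linear EIV setup: y depends on the true covariate, but only a
# noisy surrogate x_obs is observed. All parameter values are invented.
n = 50_000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)       # true slope = 2
x_obs = x_true + rng.normal(0.0, 1.0, n)         # surrogate, error variance 1

# Naive OLS slope on the surrogate is attenuated toward zero by the
# reliability ratio var(x_true) / var(x_obs) (about 1/2 here).
naive = np.cov(x_obs, y)[0, 1] / x_obs.var()
corrected = naive / (x_true.var() / x_obs.var())  # ratio assumed known
print(naive, corrected)
```

The naive slope comes out near 1 rather than 2; any EIV treatment, Bayesian or otherwise, exists to undo exactly this kind of bias.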
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to get a suitable description of the pKa change with the mobile phase composition. In the present study this preliminary experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on the functional group. Forced by this new approach, other simplifications regarding the retention of the totally neutral and totally ionized species also had to be performed. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
A screening-level modeling approach to estimate nitrogen ...
This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process based approach to estimate source of pollutants, their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scales, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load to exceed a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exce
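The export coefficient loading and exceedance-risk steps described above can be sketched with a small Monte Carlo calculation. The land use areas, export coefficients, coefficient of variation, and target load below are all invented illustrations, not values from the Tippecanoe River application.

```python
import numpy as np

rng = np.random.default_rng(5)

# Export coefficient method: annual N load = sum over land uses of
# area * export coefficient. Coefficients are uncertain, so the
# load-exceedance probability is estimated by Monte Carlo sampling.
areas_ha = {"agriculture": 1200.0, "urban": 300.0, "forest": 2000.0}
ec_mean = {"agriculture": 15.0, "urban": 9.0, "forest": 2.0}  # kg N/ha/yr
ec_cv = 0.3                       # assumed coefficient of variation
target_load = 25_000.0            # kg N/yr, e.g. a TMDL target

n = 100_000
load = sum(
    areas_ha[lu] * rng.normal(ec_mean[lu], ec_cv * ec_mean[lu], n)
    for lu in areas_ha
)

risk = (load > target_load).mean()   # probability of exceeding the target
print(load.mean(), risk)
```

The exceedance probability computed per catchment is what the framework maps spatially to flag stream segments at risk of impairment.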
De Reuver, Mark; Bouwman, Harry; Haaker, Timber
2013-01-01
Literature on business models deals extensively with how to design new business models, but hardly with how to make the transition from an existing to a newly designed business model. The transition to a new business model raises several practical and strategic issues, such as how to replace an existing value proposition with a new one, when to acquire new resources and capabilities, and when to start new partnerships. In this paper, we coin the term business model roadmapping as an approach ...
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
A modal approach to modeling spatially distributed vibration energy dissipation.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph
2010-08-01
The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.
Chang, Chia-Wen; Tao, Chin-Wang
2017-09-01
This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. A fuzzy c-regression state model (FCRSM) algorithm is a T-S fuzzy model in which the functional antecedent and the state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: first, the FCRSM has a low computation load because only one input variable is considered in the antecedent part; second, the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
Systematic approach to verification and validation: High explosive burn models
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their models in their own hydro codes and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen
2007-01-01
This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertaintie...
The place of quantitative energy models in a prospective approach
International Nuclear Information System (INIS)
Taverdet-Popiolek, N.
2009-01-01
Futurology above all depends on having the right mind set. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop), developed during the prospective stage. When the horizon is far away (very long-term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit to the use of models in futurology is located. (author)
Innovation Networks New Approaches in Modelling and Analyzing
Pyka, Andreas
2009-01-01
The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.
Research on teacher education programs: logic model approach.
Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M
2013-02-01
Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program. Copyright © 2012 Elsevier Ltd. All rights reserved.
Understanding complex urban systems multidisciplinary approaches to modeling
Gurr, Jens; Schmidt, J
2014-01-01
Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...
DISCRETE LATTICE ELEMENT APPROACH FOR ROCK FAILURE MODELING
Directory of Open Access Journals (Sweden)
Mijo Nikolić
2017-01-01
Full Text Available This paper presents the ‘discrete lattice model’, or, simply, the ‘lattice model’, developed for rock failure modeling. The main difficulties in numerical modeling, namely those related to complex crack initiation, multiple crack propagation, and crack coalescence under the influence of natural disorder and heterogeneities, are overcome using the approach presented in this paper. The lattice model is constructed as an assembly of Timoshenko beams, representing the cohesive links between the grains of the material, which are described by Voronoi polygons. The kinematics of the Timoshenko beams are enhanced by embedded strong discontinuities in their axial and transversal directions so as to provide failure modes I, II, and III. The model presented is suitable for meso-scale rock simulations. Representative numerical simulations, in both 2D and 3D settings, are provided in order to illustrate the model's capabilities.
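The construction sketched in this abstract, a lattice of cohesive links between grains, with failure governed by disordered strengths, can be illustrated with a minimal toy model. This is not the authors' code: the square grid, the uniform strength distribution, and the mode-I-only failure criterion are all simplifying assumptions standing in for the Voronoi grains and enhanced Timoshenko beams of the paper.

```python
import random

def build_lattice(nx, ny):
    """Square lattice of grain centres; each link stands in for one
    cohesive Timoshenko beam between two neighbouring grains."""
    links = []
    for j in range(ny):
        for i in range(nx):
            if i + 1 < nx:
                links.append(((i, j), (i + 1, j)))  # horizontal link
            if j + 1 < ny:
                links.append(((i, j), (i, j + 1)))  # vertical link
    return links

def apply_stretch(links, strain_x, rng):
    """Mode-I (opening) failure only: a link breaks when its axial
    strain exceeds a randomly drawn strength (material disorder)."""
    intact, broken = [], []
    for (a, b) in links:
        horizontal = a[1] == b[1]
        axial_strain = strain_x if horizontal else 0.0
        strength = rng.uniform(0.8e-3, 1.2e-3)  # disorder in link strength
        (broken if axial_strain > strength else intact).append((a, b))
    return intact, broken

rng = random.Random(0)
links = build_lattice(10, 10)
intact, broken = apply_stretch(links, strain_x=1.0e-3, rng=rng)
```

Under a uniform horizontal stretch, only the weaker horizontal links fail, which is the essence of disorder-driven crack nucleation at many sites at once.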
Computer Modeling of Violent Intent: A Content Analysis Approach
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.
2014-01-03
We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
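The scoring idea in this abstract, detecting signs of contention in unseen text, can be caricatured with a lexicon-based scorer. This is a crude, transparent stand-in for the trained models described above; the lexicon and function names are hypothetical, purely for illustration.

```python
CONTENTION_TERMS = {"enemy", "fight", "destroy", "traitor"}  # illustrative lexicon

def contention_score(text):
    """Fraction of tokens drawn from a hypothetical lexicon of contentious
    rhetoric; a simplistic proxy for a trained violent-intent model."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in CONTENTION_TERMS) / len(tokens)

def rank_documents(docs):
    """Rank unseen documents by contention score, highest first."""
    return sorted(docs, key=contention_score, reverse=True)
```

A real system would learn weighted features from labelled corpora rather than count fixed keywords, but the ranking interface is the same.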
A fuzzy approach to the Weighted Overlap Dominance model
DEFF Research Database (Denmark)
Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt
2013-01-01
Decision support models are required to handle the various aspects of multi-criteria decision problems in order to help the individual understand its possible solutions. In this sense, such models have to be capable of aggregating and exploiting different types of measurements and evaluations...... in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures...... are introduced for characterizing the type of uncertainty being expressed by intervals, examining at the same time how the WOD model handles both non-interval as well as interval data, and secondly, relevance degrees are proposed for obtaining a ranking over the alternatives. Hence, a complete methodology...
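The interval-valued outranking idea can be made concrete with a generic dominance degree between intervals. Note this is not the exact WOD rule of the paper: the uniform-draw interpretation and the averaging-based ranking are assumptions chosen for simplicity.

```python
def dominance_degree(a, b):
    """Degree to which interval a = (lo, hi) outranks interval b: the
    probability that a uniform draw from a exceeds a uniform draw from b.
    (A generic interval-outranking index, not the exact WOD rule.)"""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_lo >= b_hi:
        return 1.0
    if a_hi <= b_lo:
        return 0.0
    n = 1000  # midpoint-rule integration over interval a
    total = 0.0
    for i in range(n):
        x = a_lo + (a_hi - a_lo) * (i + 0.5) / n
        if b_hi == b_lo:
            total += 1.0 if x > b_lo else 0.0
        else:
            total += min(max((x - b_lo) / (b_hi - b_lo), 0.0), 1.0)
    return total / n

def rank_alternatives(alternatives):
    """Order labelled intervals by their mean dominance over the others."""
    score = {name: sum(dominance_degree(iv, jv)
                       for other, jv in alternatives.items() if other != name)
             for name, iv in alternatives.items()}
    return sorted(score, key=score.get, reverse=True)
```

Uniquely-graded (point) evaluations are handled as degenerate intervals, mirroring the abstract's claim that the model treats non-interval and interval data uniformly.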
A reservoir simulation approach for modeling of naturally fractured reservoirs
Directory of Open Access Journals (Sweden)
H. Mohammadi
2012-12-01
Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoir was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. Main rock properties of each gridblock were defined for two different types of gridblocks called matrix and fracture gridblocks. These two gridblocks were different in porosity and permeability values which were higher for fracture gridblocks compared to the matrix gridblocks. This model was solved using the implicit finite difference method. Results showed an improvement in the Warren and Root model especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point as predicted by other investigators. Effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.
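The gridblock scheme described above, alternating matrix and fracture blocks with distinct porosity and permeability, solved by an implicit finite difference method, can be sketched in one dimension. All property values are illustrative assumptions, not the paper's data; the tridiagonal (Thomas) solve is the standard way to handle the implicit step.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_pressure(p, k, phi, dt=1.0, dx=1.0, mu_ct=1.0):
    """One implicit step of phi*ct*dp/dt = d/dx(k/mu dp/dx) with per-block
    permeability k and porosity phi; no-flow boundaries, harmonic-mean
    transmissibilities between adjacent blocks."""
    n = len(p)
    a, b, c = [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        kw = 2 * k[i] * k[i - 1] / (k[i] + k[i - 1]) if i > 0 else 0.0
        ke = 2 * k[i] * k[i + 1] / (k[i] + k[i + 1]) if i < n - 1 else 0.0
        tw = kw * dt / (mu_ct * phi[i] * dx * dx)
        te = ke * dt / (mu_ct * phi[i] * dx * dx)
        a[i], c[i], b[i] = -tw, -te, 1.0 + tw + te
    return thomas(a, b, c, p[:])

# Alternating matrix (low k, phi) and fracture (high k, phi) gridblocks:
n = 20
k   = [100.0 if i % 2 else 1.0 for i in range(n)]
phi = [0.30 if i % 2 else 0.05 for i in range(n)]
p = [1000.0] * n
p[0] = 500.0  # initial drawdown at the well block
for _ in range(5):
    p = step_pressure(p, k, phi)
```

The implicit scheme is unconditionally stable, which matters here because the large fracture-to-matrix permeability contrast makes explicit stepping impractical.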
Fibroblast motility on substrates with different rigidities: modeling approach
Gracheva, Maria; Dokukina, Irina
2009-03-01
We develop a discrete model for cell locomotion on substrates with different rigidities and simulate the experiments described in Lo, Wang, Dembo, Wang (2000) ``Cell movement is guided by the rigidity of the substrate'', Biophys. J. 79: 144-152. In these experiments, fibroblasts were plated on a substrate with a step rigidity and showed a preference for locomotion over the stiffer side of the substrate when approaching the boundary between the soft and the stiff sides. The model reproduces the experimentally observed behavior of fibroblasts. In particular, we are able to show with our model how cell characteristics (such as cell length, shape, area and speed) change during cell crawling through the ``soft-stiff'' substrate boundary. Our model also suggests a temporary increase of both cell speed and area at the very moment when the cell leaves the soft side of the substrate.

Modeling fabrication of nuclear components: An integrative approach
Energy Technology Data Exchange (ETDEWEB)
Hench, K.W.
1996-08-01
Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
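The first stage described above, a facility layout formulated as a quadratic assignment problem and attacked with an evolutionary heuristic, can be sketched on a toy instance. The flow and distance matrices below are invented for illustration (flows standing in for personnel movement between operations), and the survivor-plus-mutation loop is a deliberately minimal evolutionary scheme, not the dissertation's actual algorithm.

```python
import random

def qap_cost(perm, flow, dist):
    """Quadratic assignment cost: sum over operation pairs of
    (flow between them) x (distance between their assigned locations)."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def evolve(flow, dist, pop=30, gens=200, seed=0):
    """Toy evolutionary heuristic: keep the best half of a population of
    permutations and refill it with swap-mutated copies."""
    rng = random.Random(seed)
    n = len(flow)
    population = []
    for _ in range(pop):
        perm = list(range(n))
        rng.shuffle(perm)
        population.append(perm)
    for _ in range(gens):
        population.sort(key=lambda q: qap_cost(q, flow, dist))
        survivors = population[:pop // 2]
        children = []
        for q in survivors:
            child = q[:]
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda q: qap_cost(q, flow, dist))

# Operations 0-1 and 2-3 exchange material; locations 0-1 and 2-3 are
# adjacent.  Any layout pairing them up achieves the minimum cost of 20.
flow = [[0, 5, 0, 0], [5, 0, 0, 0], [0, 0, 0, 5], [0, 0, 5, 0]]
dist = [[0, 1, 9, 9], [1, 0, 9, 9], [9, 9, 0, 1], [9, 9, 1, 0]]
best = evolve(flow, dist)
```

In the dissertation's setting the "flow" entries would encode radiation exposure and material movement rather than generic traffic, but the optimization structure is the same.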
THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS
Directory of Open Access Journals (Sweden)
O. Chernyak
2016-12-01
Full Text Available The paper presents a synthesis of theoretical models of balance of payment crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of balance of payment crises, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance of payments crisis was defined using the signal approach. Threshold values for the indicators were selected with the help of a minimization function; crossing a threshold signals an increased probability of a balance of payment crisis.
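One common calculation of the Exchange Market Pressure Index, together with the signal approach's threshold rule, can be sketched as follows. The precision weighting and the mean-plus-k-sigma threshold are standard choices from this literature, assumed here for illustration; the paper compares several alternatives.

```python
import statistics

def emp_index(depreciation, reserve_loss, rate_change):
    """A common Exchange Market Pressure form: precision-weighted sum of
    currency depreciation, reserve losses and interest-rate rises, with
    weights inversely proportional to each component's variance."""
    series = [depreciation, reserve_loss, rate_change]
    weights = [1.0 / statistics.pvariance(s) for s in series]
    return [sum(w * s[t] for w, s in zip(weights, series))
            for t in range(len(depreciation))]

def crisis_signals(emp, k=1.5):
    """Signal approach: flag period t when the EMP index exceeds its
    mean by more than k standard deviations."""
    mu, sd = statistics.mean(emp), statistics.pstdev(emp)
    return [t for t, v in enumerate(emp) if v > mu + k * sd]

# Toy monthly data with a single pressure episode in period 3:
depreciation = [0.1, 0.2, 0.1, 5.0, 0.1, 0.2]
reserve_loss = [0.0, 0.1, 0.0, 3.0, 0.1, 0.0]
rate_change  = [0.0, 0.0, 0.1, 2.0, 0.0, 0.1]
emp = emp_index(depreciation, reserve_loss, rate_change)
```

The minimization step in the paper then tunes thresholds like k per indicator, typically by minimizing a noise-to-signal ratio over historical crisis episodes.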
Risk Modeling Approaches in Terms of Volatility Banking Transactions
Directory of Open Access Journals (Sweden)
Angelica Cucşa (Stratulat)
2016-01-01
Full Text Available Risk has been inseparable from banking activity ever since banking systems emerged, and the topic remains equally important for current practice and for the future development of the banking sector. Banking sector development takes place within the constraints imposed by the nature and number of existing risks and of those that may arise, which act to limit banking activity. We intend to develop approaches to analysing risk through mathematical models, and also to develop a model for the Romanian capital market based on 10 actively traded picks that will test investor reaction under controlled and uncontrolled conditions of risk aggregated with harmonised factors.
Modeling software with finite state machines a practical approach
Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter
2006-01-01
Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
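The book's central idea, a state machine as an executable specification, can be shown in a few lines. The door-controller states and events below are invented for illustration; the table-driven style (an explicit (state, event) -> state map) is one common way to keep the specification and the implementation identical.

```python
class StateMachine:
    """Minimal table-driven finite state machine: a current state and a
    transition table mapping (state, event) to the next state."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state  # events with no matching transition are ignored

# A door controller as an executable specification:
door = StateMachine("closed", {
    ("closed", "open_cmd"):   "open",
    ("open",   "close_cmd"):  "closed",
    ("closed", "lock_cmd"):   "locked",
    ("locked", "unlock_cmd"): "closed",
})
```

Because every legal behavior is enumerated in the table, reviewing the specification and testing the implementation become the same activity, which is the effort reduction the book argues for.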
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
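The surrogate-then-MCMC pipeline outlined in the talk can be sketched end to end on a one-parameter toy problem. Everything here is a stand-in: the "expensive model" is an arbitrary smooth function, the surrogate is piecewise-linear tabulation rather than a spectral projection, and the prior, likelihood and proposal scales are assumed values.

```python
import math, random

def expensive_model(drag):
    """Stand-in for a costly ocean-model run: response of an observable
    to a wind-drag coefficient (purely illustrative)."""
    return math.tanh(3.0 * drag) + 0.1 * drag * drag

def build_surrogate(xs):
    """Tabulate the expensive model once on a grid and interpolate
    linearly: the cheapest analogue of a non-intrusive surrogate."""
    ys = [expensive_model(x) for x in xs]
    def surrogate(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = max(k for k in range(len(xs) - 1) if xs[k] <= x)
        w = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1.0 - w) * ys[i] + w * ys[i + 1]
    return surrogate

def metropolis(surrogate, observed, sigma=0.05, n=2000, seed=1):
    """Random-walk Metropolis on the drag coefficient: Gaussian likelihood
    around the observation, flat prior on [0, 1], surrogate in the loop."""
    rng = random.Random(seed)
    def log_post(x):
        if not 0.0 <= x <= 1.0:
            return -math.inf
        return -0.5 * ((surrogate(x) - observed) / sigma) ** 2
    x, lp = 0.5, log_post(0.5)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, 0.1)
        lpp = log_post(prop)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = prop, lpp
        chain.append(x)
    return chain

surrogate = build_surrogate([i / 10.0 for i in range(11)])
observed = expensive_model(0.3)   # synthetic "measurement"
chain = metropolis(surrogate, observed)
estimate = sum(chain[500:]) / len(chain[500:])
```

The payoff is that the expensive model runs only eleven times to build the table, while the sampler evaluates the cheap surrogate thousands of times, which is exactly why surrogates make Bayesian updating of ocean-model parameters tractable.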
Understanding Gulf War Illness: An Integrative Modeling Approach
2017-10-01
Cited work includes: high-order diffusion imaging in a rat model of Gulf War Illness (§these authors contributed equally to the work; Brain, Behavior and Immunity); astrocyte-specific transcriptome responses to neurotoxicity (§these authors contributed equally to the work; submitted for internal CDC-NIOSH review); ...Antagonist: Evaluation of Beneficial Effects for Gulf War Illness; and 4) GW160116 (Nathanson), a genomics approach to find gender-specific mechanisms of GWI
An Approach for Modeling and Formalizing SOA Design Patterns
Tounsi, Imen; Hadj Kacem, Mohamed; Hadj Kacem, Ahmed; Drira, Khalil
2013-01-01
Although design patterns have become increasingly popular, most of them are presented in an informal way, which can give rise to ambiguity and may lead to their incorrect usage. Patterns proposed by the SOA design pattern community are described with informal visual notations. Modeling SOA design patterns with a standard formal notation contributes to avoiding misunderstanding by software architects and helps endow design methods with refinement approaches for...
An approach for quantifying small effects in regression models.
Bedrick, Edward J; Hund, Lauren
2018-04-01
We develop a novel approach for quantifying small effects in regression models. Our method is based on variation in the mean function, in contrast to methods that focus on regression coefficients. Our idea applies in diverse settings such as testing for a negligible trend and quantifying differences in regression functions across strata. Straightforward Bayesian methods are proposed for inference. Four examples are used to illustrate the ideas.
A Conditional Approach to Panel Data Models with Common Shocks
Directory of Open Access Journals (Sweden)
Giovanni Forchini
2016-01-01
Full Text Available This paper studies the effects of common shocks on the OLS estimators of the slopes’ parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on using results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.
Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach
DEFF Research Database (Denmark)
Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper
2017-01-01
We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects...... are in focus, we show that qualitative agreement between rotatory strength parameters calculated by full quantum mechanical calculations and the more efficient embedding calculations can be obtained. An important aspect in the computation of reliable absorption parameters is the need for conformational...
Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach
Directory of Open Access Journals (Sweden)
Alistair McNair Senior
2016-01-01
Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
Modeling Defibrillation of the Heart: Approaches and Insights
Trayanova, Natalia; Constantino, Jason; Ashihara, Takashi; Plank, Gernot
2012-01-01
Cardiac defibrillation, as accomplished nowadays by automatic, implantable devices (ICDs), constitutes the most important means of combating sudden cardiac death. While ICD therapy has proved to be efficient and reliable, defibrillation is a traumatic experience. Thus, research on defibrillation mechanisms, particularly aimed at lowering defibrillation voltage, remains an important topic. Advancing our understanding towards a full appreciation of the mechanisms by which a shock interacts with the heart is the most promising approach to achieve this goal. The aim of this paper is to assess the current state-of-the-art in ventricular defibrillation modeling, focusing on both numerical modeling approaches and major insights that have been obtained using defibrillation models, primarily those of realistic ventricular geometry. The paper showcases the contributions that modeling and simulation have made to our understanding of the defibrillation process. The review thus provides an example of biophysically based computational modeling of the heart (i.e., cardiac defibrillation) that has advanced the understanding of cardiac electrophysiological interaction at the organ level and has the potential to contribute to the betterment of the clinical practice of defibrillation. PMID:22273793
Ecotoxicological modelling of cosmetics for aquatic organisms: A QSTR approach.
Khan, K; Roy, K
2017-07-01
In this study, externally validated quantitative structure-toxicity relationship (QSTR) models were developed for the toxicity of cosmetic ingredients to three ecotoxicologically relevant organisms, namely Pseudokirchneriella subcapitata, Daphnia magna and Pimephales promelas, following the OECD guidelines. The final models were developed by the partial least squares (PLS) regression technique, which is more robust than multiple linear regression. The obtained model for P. subcapitata shows that molecular size and complexity have significant impacts on the toxicity of cosmetics. In the case of P. promelas and D. magna, we found that the largest contributions to toxicity were made by hydrophobicity and van der Waals surface area, respectively. All models were validated on both internal and test compounds employing multiple strategies. For each QSTR model, applicability domain studies were also performed using the "Distance to Model in X-space" method. A comparison was made with the ECOSAR predictions in order to demonstrate the good predictive performance of our developed models. Finally, the individual models were applied to predict toxicity for an external set of 596 personal care products having no experimental data for at least one of the endpoints, and the compounds were ranked in decreasing order of toxicity using a scaling approach.
Consensus approach for modeling HTS assays using in silico descriptors
Directory of Open Access Journals (Sweden)
Ahmed eAbdelaziz Sayed
2016-02-01
Full Text Available The need for filling information gaps while reducing toxicity testing in animals is becoming more predominant in risk assessment. Recent legislation accepts in silico approaches for predicting toxicological outcomes. This article describes the results of Quantitative Structure Activity Relationship (QSAR) modeling efforts within the Tox21 Data Challenge 2014, which achieved the best balanced accuracy across all molecular pathway endpoints as well as the highest scores for ATAD5 and mitochondrial membrane potential disruption. The automated QSPR workflow system OCHEM (http://ochem.eu), the analytics platform KNIME and the statistics software CRAN R were used to conduct the analysis and develop consensus models using ten different descriptor sets. A detailed analysis of QSAR models for all 12 molecular pathways and of the effect of the underlying models' accuracy on the quality of the consensus model is provided. The resulting consensus models yielded a balanced accuracy as high as 88.1%±0.6 for mitochondrial membrane disruptors. Such high balanced accuracy and the use of the applicability domain show a promising potential for in silico modeling to complement the design of HTS screening experiments. The summary statistics of all models are publicly available online at https://github.com/amaziz/Tox21-Challenge-Publication while the developed consensus models can be accessed at http://ochem.eu/article/98009.
A multi-model ensemble approach to seabed mapping
Diesing, Markus; Stephens, David
2015-06-01
Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea with the aim to derive accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined to classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were however not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
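The two devices at the heart of this abstract, majority voting across classifiers and using inter-model agreement as a confidence measure, fit in a few lines. The substrate classes and the five "classifiers" below are invented placeholders for the trained models in the study.

```python
from collections import Counter

def ensemble_predict(predictions):
    """Majority vote over per-model class predictions for one location.
    Returns (winning class, agreement), where agreement is the fraction
    of models voting for the winner and serves as a confidence measure."""
    votes = Counter(predictions)
    winner, count = votes.most_common(1)[0]
    return winner, count / len(predictions)

# Five hypothetical classifiers predicting substrate at three locations:
per_model = [
    ["sand", "mud", "gravel"],   # k-Nearest Neighbour
    ["sand", "mud", "sand"],     # Support Vector Machine
    ["sand", "rock", "gravel"],  # Classification Tree
    ["sand", "mud", "gravel"],   # Random Forest
    ["mud",  "mud", "gravel"],   # Neural Network
]
results = [ensemble_predict([m[loc] for m in per_model]) for loc in range(3)]
```

Mapping the agreement value per grid cell yields exactly the kind of spatially explicit confidence layer the abstract proposes, at no extra modelling cost.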
Policy harmonized approach for the EU agricultural sector modelling
Directory of Open Access Journals (Sweden)
G. SALPUTRA
2008-12-01
Full Text Available The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of the EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by the producer price and policy price add-ons. Using the AGMEMOD model, the impacts upon beef production of two possible EU agricultural policy scenarios have been analysed: full decoupling with a switch from the historical to the regional Single Payment scheme, or alternatively re-distribution of country direct payment envelopes via the introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how common policies are implemented within analytical models. In the second section of the paper the structure of the AGMEMOD models is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper's penultimate section, while section 5 concludes.
Metabolic network modeling approaches for investigating the "hungry cancer".
Sharma, Ashwini Kumar; König, Rainer
2013-08-01
Metabolism is the functional phenotype of a cell, at a given condition, resulting from an intricate interplay of various regulatory processes. The study of these dynamic metabolic processes and their capabilities helps to identify the fundamental properties of living systems. Metabolic deregulation is an emerging hallmark of cancer cells. This deregulation results in a rewiring of the metabolic circuitry, conferring an exploitative metabolic advantage on the tumor cells, which leads to a distinct benefit in survival and lays the basis for unbound progression. Metabolism can be considered a thermodynamically open system in which source substrates of high value are processed through a well-established, interconnected biochemical conversion system, strictly obeying physicochemical principles, generating useful intermediates and finally resulting in the release of byproducts. Based on this basic principle of an input-output balance, various models have been developed to interrogate metabolism and elucidate its underlying functional properties. However, only a few modeling approaches have proved computationally feasible in elucidating the metabolic nature of cancer at a systems level. Besides this, statistical approaches have been set up to identify biochemical pathways that are more relevant for specific types of tumor cells. In this review, we briefly introduce the basic statistical approaches, followed by the major modeling concepts. We place an emphasis on the methods and their applications that have been used most extensively in understanding the metabolic remodeling of cancer. Copyright © 2013 Elsevier Ltd. All rights reserved.
Multiscale modeling of alloy solidification using a database approach
Tan, Lijian; Zabaras, Nicholas
2007-11-01
A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the interested problem and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required
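The database step described above, keying stored microscale results by solution features of the cheap model (interface velocity and thermal gradient), can be illustrated with its simplest variant, a nearest-neighbour lookup in feature space. The entries, field names and values below are entirely hypothetical; the paper interpolates in feature space rather than taking the single closest sample.

```python
import math

def nearest_entry(database, velocity, gradient):
    """Return the stored sample problem whose (interface velocity,
    thermal gradient) features are closest to the queried location."""
    def distance(entry):
        dv = entry["velocity"] - velocity
        dg = entry["gradient"] - gradient
        return math.hypot(dv, dg)
    return min(database, key=distance)

# Hypothetical microscale results keyed by the two solution features:
database = [
    {"velocity": 0.1, "gradient": 5.0,  "liquid_fraction": 0.60, "arm_spacing": 12.0},
    {"velocity": 0.5, "gradient": 5.0,  "liquid_fraction": 0.45, "arm_spacing": 7.0},
    {"velocity": 0.1, "gradient": 20.0, "liquid_fraction": 0.30, "arm_spacing": 4.0},
]
hit = nearest_entry(database, velocity=0.12, gradient=6.0)
```

At every macroscale location, the two features extracted from the sharp-interface solution select (or, in the paper, interpolate between) precomputed microscale answers for the liquid volume fraction and microstructure features.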
Object-Oriented Approach to Modeling Units of Pneumatic Systems
Directory of Open Access Journals (Sweden)
Yu. V. Kyurdzhiev
2014-01-01
Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted: a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with the flows of mass. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and the interaction of the elements are implemented with object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs a tact of integration, i.e. calculation of the method coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the method proposed by the authors features easy enhancement, code reuse, and high reliability
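The pattern described above, a base class from which every design-scheme element derives, plus a shared Runge-Kutta "tact" iterated over a list of objects, can be sketched as follows. The toy dynamics (a constant-inflow cavity, a piston relaxing toward a stop) are invented stand-ins; note also that per-object staging is only valid here because each unit's derivative depends on its own state alone, whereas coupled units must be advanced stage by stage together, as the article's every-fourth-iteration algorithm does.

```python
class Unit:
    """Base class: every element of the design scheme exposes its state
    and its derivative so one shared integrator can advance them all."""
    def state(self):        raise NotImplementedError
    def set_state(self, s): raise NotImplementedError
    def deriv(self):        raise NotImplementedError

class Cavity(Unit):
    """Flow cavity: pressure rises with a constant net mass inflow (toy law)."""
    def __init__(self, pressure, fill_rate):
        self.p, self.fill_rate = pressure, fill_rate
    def state(self):        return self.p
    def set_state(self, s): self.p = s
    def deriv(self):        return self.fill_rate

class Piston(Unit):
    """Material point: relaxes toward an end stop, x' = k*(stop - x) (toy law)."""
    def __init__(self, x, stop, k=1.0):
        self.x, self.stop, self.k = x, stop, k
    def state(self):        return self.x
    def set_state(self, s): self.x = s
    def deriv(self):        return self.k * (self.stop - self.x)

def rk4_tact(units, dt):
    """One integration tact: classic fourth-order Runge-Kutta applied
    uniformly to every object in the (unidirectional) list of units."""
    for u in units:
        y0 = u.state()
        k1 = u.deriv()
        u.set_state(y0 + dt / 2 * k1); k2 = u.deriv()
        u.set_state(y0 + dt / 2 * k2); k3 = u.deriv()
        u.set_state(y0 + dt * k3);     k4 = u.deriv()
        u.set_state(y0 + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))

units = [Cavity(1.0, 0.5), Piston(0.0, 1.0)]
for _ in range(100):   # integrate to t = 5.0
    rk4_tact(units, dt=0.05)
```

Adding a new element (diaphragm, spring, bellows) means deriving one more class and appending it to the list, which is exactly the enhancement and code-reuse benefit the article claims for the object-oriented formulation.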
Vitoria, Marco; Ford, Nathan; Doherty, Meg; Flexner, Charles
2014-01-01
The global scale-up of antiretroviral therapy (ART) over the past decade represents one of the great public health and human rights achievements of recent times. Moving from an individualized treatment approach to a simplified and standardized public health approach has been critical to ART scale-up, simplifying both prescribing practices and supply chain management. In terms of the latter, the risk of stock-outs can be reduced and simplified prescribing practices support task shifting of care to nursing and other non-physician clinicians; this strategy is critical to increase access to ART care in settings where physicians are limited in number. In order to support such simplification, successive World Health Organization guidelines for ART in resource-limited settings have aimed to reduce the number of recommended options for first-line ART in such settings. Future drug and regimen choices for resource-limited settings will likely be guided by the same principles that have led to the recommendation of a single preferred regimen and will favour drugs that have the following characteristics: minimal risk of failure, efficacy and tolerability, robustness and forgiveness, no overlapping resistance in treatment sequencing, convenience, affordability, and compatibility with anti-TB and anti-hepatitis treatments.
Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches
Directory of Open Access Journals (Sweden)
Sudin Bhattacharya
2012-12-01
Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.
Overview of the FEP analysis approach to model development
International Nuclear Information System (INIS)
Bailey, L.
1998-01-01
This report heads a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP Analysis - compilation and structuring of a FEP database; Stage 2: Scenario and Conceptual Model Development; Stage 3: Mathematical Model Development; Stage 4: Software Development; Stage 5: Confidence Building. This report describes the development and structuring of a FEP database as a Master Directed Diagram (MDD) and explains how this may be used to identify different modelling scenarios, based upon the identification of scenario-defining FEPs. The methodology describes how the possible evolution of a repository system can be addressed in terms of a base scenario, a broad and reasonable representation of the 'natural' evolution of the system, and a number of variant scenarios representing the effects of probabilistic events and processes. The MDD has been used to identify conceptual models to represent the base scenario, and the interactions between these conceptual models have been systematically reviewed using a matrix diagram technique. This has led to the identification of modelling requirements for the base scenario, against which existing assessment software capabilities have been reviewed. A mechanism for combining probabilistic scenario-defining FEPs to construct multi-FEP variant scenarios has been proposed and trialled using the concept of a 'timeline', a defined sequence of events from which consequences can be assessed. An iterative approach, based on conservative modelling principles, has been proposed for the evaluation of
Quantitative versus qualitative modeling: a complementary approach in ecosystem study.
Bondavalli, C; Favilla, S; Bodini, A
2009-02-01
Natural disturbance or human perturbation acts upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitudes. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interaction. To avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form that can be used in qualitative analysis is described in this paper; it takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
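The press-perturbation logic behind loop analysis can be sketched numerically. The 3-species community matrix below is invented for illustration; in the paper's approach its magnitudes would come from an ecological flow network rather than being assumed.

```python
import numpy as np

# Hypothetical community (Jacobian) matrix: entry (i, j) is the effect of
# species j on the growth of species i. Signs encode a resource-consumer-
# predator chain with self-limitation; magnitudes are illustrative only.
A = np.array([[-1.0, -0.5,  0.0],   # resource: self-limited, eaten by consumer
              [ 0.4, -0.2, -0.6],   # consumer: eats resource, eaten by predator
              [ 0.0,  0.5, -0.1]])  # predator: eats consumer

# Press perturbation (loop analysis, quantitative form): a sustained increase
# in the growth rate of species j shifts the equilibrium abundances by the
# j-th column of -A^{-1}; the sign of each entry is the predicted direction.
response = -np.linalg.inv(A)
```

With quantified link strengths the prediction is unambiguous: for instance, `response[2, 0]` gives the (here positive) change in the predator following resource enrichment, where a purely qualitative analysis could return an undetermined sign.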
Fugacity superposition: a new approach to dynamic multimedia fate modeling.
Hertwich, E G
2001-08-01
The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
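The superposition and steady-state properties described above can be sketched for any Mackay-type model with linear, time-invariant coefficients. The two-compartment transfer matrix below is invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-compartment transfer matrix A (1/day): off-diagonal entries
# are inter-compartment transfers, diagonal entries are total losses.
A = np.array([[-0.30,  0.05],
              [ 0.10, -0.20]])

def inventory(t, m0):
    """Level IV initial-condition solution: m(t) = exp(A t) m0."""
    return expm(A * t) @ m0

# Superposition over initial inventories: the response to the combined
# initial condition equals the sum of single-compartment responses.
t = 5.0
combined = inventory(t, np.array([1.0, 2.0]))
separate = inventory(t, np.array([1.0, 0.0])) + inventory(t, np.array([0.0, 2.0]))

# Time-integrated inventory of the initial-condition problem equals the
# steady-state (level III) inventory for a constant emission m0: -A^{-1} m0.
integrated = -np.linalg.solve(A, np.array([1.0, 2.0]))
```

Because the differential equations are solved once (via the matrix exponential), responses to arbitrary time-varying emissions follow by convolving these solutions with the emission profile, with no repeated numerical integration.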
A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER
Directory of Open Access Journals (Sweden)
Hennie Husniah
2016-05-01
Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical system approach from the known standard Raz and Assa model, and we find some important conclusions that were not discussed in the original model. The model assumes that in the absence of technology transfer from a leader to a follower, both the leader and the follower have the capability to grow independently with a known upper limit of development. We obtain a rich mathematical structure for the steady state solution of the model. We discuss a special situation in which the upper limit of the technological development of the follower is higher than that of the leader, but the leader has started earlier than the follower in implementing the technology. In this case we show that a paradox, in which the follower is unable to reach its original upper limit of technological development, could appear whenever the transfer rate is sufficiently high. We propose a new model to increase realism, so that any technology transfer rate could only have a positive effect in accelerating the growth of the follower toward its original upper limit of development.
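A hedged sketch of the kind of dynamics described: this is an illustrative coupled-logistic system, not the authors' reconstruction of the Raz-Assa model, and all parameter values are invented.

```python
from scipy.integrate import solve_ivp

# Leader L and follower F grow logistically toward their own limits KL, KF;
# a transfer term g*(L - F) pulls the follower toward the leader's level.
rL, rF = 0.5, 0.4
KL, KF = 1.0, 1.5    # follower's upper limit exceeds the leader's

def final_follower_level(g, t_end=200.0):
    def rhs(t, y):
        L, F = y
        dL = rL * L * (1 - L / KL)
        dF = rF * F * (1 - F / KF) + g * (L - F)
        return [dL, dF]
    sol = solve_ivp(rhs, [0.0, t_end], [0.5, 0.01], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

no_transfer = final_follower_level(0.0)  # follower approaches its own KF
high_rate = final_follower_level(5.0)    # strong coupling locks F near KL < KF
```

In this toy setting a sufficiently high transfer rate ties the follower to the leader's lower equilibrium, illustrating the paradox: the follower ends up short of the upper limit it could have reached on its own.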
Steady state HNG combustion modeling
Energy Technology Data Exchange (ETDEWEB)
Louwers, J.; Gadiot, G.M.H.J.L. [TNO Prins Maurits Lab., Rijswijk (Netherlands); Brewster, M.Q. [Univ. of Illinois, Urbana, IL (United States); Son, S.F. [Los Alamos National Lab., NM (United States); Parr, T.; Hanson-Parr, D. [Naval Air Warfare Center, China Lake, CA (United States)
1998-04-01
Two simplified modeling approaches are used to model the combustion of Hydrazinium Nitroformate (HNF, N{sub 2}H{sub 5}-C(NO{sub 2}){sub 3}). The condensed phase is treated by high activation energy asymptotics. The gas phase is treated by two limit cases: the classical high activation energy, and the recently introduced low activation energy approach. This results in simplification of the gas phase energy equation, making an (approximate) analytical solution possible. The results of both models are compared with experimental results of HNF combustion. It is shown that the low activation energy approach yields better agreement with experimental observations (e.g. regression rate and temperature sensitivity), than the high activation energy approach.
A comprehensive approach to age-dependent dosimetric modeling
Energy Technology Data Exchange (ETDEWEB)
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to the best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
A cascade modelling approach to flood extent estimation
Pedrozo-Acuña, Adrian; Rodríguez-Rincón, Juan Pablo; Breña-Naranjo, Agustin
2014-05-01
Recent efforts dedicated to the generation of new flood risk management strategies have pointed out that a possible way forward for improvement in this field relies on the reduction and quantification of the uncertainties associated with the prediction system. With the purpose of reducing these uncertainties, this investigation follows a cascade modelling approach (meteorological - hydrological - 2D hydrodynamic) in combination with high-quality data (LiDAR, satellite imagery, precipitation) to study an extreme event registered last year in Mexico. The presented approach is useful both for the characterisation of epistemic uncertainties and for the generation of flood management strategies through probabilistic flood maps. Uncertainty is considered in both the meteorological and hydrological models, and is propagated to a given flood extent as determined with a hydrodynamic model. Although the methodology does not consider all the uncertainties that may be involved in the determination of a flooded area, it enables a better understanding of the interaction between errors in the set-up of models and their propagation to a given result.
A comprehensive approach to age-dependent dosimetric modeling
International Nuclear Information System (INIS)
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to the best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.
Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
Micromechanical modeling and inverse identification of damage using cohesive approaches
International Nuclear Information System (INIS)
Blal, Nawfal
2013-01-01
In this study a micromechanical model is proposed for a collection of cohesive zone models embedded between each two elements of a standard cohesive-volumetric finite element method. An equivalent 'matrix-inclusions' composite is proposed as a representation of the cohesive-volumetric discretization. The overall behaviour is obtained using homogenization approaches (the Hashin-Shtrikman scheme and the P. Ponte Castaneda approach). The derived model deals with elastic, brittle and ductile materials. It is valid whatever the loading triaxiality and the shape of the cohesive law, and leads to direct relationships between the overall material properties, the local cohesive parameters and the mesh density. First, rigorous bounds on the normal and tangential cohesive stiffnesses are obtained, leading to a suitable control of the artificial elastic compliance induced by intrinsic cohesive models. Second, theoretical criteria on damageable and ductile cohesive parameters are established (cohesive peak stress, critical separation, cohesive failure energy, ...). These criteria allow a practical calibration of the cohesive zone parameters as functions of the overall material properties and the mesh length. The main interest of such a calibration is its promising capacity to lead to a mesh-insensitive overall response in surface damage. (author) [fr
Artificial Life of Soybean Plant Growth Modeling Using Intelligence Approaches
Directory of Open Access Journals (Sweden)
Atris Suyantohadi
2010-03-01
Full Text Available Plant growth is a complex natural process, and its characteristics can be studied using intelligent approaches combined with an artificial life system. In this research, the natural growth process of soybean (Glycine Max L. Merr) was analyzed and synthesized through modeling with Artificial Neural Network (ANN) and Lindenmayer System (L-System) methods. The research aimed to design and visualize growth models of soybean varieties, which could help in studying plant botany in relation to fertilizer composition with Nitrogen (N), Phosphor (P) and Potassium (K). Soybean plant growth was analyzed based on treatments of fertilizer composition in experimental research to develop the growth model. Using N, P, K fertilizer compositions, the highest yield obtained was 2.074 tons/hectare. Using these models, an artificial life simulation identifying and visualizing the characteristics of soybean plant growth can be demonstrated and applied.
Vertically-integrated Approaches for Carbon Sequestration Modeling
Bandilla, K.; Celia, M. A.; Guo, B.
2015-12-01
Carbon capture and sequestration (CCS) is being considered as an approach to mitigate anthropogenic CO2 emissions from large stationary sources such as coal fired power plants and natural gas processing plants. Computer modeling is an essential tool for site design and operational planning as it allows prediction of the pressure response as well as the migration of both CO2 and brine in the subsurface. Many processes, such as buoyancy, hysteresis, geomechanics and geochemistry, can have important impacts on the system. While all of the processes can be taken into account simultaneously, the resulting models are computationally very expensive and require large numbers of parameters which are often uncertain or unknown. In many cases of practical interest, the computational and data requirements can be reduced by choosing a smaller domain and/or by neglecting or simplifying certain processes. This leads to a series of models with different complexity, ranging from coupled multi-physics, multi-phase three-dimensional models to semi-analytical single-phase models. Under certain conditions the three-dimensional equations can be integrated in the vertical direction, leading to a suite of two-dimensional multi-phase models, termed vertically-integrated models. These models are either solved numerically or simplified further (e.g., assumption of vertical equilibrium) to allow analytical or semi-analytical solutions. This presentation focuses on how different vertically-integrated models have been applied to the simulation of CO2 and brine migration during CCS projects. Several example sites, such as the Illinois Basin and the Wabamun Lake region of the Alberta Basin, are discussed to show how vertically-integrated models can be used to gain understanding of CCS operations.
Numerical modelling of carbonate platforms and reefs: approaches and opportunities
Energy Technology Data Exchange (ETDEWEB)
Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology
2001-07-01
This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus on future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain better knowledge and understanding of the controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance between two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common
Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach.
Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong
2015-01-01
Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons, including reorganizing inpatient journeys in a form that is more convenient to understand and browse. In this study, we present a probabilistic model-based approach to model and cluster medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of the inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories with respect to their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model the latent treatment topics underlying personalized inpatient journeys, and yields impressive clustering quality.
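The clustering stage can be sketched as follows, assuming the journeys have already been mapped into a probabilistic space (here, hypothetical distributions over four latent treatment topics; the real pipeline would obtain these from the fitted Bayesian HMM).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row is one inpatient journey represented as a probability distribution
# over 4 latent treatment topics (rows sum to 1); values are illustrative.
journeys = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.65, 0.25, 0.05, 0.05],
    [0.10, 0.10, 0.70, 0.10],
    [0.05, 0.15, 0.75, 0.05],
])

# Hierarchical (average-linkage) clustering on pairwise distances computed
# in the probabilistic space, cut into two clusters.
dist = pdist(journeys, metric="euclidean")
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
```

Journeys 0-1 and 2-3 fall into separate clusters, reflecting their distinct dominant treatment topics; on real data the distance metric over the probabilistic representations would encode both clinical and temporal similarity.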
A tantalum strength model using a multiscale approach: version 2
Energy Technology Data Exchange (ETDEWEB)
Becker, R; Arsenlis, A; Hommes, G; Marian, J; Rhee, M; Yang, L H
2009-09-21
A continuum strength model for tantalum was developed in 2007 using a multiscale approach. This was our first attempt at connecting simulation results from atomistic to continuum length scales, and much was learned that we were not able to incorporate into the model at that time. The tantalum model described in this report represents a second cut at pulling together multiscale simulation results into a continuum model. Insight gained in creating previous multiscale models for tantalum and vanadium was used to guide the model construction and functional relations for the present model. While the basic approach follows that of the vanadium model, there are significant departures. Some of the recommendations from the vanadium report were followed, but not all. Results from several new analysis techniques have not yet been incorporated due to technical difficulties. Molecular dynamics simulations of single dislocation motion at several temperatures suggested that the thermal activation barrier was temperature dependent. This dependency required additional temperature functions be included within the assumed Arrhenius relation. The combination of temperature-dependent functions created a complex model with a non-unique parameterization and extra model constants. The added complexity had no tangible benefits. The recommendation was to abandon the strict Arrhenius form and create a simpler curve fit to the molecular dynamics data for shear stress versus dislocation velocity. Functions relating dislocation velocity and applied shear stress were constructed for vanadium for both edge and screw dislocations. However, an attempt to formulate a robust continuum constitutive model for vanadium using both dislocation populations was unsuccessful; the level of coupling achieved was inadequate to constrain the dislocation evolution properly. Since the behavior of BCC materials is typically assumed to be dominated by screw dislocations, the constitutive relations were ultimately
Modeling the cometary environment using a fluid approach
Shou, Yinsi
Comets are believed to have preserved the building material of the early solar system and to hold clues to the origin of life on Earth. Abundant remote observations of comets by telescopes and in-situ measurements by a handful of space missions reveal that cometary environments are complicated by various physical and chemical processes among the neutral gases and dust grains released from comets, cometary ions, and the solar wind in interplanetary space. Therefore, physics-based numerical models are in demand to interpret the observational data and to deepen our understanding of the cometary environment. In this thesis, three models using a fluid approach, which include important physical and chemical processes underlying the cometary environment, have been developed to study the plasma, neutral gas, and dust grains, respectively. Although models based on the fluid approach have limitations in capturing all of the correct physics for certain applications, especially for very low gas density environments, they are computationally much more efficient than alternatives. In the simulations of comet 67P/Churyumov-Gerasimenko at various heliocentric distances with a wide range of production rates, our multi-fluid cometary neutral gas model and multi-fluid cometary dust model have achieved results comparable to the Direct Simulation Monte Carlo (DSMC) model, which is based on a kinetic approach that is valid in all collisional regimes. Therefore, our model is a powerful alternative to the particle-based model, especially for some computationally intensive simulations. Capable of accounting for the varying heating efficiency under various physical conditions in a self-consistent way, the multi-fluid cometary neutral gas model is a good tool to study the dynamics of the cometary coma with different production rates and heliocentric distances. The modeled H2O expansion speeds reproduce the general trend and the speed's nonlinear dependence on production rate
International energy market dynamics: a modelling approach. Tome 2
International Nuclear Information System (INIS)
Nachet, S.
1996-01-01
This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand was represented using a sector versus source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled and produced figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using a software that was constructed for this purpose. We explore the energy market future using three scenarios and obtain projections by 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends
International energy market dynamics: a modelling approach. Tome 1
International Nuclear Information System (INIS)
Nachet, S.
1996-01-01
This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand was represented using a sector versus source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled and produced figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using a software that was constructed for this purpose. We explore the energy market future using three scenarios and obtain projections by 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends
A distributed approach for parameters estimation in System Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.
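The estimation task each node would run can be sketched as follows. The one-parameter decay model and the multi-start scheme are illustrative assumptions; in the described environment each start would run on a different computational resource, exchanging candidate solutions through the relational database (reduced here to a comment).

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "experimental" data from a toy one-parameter model dx/dt = -k x,
# with small observation noise; k_true is what the estimation should recover.
t_obs = np.linspace(0, 5, 20)
k_true = 0.8
rng = np.random.default_rng(0)
x_obs = np.exp(-k_true * t_obs) + rng.normal(0, 0.01, t_obs.size)

def cost(params):
    """Sum of squared residuals between model prediction and data."""
    k = params[0]
    return np.sum((np.exp(-k * t_obs) - x_obs) ** 2)

# Multi-start local optimization: each start is an independent run that could
# be dispatched to a separate node; candidates would be swapped via a shared
# database rather than collected locally as done here.
best = min((minimize(cost, [s]) for s in rng.uniform(0.1, 2.0, 5)),
           key=lambda r: r.fun)
```

The best candidate's `best.x[0]` recovers a value close to the true rate constant; for real systems biology models the cost function would integrate the full ODE system against the measured time courses.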
New business models for electric cars-A holistic approach
International Nuclear Information System (INIS)
Kley, Fabian; Lerch, Christian; Dallinger, David
2011-01-01
Climate change and global resource shortages have led to a rethinking of traditional individual mobility services based on combustion engines. As a consequence of technological improvements, the first electric vehicles are now being introduced and greater market penetration can be expected. But any wider implementation of battery-powered electrical propulsion systems in the future will give rise to new challenges for both the traditional automotive industry and other new players, e.g. battery manufacturers, the power supply industry and other service providers. Different application cases of electric vehicles are currently being discussed, which means that numerous business models could emerge, leading to new shares in value creation and involving new players. Consequently, individual stakeholders are uncertain about which business models are really effective with regard to targeting a profitable overall concept. This paper therefore aims to define a holistic approach to developing business models for electric mobility, one which analyzes the system as a whole on the one hand and provides decision support for affected enterprises on the other. To do so, the basic elements of electric mobility are considered and topical approaches to business models for various stakeholders are discussed. The paper concludes by presenting a systemic instrument for business models based on morphological methods. - Highlights: → We present a systemic instrument to analyze business models for electric vehicles. → It provides decision support for enterprises dealing with electric vehicle innovations. → It combines business aspects of the triad of vehicle concepts, infrastructure and system integration. → In the market, activities in all domains have been initiated, but often with undefined or unclear structures.
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework for the authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, combined with (2) Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and the temporal-logic statements to be verified against them, making the verification process faster and more cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements and verifies whether the guideline fulfils these properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined from a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process.
Leader communication approaches and patient safety: An integrated model.
Mattson, Malin; Hellgren, Johnny; Göransson, Sara
2015-06-01
Leader communication is known to influence a number of employee behaviors. When it comes to the relationship between leader communication and safety, the evidence is scarcer and more ambiguous. The aim of the present study is to investigate whether, and in what way, leader communication relates to safety outcomes. The study examines two leader communication approaches: leader safety priority communication and feedback to subordinates. These approaches were assumed to affect safety outcomes via different employee behaviors. Questionnaire data, collected from 221 employees at two hospital wards, were analyzed using structural equation modeling. The two examined communication approaches were both positively related to safety outcomes, although the effect of leader safety priority communication was mediated by employee compliance and that of feedback communication by organizational citizenship behaviors. The findings suggest that leader communication plays a vital role in improving organizational and patient safety and that different communication approaches seem to positively affect different but equally essential employee safety behaviors. The results highlight the necessity for leaders to engage in one-way communication of safety values as well as in more relational feedback communication with their subordinates in order to enhance patient safety.
A probabilistic approach to the drag-based model
Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco
2018-02-01
The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model, introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average of the absolute differences between measurement and forecast is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs, to create a database of events useful for further validation of the approach.
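The probabilistic idea can be sketched with a Monte Carlo run of the standard analytic drag-based model (DBM) solution for a decelerating CME, r(t) = r0 + w·t + ln(1 + Γ(v0−w)t)/Γ: sample the inputs (initial speed v0, solar-wind speed w, drag parameter Γ) from distributions instead of fixing them, propagate each sample to 1 AU, and read the ToA spread off the resulting distribution. The input distributions and the starting distance below are illustrative, not the paper's.

```python
# Monte Carlo sketch of a probabilistic drag-based model: distributions in,
# a ToA distribution (hence a forecast with an error bar) out.
import numpy as np

AU_KM = 1.496e8          # 1 AU in km
R0_KM = 2.0e7            # assumed starting heliocentric distance of the CME front

def dbm_distance(t, v0, w, gamma):
    """Heliocentric distance [km] after t seconds, decelerating branch (v0 > w)."""
    dv = max(v0 - w, 1.0)                  # guard: stay on the decelerating branch
    return R0_KM + w * t + np.log1p(gamma * dv * t) / gamma

def time_of_arrival(v0, w, gamma, t_max=30 * 86400.0):
    """Bisection for r(t) = 1 AU; r(t) is monotone increasing in t."""
    lo, hi = 0.0, t_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if dbm_distance(mid, v0, w, gamma) < AU_KM:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
n = 2000
v0 = rng.normal(800.0, 50.0, n)                            # km/s, initial CME speed
w = rng.normal(400.0, 30.0, n)                             # km/s, ambient wind speed
gamma = np.clip(rng.normal(2e-8, 4e-9, n), 1e-9, None)     # 1/km, drag parameter

toa_hours = np.array([time_of_arrival(*p) for p in zip(v0, w, gamma)]) / 3600.0
print(np.median(toa_hours), toa_hours.std())
```

The spread of `toa_hours` plays the role of the forecast error that the paper compares against the observed residuals.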
Novel approach for modeling separation forces between deformable bodies.
Mahvash, Mohsen
2006-07-01
Many minimally invasive surgeries (MISs) involve removing whole organs or tumors that are connected to other organs. Development of haptic simulators that reproduce separation forces between organs can help surgeons learn MIS procedures. Powerful computational approaches such as finite-element methods generally cannot simulate separation in real time. This paper presents a novel approach for real-time computation of separation forces between deformable bodies. Separation occurs either by fracture, when a tool applies excessive force to the bodies, or by evaporation, when a laser beam burns the connection between the bodies. The separation forces are generated online from precalculated force-displacement functions that depend on the local adhesion/separation states between the bodies. The precalculated functions are accurately synthesized from a large number of force responses obtained through offline simulation, measurement, or analytical approximation during the preprocessing step. The approach does not require online computation of force versus global deformation to obtain separation forces; only online interpolation of the precalculated responses is required. The state of adhesion/separation during fracture and evaporation is updated by computationally simple models derived from the law of conservation of energy. An implementation of the approach for the haptic simulation of the removal of a diseased organ is presented, showing the fidelity of the simulation.
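The online step can be sketched as follows: the force-displacement response of the adhesive connection is tabulated offline, and at haptic rates we only interpolate the table, accumulate the work done, and flip the local state to "separated" once that work exceeds a fracture-energy budget (an energy-conservation update). The linear curve shape and the threshold `G_C` are invented for the illustration, not taken from the paper.

```python
# Precalculated-response sketch: online work is interpolation + energy
# bookkeeping only, with no finite-element solve in the loop.
import numpy as np

d_tab = np.linspace(0.0, 1.0, 50)     # displacement samples (precomputed offline)
f_tab = 10.0 * d_tab                  # tabulated adhesion force response (assumed)

G_C = 0.8                             # assumed fracture-energy budget

def make_contact():
    return {"separated": False, "work": 0.0, "d_prev": 0.0}

def separation_force(contact, d):
    """Online step: interpolate the precomputed curve, update the state."""
    if contact["separated"]:
        return 0.0
    f = float(np.interp(d, d_tab, f_tab))
    contact["work"] += f * (d - contact["d_prev"])   # incremental work of the tool
    contact["d_prev"] = d
    if contact["work"] >= G_C:                       # energy-based fracture criterion
        contact["separated"] = True
        return 0.0
    return f

contact = make_contact()
forces = [separation_force(contact, d) for d in np.linspace(0.0, 1.0, 101)]
print(forces[10], forces[-1])
```

The force rises while the connection holds and drops to zero once the accumulated work crosses the budget, mimicking fracture-driven separation.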
A chain reaction approach to modelling gene pathways.
Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen
2012-08-01
BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment on the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. By
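The chain-reaction idea can be illustrated with a minimal numerical sketch: genes along a pathway are treated as chemical species S1 → S2 → S3, and expression change propagates like a set of first-order reactions integrated over time. The rate constants here are illustrative stand-ins for nutrient-dependent reaction rates, not values from the study.

```python
# Three-species chain reaction integrated with forward Euler; total
# "expression" is conserved because each transfer term appears with both signs.
import numpy as np

k1, k2 = 0.8, 0.3               # assumed reaction (expression-transfer) rates
s = np.array([1.0, 0.0, 0.0])   # initial "expression" concentrated in S1
dt, steps = 0.01, 1000
for _ in range(steps):
    ds = np.array([
        -k1 * s[0],                 # S1 consumed
        k1 * s[0] - k2 * s[1],      # S2 produced from S1, consumed toward S3
        k2 * s[1],                  # S3 accumulates at the end of the chain
    ])
    s = s + dt * ds                 # forward-Euler step
print(s)
```

Over time the signal drains out of S1 and accumulates in S3, which is the kind of numerically observable "transformation of genes over time" the abstract refers to.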
Modeling of problems of projection: A non-countercyclic approach
Directory of Open Access Journals (Sweden)
Jason Ginsburg
2016-06-01
This paper describes a computational implementation of the recent Problems of Projection (POP) approach to the study of language (Chomsky 2013; 2015). While adopting the basic proposals of POP, notably with respect to how labeling occurs, we (a) attempt to formalize the basic proposals of POP, and (b) develop new proposals that overcome some problems with POP that arise with respect to cyclicity, labeling, and wh-movement operations. We show how this approach accounts for simple declarative sentences, ECM constructions, and constructions that involve long-distance movement of a wh-phrase (including the that-trace effect). We implemented these proposals with a computer model that automatically constructs step-by-step derivations of target sentences, thus making it possible to verify that these proposals work.
Model predictive control approach for a CPAP-device
Directory of Open Access Journals (Sweden)
Scheel Mathias
2017-09-01
The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard for treating the OSAS is continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach makes it possible to include the patient's breathing effort in the calculation of the control variable, so that a patient-individualized control strategy can be developed.
Optimizing nitrogen fertilizer use: Current approaches and simulation models
International Nuclear Information System (INIS)
Baethgen, W.E.
2000-01-01
Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically mean lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement the traditional approaches, and includes some examples of current applications in Africa and South America. (author)
Anomalous superconductivity in the tJ model; moment approach
DEFF Research Database (Denmark)
Sørensen, Mads Peter; Rodriguez-Nunez, J.J.
1997-01-01
By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) to the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimension. We propose that both the diagonal and the off-diagonal spectral functions...... Hartree shift which in the end enlarges the bandwidth of the free carriers, allowing us to take relatively high values of J/t and allowing superconductivity to live in the T-c-rho phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility......, chi(T), and the specific heat, C-v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T-c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recently published literature, showing a generic
CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach
DEFF Research Database (Denmark)
Sabaka, T.; Olsen, Nils; Tyler, Robert
2014-01-01
We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Leve...
A multi-region approach to modeling subsurface flow
International Nuclear Information System (INIS)
Gwo, J.P.; Yeh, G.T.; Wilson, G.V.
1990-01-01
In this approach the media are assumed to contain n pore regions at any physical point. Each region has a different pore size and different hydrologic parameters. Inter-region exchange is approximated by a linear transfer process. Based on the mass-balance principle, a system of equations governing the flow and mass exchange in structured or aggregated soils is derived. This system of equations is coupled through linear transfer terms representing the interchange among the different pore regions. A numerical MUlti-Region Flow (MURF) model, using the Galerkin finite element method to facilitate the treatment of local and field-scale heterogeneities, is developed to solve the system of equations. A sparse matrix solver is used to solve the resulting matrix equation, which makes the application of MURF to large field problems feasible in terms of CPU time and storage limitations. MURF is first verified by applying it to a ponding infiltration problem over a hill slope, which is a single-region problem that has been previously simulated by a single-region model. Very good agreement is obtained between the results from the two different models, and the MURF code is thus partially verified. It is then applied to a two-region fractured medium to investigate the effects of the multi-region approach on the flow field. The results are comparable to those obtained by other investigators. (Author) (15 refs., 6 figs., tab.)
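The linear inter-region transfer at a single physical point can be sketched for two pore regions: each region carries its own head, and the exchange flux is proportional to the head difference. MURF couples n such regions inside a Galerkin finite-element flow solver; the coefficient and initial heads below are illustrative only.

```python
# Two-region linear-transfer sketch: region 1 (e.g. macropores) and region 2
# (matrix) exchange water at a rate proportional to their head difference.
import numpy as np

alpha = 0.5                    # assumed linear transfer coefficient
h = np.array([2.0, 0.5])       # initial heads in the two pore regions
dt, steps = 0.01, 2000
for _ in range(steps):
    q = alpha * (h[0] - h[1])          # inter-region exchange flux
    h = h + dt * np.array([-q, q])     # region 1 loses exactly what region 2 gains
print(h)
```

Because the transfer term appears with opposite signs in the two equations, total mass is conserved and the heads relax toward a common equilibrium, which is the behavior the coupled multi-region system generalizes to n regions.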
Modelling public risk evaluation of natural hazards: a conceptual approach
Plattner, Th.
2005-04-01
In recent years, the treatment of natural hazards in Switzerland has shifted away from being hazard-oriented towards a risk-based approach. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, causes an optimization problem. The new focus therefore lies on mitigating the hazard's risk in accordance with economic, ecological and social considerations. This modern procedure requires an approach in which not only the technological, engineering or scientific aspects of defining the hazard or computing the risk are considered, but also the public's concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they do not know what risk level the public is willing to accept. Consequently, the authorities need to know what society thinks about risks. A formalized model that allows at least a crude simulation of the public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC) and several factors without any immediate relation to the risk itself, but to the evaluating person. Finally, the decision about the acceptance (Acc) of a certain risk i is made by comparing the perceived risk R_i,perc with the acceptable risk R_i,acc.
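The structure of the conceptual model can be made concrete with a deliberately crude sketch: a perceived risk is formed by scaling the computed risk with perception-affecting factors, and acceptance is a comparison against an acceptable-risk level derived from evaluation criteria. The multiplicative form, the factor values and the threshold are all invented here; the paper proposes the structure, not these numbers.

```python
# Illustrative-only sketch of the PAF/EC structure: R_perc = R * product(PAF),
# Acc = (R_perc <= R_acc).  All numbers are placeholders.

def perceived_risk(r_computed, paf_weights):
    """Perceived risk as computed risk scaled by perception-affecting factors
    (assumed multiplicative form)."""
    r = r_computed
    for w in paf_weights:
        r *= w
    return r

def is_accepted(r_perc, r_acc):
    """Acceptance decision: perceived risk compared with acceptable risk."""
    return r_perc <= r_acc

# Example: voluntariness lowers perceived risk (0.5), dread raises it (3.0).
r_perc = perceived_risk(1.0e-4, paf_weights=[0.5, 3.0])
print(r_perc, is_accepted(r_perc, r_acc=2.0e-4))
```

Even this toy form shows the model's point: two risks with the same computed magnitude can fall on opposite sides of the acceptance threshold once perception factors differ.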
Spintronic device modeling and evaluation using modular approach to spintronics
Ganguly, Samiran
Spintronics technology finds itself in an exciting stage today. Riding on the backs of rapid growth and impressive advances in materials and phenomena, it has started to make headway in the memory industry as solid state magnetic memories (STT-MRAM) and is considered a possible candidate to replace the CMOS when its scaling reaches physical limits. It is necessary to bring all these advances together in a coherent fashion to explore and evaluate the potential of spintronic devices. This work creates a framework for this exploration and evaluation based on Modular Approach to Spintronics, which encapsulate the physics of transport of charge and spin through materials and the phenomenology of magnetic dynamics and interaction in benchmarked elemental modules. These modules can then be combined together to form spin-circuit models of complex spintronic devices and structures which can be simulated using SPICE like circuit simulators. In this work we demonstrate how Modular Approach to Spintronics can be used to build spin-circuit models of functional spintronic devices of all types: memory, logic, and oscillators. We then show how Modular Approach to Spintronics can help identify critical factors behind static and dynamic dissipation in spintronic devices and provide remedies by exploring the use of various alternative materials and phenomena. Lastly, we show the use of Modular Approach to Spintronics in exploring new paradigms of computing enabled by the inherent physics of spintronic devices. We hope that this work will encourage more research and experiments that will establish spintronics as a viable technology for continued advancement of electronics.
A parsimonious approach to modeling animal movement data.
Directory of Open Access Journals (Sweden)
Yann Tremblay
Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data-accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5 and 12.0 km of the true location 50% of the time, and within 9.0, 10.5 and 20.0 km 90% of the time, for above-average, average and below-average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to a lack of sea-surface temperature gradients. In addition, we show that our model is flexible enough to solve the obstacle-avoidance problem by assimilating high-resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
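The bootstrap idea can be sketched in simplified form: perturb each recorded fix within its stated accuracy, weight the candidates by how plausible the implied steps to the neighbouring fixes are, and average. This is only a local stand-in for the paper's forward-particle-biased random walks, and it omits the SST, bathymetry and coastline assimilation; the synthetic track with one large ARGOS-like outlier is invented to show the effect.

```python
# Simplified bootstrap smoothing of a noisy track: candidates around each fix
# are weighted by consistency with the neighbouring fixes, shrinking outliers.
import numpy as np

rng = np.random.default_rng(2)
true = np.column_stack([np.arange(10.0), np.zeros(10)])   # true track, unit steps
obs = true + rng.normal(0.0, 0.1, true.shape)             # small location error
obs[5] += np.array([0.0, 8.0])                            # one bad fix (outlier)

def smooth_fix(i, n_boot=400, sd=1.0, step_sd=1.5):
    """Weighted bootstrap estimate of fix i from candidates around obs[i]."""
    cand = obs[i] + rng.normal(0.0, sd, (n_boot, 2))
    w = np.ones(n_boot)
    for j in (i - 1, i + 1):                  # neighbour-consistency weights
        if 0 <= j < len(obs):
            step = np.linalg.norm(cand - obs[j], axis=1)
            w *= np.exp(-0.5 * (step / step_sd) ** 2)
    return (cand * w[:, None]).sum(axis=0) / w.sum()

est = np.array([smooth_fix(i) for i in range(len(obs))])
rmse_obs = np.sqrt(((obs - true) ** 2).sum(axis=1).mean())
rmse_est = np.sqrt(((est - true) ** 2).sum(axis=1).mean())
print(rmse_obs, rmse_est)
```

The outlying fix is pulled strongly toward its neighbours while well-located fixes are barely moved, which is the qualitative behavior the full method achieves with properly propagated random walks.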
Ensembles modeling approach to study Climate Change impacts on Wheat
Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart
2017-04-01
Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for 2 representative concentration pathways (RCP) of atmospheric CO2 (4.5 and 8.5 W m-2) in the Pacific Northwest (PNW), USA. The aim was to assess how different process-based crop models could be used accurately for estimation of winter wheat growth, development and yield. Firstly, all models were calibrated for high rainfall, medium rainfall, low rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2 but this effect was more prominent under rainfed conditions than irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield under 14 GCMs during three prediction periods (2030, 2050 and 2070). We concluded that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.
Vector-model-supported approach in prostate plan optimization
International Nuclear Information System (INIS)
Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi
2017-01-01
The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters, so that the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
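The retrieval step can be sketched as a nearest-neighbour lookup: each treated case is reduced to a feature vector of structural/physiologic quantities extracted from its DICOM data, and for a new case the most similar vector's stored planning parameters seed the optimizer. The feature names, values and plan labels below are invented placeholders, and z-scored Euclidean distance is one plausible similarity measure, not necessarily the paper's.

```python
# Nearest-neighbour case retrieval in a (tiny) reference database of
# feature vectors; the matched case's planning parameters are reused.
import numpy as np

# Reference database: feature vectors + the planning parameters used then.
features = np.array([
    [45.0, 0.12, 60.2],     # e.g. target volume, overlap fraction, PTV size
    [60.1, 0.30, 80.5],
    [52.3, 0.22, 71.0],
])
plans = ["params_case_A", "params_case_B", "params_case_C"]

def retrieve(query, feats):
    """Index of the nearest neighbour in z-scored feature space."""
    mu, sigma = feats.mean(axis=0), feats.std(axis=0)
    z = (feats - mu) / sigma
    zq = (np.asarray(query) - mu) / sigma
    return int(np.argmin(np.linalg.norm(z - zq, axis=1)))

idx = retrieve([51.0, 0.20, 70.0], features)
print(plans[idx])   # starting parameters for the new case's optimization
```

Seeding the optimizer with a matched case's parameters is what lets the method skip much of the trial-and-error adjustment at the start of planning.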
Interdependence: a new model for the global approach to disability
Directory of Open Access Journals (Sweden)
Nathan Grills
2015-01-01
Disability affects over 1 billion people, and the WHO estimates that over 80% of individuals with disability live in low- and middle-income countries, where access to the health and social services needed to respond to disability is limited [1]. Compounding this poverty is the fact that medical and technological approaches to disability, however needed, are usually very expensive. Yet much can be done at low cost to increase the wellbeing of people with disability, and the church and Christians need to take a lead. The WHO's definition of disability highlights the challenge to us in global health. Disability has been defined by the WHO as "the interaction between a person's impairments and the attitudinal and environmental barriers that hinder their full and effective participation in society on an equal basis with others" [2]. This understanding of disability requires us to go beyond mere healing and towards inclusion in our response to chronic diseases and disability. This is known as the social model, and it requires societal attitudinal change and the modification of disabling environments in order to enable those with disability to be included in our community and churches. These are good responses, but the church needs to consider alternatives to the currently promoted models, which strive for independence as the ultimate endpoint. In this paper I introduce some disability-related articles in this issue and outline an approach that goes beyond the Social Model towards an Interdependence Model, which I think is a more Biblical model of disability and one which we Christians and churches in global health should consider. This model would go beyond changing society to accommodate people with disabilities, towards acknowledging that they play an important part in our community and indeed in our church. We need those people with disability to contribute, love and bless those with and without disabilities. And of course those with disability need the love, care and
A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION
Directory of Open Access Journals (Sweden)
H. Ju
2012-07-01
Bridge areas present difficulties for orthophoto generation, and to avoid "collapsed" bridges in the orthoimage, operator assistance is required to create the precise DBM (Digital Bridge Model) which is subsequently used for the orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a coarse DBM is produced from LiDAR data. Then, a robust co-registration between LiDAR intensity and the aerial image using the orientation constraint is performed. The coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris corners, PDF (Probability Density Function) feature-descriptor mean-shift matching, and RANSAC (RANdom SAmple Consensus) as its main components. After that, the bridge ROI (Region Of Interest) from the LiDAR data domain is projected into the aerial image domain as the ROI in the aerial image. Hough-transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st-order polynomial function is used, whereas for a curved bridge a 2nd-order polynomial function is used to fit the endpoints of the Hough linear features. The last step is the transformation of the smooth bridge boundaries from the aerial image back to the LiDAR data domain and their merging with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM, which can be further merged with the DTM (Digital Terrain Model) derived from LiDAR data to obtain a precise DSM (Digital Surface Model). Such a precise DSM can be used to improve orthophoto product quality.
A new approach for modeling dry deposition velocity of particles
Giardina, M.; Buffa, P.
2018-05-01
The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. Several models reported in the literature predict the dry deposition velocity of particles of different diameters, but many of them are not capable of representing dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only under specific conditions, and only if the data meet all of the assumptions required of the data used to define the model. In this paper a new dry deposition velocity model based on an electrical-analogy schema is proposed to overcome these issues. The dry deposition velocity is evaluated by assuming that the resistances that affect the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impact processes and turbulent ones. Comparisons with experimental data from the literature indicate that the proposed model captures, with good agreement, the main dry deposition phenomena for the examined environmental conditions and deposition surfaces. The proposed approach could easily be implemented within atmospheric dispersion modeling codes, efficiently addressing different deposition surfaces for several classes of particle pollutants.
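The electrical analogy can be illustrated with the classical resistance schema (the textbook baseline, not the authors' modified combination): aerodynamic and quasi-laminar resistances act in series, gravitational settling in parallel, giving vd = vg + 1/(ra + rb + ra·rb·vg), with the settling velocity vg from Stokes' law. The resistance values and particle properties below are illustrative.

```python
# Classical resistance-in-series model for dry deposition velocity,
# vd = vg + 1/(ra + rb + ra*rb*vg), with Stokes settling for vg.
import math

def settling_velocity(d_p, rho_p=1500.0, mu=1.8e-5, g=9.81):
    """Stokes settling velocity [m/s] for particle diameter d_p [m],
    particle density rho_p [kg/m^3], air viscosity mu [Pa s]."""
    return rho_p * d_p ** 2 * g / (18.0 * mu)

def deposition_velocity(d_p, ra=50.0, rb=300.0):
    """Dry deposition velocity [m/s]; ra, rb are assumed aerodynamic and
    quasi-laminar resistances [s/m] for the illustration."""
    vg = settling_velocity(d_p)
    return vg + 1.0 / (ra + rb + ra * rb * vg)

for d_p in (0.1e-6, 1e-6, 10e-6):
    print(d_p, deposition_velocity(d_p))
```

The paper's contribution is precisely to replace the simple series combination inside the quasi-laminar sub-layer with one that couples the inertial-impact and turbulent contributions locally.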
A coordination chemistry approach for modeling trace element adsorption
International Nuclear Information System (INIS)
Bourg, A.C.M.
1986-01-01
The traditional distribution coefficient, Kd, is highly dependent on the water chemistry and the surface properties of the geological system being studied and is therefore quite inappropriate for use in predictive models. Adsorption, one of the many processes included in Kd values, is described here using a coordination chemistry approach. The concept of adsorption of cationic trace elements by solid hydrous oxides can be applied to natural solids. The adsorption process is thus understood in terms of a classical complexation leading to the formation of surface (heterogeneous) ligands. Applications of this concept to some freshwater, estuarine and marine environments are discussed. (author)
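A minimal numerical sketch of this surface-complexation view, assuming a 1:1 surface complex that releases one proton and surface sites in excess of the trace metal. The equilibrium constant and site concentration below are hypothetical illustrative values, not taken from the paper.

```python
# Mass-action view of adsorption:  >S-OH + M(n+)  <->  >S-OM((n-1)+) + H+
# The adsorbed fraction then depends on pH, unlike a lumped Kd.

def fraction_adsorbed(K_int, site_conc, pH):
    """Fraction of trace metal bound to hydrous-oxide surface sites,
    assuming sites in excess; proton activity taken from pH."""
    h = 10.0 ** (-pH)
    Kc = K_int * site_conc / h   # conditional constant at this pH
    return Kc / (1.0 + Kc)

# The characteristic "adsorption edge": uptake rises steeply with pH
low = fraction_adsorbed(1e-4, 1e-3, 4.0)
high = fraction_adsorbed(1e-4, 1e-3, 8.0)
```

The pH dependence produced by this tiny model is exactly why the abstract argues a single Kd is inappropriate for predictive use.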
Stability of Rotor Systems: A Complex Modelling Approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1996-01-01
A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...... with the results of the classical approach using Rayleigh quotients. Several rotor systems are tested: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4x4 randomly generated matrices.
Modelling and simulating retail management practices: a first approach
Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris
2010-01-01
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people-management practices on the shop floor and retail performance. Despite the fact that we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...
Data mining approach to model the diagnostic service management.
Lee, Sun-Mi; Lee, Ae-Kyung; Park, Il-Su
2006-01-01
Korea has a National Health Insurance Program operated by the government-owned National Health Insurance Corporation, and diagnostic services are provided every two years for the insured and their family members. Developing a customer relationship management (CRM) system using data mining technology would be useful for improving the performance of diagnostic service programs. Under these circumstances, this study developed a model for diagnostic service management taking into account the characteristics of subjects using a data mining approach. This study could be further used to develop an automated CRM system, contributing to an increase in the rate of receiving diagnostic services.
On quantum approach to modeling of plasmon photovoltaic effect
DEFF Research Database (Denmark)
Kluczyk, Katarzyna; David, Christin; Jacak, Witold Aleksander
2017-01-01
.g., upon commercial COMSOL software system). Both approaches are essentially classical ones and neglect quantum particularities related to plasmon excitations in metallic components. We demonstrate that these quantum plasmon effects are of crucial importance especially in theoretical simulations of plasmon...... to the semiconductor solar cell mediated by surface plasmons in metallic nanoparticles deposited on the top of the battery. In addition, short-ranged electron-electron interaction in metals is discussed in the framework of the semiclassical hydrodynamic model. The significance of the related quantum corrections...
New Approaches in Reusable Booster System Life Cycle Cost Modeling
Zapata, Edgar
2013-01-01
This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs were a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model
Worldline approach to the Grosse-Wulkenhaar model
Viñas, Sebastián Franchino; Pisani, Pablo
2014-11-01
We apply the worldline formalism to the Grosse-Wulkenhaar model and obtain an expression for the one-loop effective action which provides an efficient way for computing Schwinger functions in this theory. Using this expression we obtain the quantum corrections to the effective background and the β-functions, which are known to vanish at the self-dual point. The case of degenerate noncommutativity is also considered. Our main result can be straightforwardly applied to any polynomial self-interaction of the scalar field and we consider that the worldline approach could be useful for studying effective actions of noncommutative gauge fields as well as in other non-local models or in higher-derivative field theories.
Comparison of different approaches of modelling in a masonry building
Saba, M.; Meloni, D.
2017-12-01
The present work aims to model a simple masonry building through two different modelling methods in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements (FME) method, which is more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational cost. Remarkable differences in the static stresses between the two approaches were found for such a simple structure, and a comparison and analysis of the reasons is proposed.
A Data Mining Approach to Modelling of Water Supply Assets
DEFF Research Database (Denmark)
Babovic, V.; Drecourt, J.; Keijzer, M.
2002-01-01
The economic and social costs associated with pipe bursts and associated leakage problems in modern water supply systems are rapidly rising to unacceptably high levels. Pipe burst risks depend on a number of factors which are extremely difficult to characterise. A part of the problem is that water...... supply assets are mainly situated underground, and therefore not visible and under the influence of various highly unpredictable forces. This paper proposes the use of advanced data mining methods in order to determine the risks of pipe bursts. For example, analysis of the database of already occurred...... with the choice of pipes to be replaced, the outlined approach opens completely new avenues in asset modelling. The condition of an asset such as a water supply network deteriorates with age. With reliable risk models, addressing the evolution of risk with an aging asset, it is now possible to plan optimal......
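A toy sketch of the kind of risk model the abstract describes: burst probability as a function of pipe age, fitted here with a simple logistic model on synthetic records. This is an illustration of the idea, not the paper's data-mining method, and all data below are generated, not real asset records.

```python
# Fit P(burst | age) with a logistic model by stochastic gradient ascent.
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    a, b = 0.0, 0.0                      # intercept, age coefficient
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            a += lr * (y - p)
            b += lr * (y - p) * x
    return a, b

rng = random.Random(0)
ages = [rng.random() for _ in range(400)]        # age as fraction of a century
bursts = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(-2.0 + 4.0 * x))) else 0
          for x in ages]                         # synthetic: old pipes burst more
a_hat, b_hat = fit_logistic(ages, bursts)
```

A positive fitted age coefficient recovers the "risk grows with aging asset" relationship that the abstract says enables optimal replacement planning.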
The Use of Modeling Approach for Teaching Exponential Functions
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion related to the teaching and learning of mathematical content in the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's programme (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational tool to introduce context into the teaching-learning process of exponential functions for these students. To this end, some simple models were elaborated with the GeoGebra software and, to obtain a qualitative evaluation of the investigation and the results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.
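The paper's models were built in GeoGebra; as a plain-code analogue, the kind of contextualized exponential model y = a·bˣ the students work with can be fitted to data by linearizing with logarithms. The routine below is an illustrative sketch, not material from the paper.

```python
# Fit y = a * b**x by least squares on the log-transformed data.
import math

def fit_exponential(xs, ys):
    lys = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(lys) / n
    slope = (sum((x - mx) * (ly - my) for x, ly in zip(xs, lys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    b = math.exp(slope)
    return a, b

# Example: data generated by y = 2 * 3**x is recovered exactly
a, b = fit_exponential([0, 1, 2, 3], [2.0, 6.0, 18.0, 54.0])
```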
Modelling hybrid stars in quark-hadron approaches
Energy Technology Data Exchange (ETDEWEB)
Schramm, S. [FIAS, Frankfurt am Main (Germany); Dexheimer, V. [Kent State University, Department of Physics, Kent, OH (United States); Negreiros, R. [Federal Fluminense University, Gragoata, Niteroi (Brazil)
2016-01-15
The density in the core of neutron stars can reach values of about 5 to 10 times nuclear matter saturation density. It is, therefore, a natural assumption that hadrons may have dissolved into quarks under such conditions, forming a hybrid star. This star will have an outer region of hadronic matter and a core of quark matter or even a mixed state of hadrons and quarks. In order to investigate such phases, we discuss different model approaches that can be used in the study of compact stars as well as being applicable to a wider range of temperatures and densities. One major model ingredient, the role of quark interactions in the stability of massive hybrid stars, is discussed. In this context, possible conflicts with lattice QCD simulations are investigated. (orig.)
Global GPS Ionospheric Modelling Using Spherical Harmonic Expansion Approach
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2010-12-01
Full Text Available In this study, we developed a global ionosphere model based on measurements from a worldwide network of global positioning system (GPS) stations. About 100 international GPS reference stations were used for the development of the ionospheric model, and the spherical harmonic expansion approach was used as the mathematical method. In order to produce the ionospheric total electron content (TEC) on a grid, we defined spatial resolutions of 2.0 degrees and 5.0 degrees in latitude and longitude, respectively. Two-dimensional TEC maps were constructed at one-hour intervals, giving a high temporal resolution compared to the global ionosphere maps produced by several analysis centers. As a result, we could detect the sudden increase of TEC by processing GPS observables on 29 October 2003, when a massive solar flare took place.
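The spherical harmonic expansion step can be sketched by evaluating a very low-degree real expansion of TEC on a lat/lon grid. The coefficients below are hypothetical placeholders (operational global ionosphere maps use much higher degree and order), and the hard-coded Legendre terms stop at degree 2 purely for brevity.

```python
# Evaluate TEC(lat, lon) = sum_nm P_nm(sin(lat)) * (a_nm cos(m*lon) + b_nm sin(m*lon))
import math

def tec(lat_deg, lon_deg, coeffs):
    """coeffs: dict {(n, m): (a_nm, b_nm)}; associated Legendre functions
    hard-coded up to n = 2."""
    t = math.sin(math.radians(lat_deg))
    lon = math.radians(lon_deg)
    P = {(0, 0): 1.0,
         (1, 0): t, (1, 1): math.sqrt(1 - t * t),
         (2, 0): 0.5 * (3 * t * t - 1),
         (2, 1): 3 * t * math.sqrt(1 - t * t),
         (2, 2): 3 * (1 - t * t)}
    return sum(P[(n, m)] * (a * math.cos(m * lon) + b * math.sin(m * lon))
               for (n, m), (a, b) in coeffs.items())

# Grid matching the abstract's resolutions: 2 deg in latitude, 5 deg in longitude
coeffs = {(0, 0): (20.0, 0.0), (1, 0): (-5.0, 0.0), (2, 2): (1.0, 0.5)}
grid = [[tec(lat, lon, coeffs) for lon in range(-180, 181, 5)]
        for lat in range(-88, 89, 2)]
```

In the real model the coefficients are estimated hourly from the GPS observables; here they are fixed to show only the evaluation.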
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Together these ideas produced a very efficient method that has the right theoretical property concerning robustness, namely Bounded Relative Error. Some examples illustrate the results.
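The zero-variance idea can be illustrated on a toy rare-event problem where the optimal change of measure is known in closed form. This is a textbook sketch under that assumption, not the estimator described in the talk: for P(X > t) with X ~ Exp(1), sampling from the conditional law given X > t makes every sample carry the same likelihood ratio, so the estimator's variance is exactly zero.

```python
# Crude Monte Carlo vs the zero-variance importance-sampling estimator
# for the rare probability p = P(X > t), X ~ Exp(1), p = exp(-t).
import math
import random

def crude_mc(t, n, rng):
    """Standard Monte Carlo: fails for large t (almost all samples miss)."""
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > t) / n

def zero_variance_is(t, n, rng):
    """Sample from Exp(1) shifted to start at t (the zero-variance measure);
    each sample's likelihood ratio is exp(-t), a constant."""
    weights = [math.exp(-t) for _ in range(n)]   # constant per-sample weight
    return sum(weights) / n

rng = random.Random(42)
p_hat = zero_variance_is(20.0, 100, rng)          # recovers exp(-20) exactly
```

In realistic models the zero-variance measure is unknown; the talk's contribution is approximating it while keeping the Bounded Relative Error property.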
Simplistic approach for 2D grown-in microdefect modeling
Energy Technology Data Exchange (ETDEWEB)
Prostomolotov, Anatoly; Verezub, Nataliya [Institute for Problems in Mechanics, Russian Academy of Sciences, Moscow (Russian Federation)
2009-08-15
In the present paper, the influence of cooling conditions on microdefect formation in a Si single crystal was analysed on the basis of an analytical formulation for the crystal temperature field, jointly with newly developed two-dimensional (2D) models of microdefect formation. The new mathematical model is applied to calculations of vacancy microdefect formation, in which the 2D vacancy migration process is taken into account and an approximate calculation algorithm is offered that does not require storing the data for the whole defect-growth pre-history. The calculated results are discussed for the growth conditions of Cz silicon single crystals. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
A discrete element modelling approach for block impacts on trees
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly on the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was carried out to identify the input parameters controlling the block kinematics after the impact on a tree. 20 input parameters were considered in the DEM simulation model: 12 parameters related to the tree and 8 parameters to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input
An Approach to Model Based Testing of Multiagent Systems
Directory of Open Access Journals (Sweden)
Shafiq Ur Rehman
2015-01-01
Full Text Available Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
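The "coverage criteria applied to a protocol graph" step can be sketched as follows: for edge coverage, generate one test path per edge by walking from the start node to the edge via breadth-first search. The graph below is hypothetical, not from the paper, and the paper's tool may generate paths differently.

```python
# Generate test paths achieving edge coverage over a protocol graph.
from collections import deque

def edge_coverage_paths(graph, start):
    """graph: dict node -> list of successors. For every edge (u, v)
    reachable from `start`, emit a path start -> ... -> u -> v."""
    def bfs_path(target):
        prev = {start: None}
        q = deque([start])
        while q:
            u = q.popleft()
            if u == target:                 # reconstruct start -> target
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in graph.get(u, []):
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        return None                          # target unreachable
    paths = []
    for u, vs in graph.items():
        for v in vs:
            p = bfs_path(u)
            if p is not None:
                paths.append(p + [v])
    return paths

# Hypothetical protocol graph: percept A, actions B/C, message edges between
protocol = {'A': ['B', 'C'], 'B': ['C'], 'C': []}
paths = edge_coverage_paths(protocol, 'A')
```

Stronger criteria (edge-pair, prime paths) would replace the per-edge loop but keep the same graph representation.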
THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?
Directory of Open Access Journals (Sweden)
Rory James Ridley-Duff
2015-07-01
Full Text Available This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs), and charitable trading activities (CTAs). The ethics that guide each approach are examined to provide a conceptual framework for examining FairShares as a case study. In the second part, findings are scrutinised in terms of the ethical values and principles that are activated when FairShares is applied in practice. The paper contributes to knowledge by giving an example of the way open-source technology (Loomio) has been used to translate 'espoused theories' into 'theories in use' to advance social enterprise development. The review of FairShares using the conceptual framework suggests there is a fourth approach based on multi-stakeholder co-operation to create 'associative democracy' in the workplace.
Different approach to the modeling of nonfree particle diffusion
Buhl, Niels
2018-03-01
A new approach to the modeling of nonfree particle diffusion is presented. The approach uses a general setup based on geometric graphs (networks of curves), which means that particle diffusion in anything from arrays of barriers and pore networks to general geometric domains can be considered and that the (free random walk) central limit theorem can be generalized to cover also the nonfree case. The latter gives rise to a continuum-limit description of the diffusive motion where the effect of partially absorbing barriers is accounted for in a natural and non-Markovian way that, in contrast to the traditional approach, quantifies the absorptivity of a barrier in terms of a dimensionless parameter in the range 0 to 1. The generalized theorem gives two general analytic expressions for the continuum-limit propagator: an infinite sum of Gaussians and an infinite sum of plane waves. These expressions entail the known method-of-images and Laplace eigenfunction expansions as special cases and show how the presence of partially absorbing barriers can lead to phenomena such as line splitting and band gap formation in the plane wave wave-number spectrum.
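The two continuum-limit expressions mentioned above, an infinite sum of Gaussians (method of images) and an infinite sum of plane-wave/eigenfunction terms, can be compared numerically in the simplest special case: free diffusion on an interval with fully reflecting ends. This sketch covers only that classical limit, not the paper's partially absorbing barriers with absorptivity in (0, 1).

```python
# Propagator p(x, t | x0) for diffusion on [0, L] with reflecting boundaries,
# computed two ways that must agree.
import math

def prop_images(x, x0, t, D=1.0, L=1.0, nmax=20):
    """Method of images: sum of Gaussians at mirrored source positions."""
    g = lambda d: math.exp(-d * d / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
    return sum(g(x - x0 - 2 * n * L) + g(x + x0 - 2 * n * L)
               for n in range(-nmax, nmax + 1))

def prop_modes(x, x0, t, D=1.0, L=1.0, kmax=200):
    """Eigenfunction (cosine) expansion of the same propagator."""
    s = 1.0 / L
    for k in range(1, kmax + 1):
        s += (2.0 / L) * math.cos(k * math.pi * x / L) \
             * math.cos(k * math.pi * x0 / L) \
             * math.exp(-D * (k * math.pi / L) ** 2 * t)
    return s
```

The Gaussian sum converges fastest at short times and the mode sum at long times, which is why having both expressions, as the paper provides in generalized form, is useful.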
Engineering approach to model and compute electric power markets settlements
International Nuclear Information System (INIS)
Kumar, J.; Petrov, V.
2006-01-01
Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure the correctness of an ISO market design is to analyze how well market price signals create incentives or penalties for achieving an efficient market that meets the design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on various differences in settlement design approaches to electric power market design, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs
Directory of Open Access Journals (Sweden)
Merler Stefano
2010-06-01
Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age
Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro
2010-06-29
In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are
Thin inclusion approach for modelling of heterogeneous conducting materials
Lavrov, Nikolay; Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel
Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in a cluster mode and in regular or random order. In the newly proposed theoretical model of nanostructured material, particular attention is paid to the small aspect ratio of structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for small aspect ratio and solved numerically. The result demonstrates that the surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary-layer solution to the surface density is significant and cannot be governed by the classical equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is sufficiently simple and robust, and allows extension to either regular or irregular systems of various inclusions. It can be used for the development of systems of conducting inclusions, which are commonly present in nanostructured materials used for solid oxide and polymer electrolyte membrane fuel cells (PEMFCs).
A Modeling Approach for Plastic-Metal Laser Direct Joining
Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca
2017-09-01
Laser processing has been identified as a feasible approach to direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work develops a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The scope of this methodology is to predict process outcomes based on the calculated joint-interface and upper-surface temperatures. Three representative cases are considered for model verification: conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 Anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic-layer optical absorption on process feasibility.
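The numerical heat-flow part can be sketched with a 1D explicit finite-difference model of a metal sheet heated on one face by an absorbed laser flux. This is a greatly reduced illustration of the idea, not the authors' multi-layer optical/thermal model; the material properties and flux below are illustrative steel-like assumptions.

```python
# 1D transient conduction: flux-loaded front face, insulated back face.
def heat_1d(thickness, k, rho, cp, q_abs, t_total, nx=50):
    """Return the final through-thickness temperature profile [K] for a slab
    heated on one face with absorbed flux q_abs [W/m^2]."""
    dx = thickness / (nx - 1)
    alpha = k / (rho * cp)
    dt = 0.4 * dx * dx / alpha            # below explicit stability limit
    T = [293.15] * nx
    for _ in range(int(t_total / dt)):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        # heated face: conduction into the slab plus the absorbed flux
        Tn[0] = T[0] + alpha * dt / dx**2 * 2 * (T[1] - T[0]) \
                + q_abs * dt / (rho * cp * dx)
        Tn[-1] = T[-1] + alpha * dt / dx**2 * 2 * (T[-2] - T[-1])  # insulated
        T = Tn
    return T

# Illustrative numbers: 2 mm steel-like sheet, 1 MW/m^2 absorbed, 0.5 s
profile = heat_1d(2e-3, 15.0, 7800.0, 500.0, 1e6, 0.5)
```

In the paper's setting the temperature at the back of the metal (the joint interface) is the quantity compared against the plastic degradation temperature.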
Right approach to 3D modeling using CAD tools
Baddam, Mounica Reddy
The thesis provides a step-by-step methodology to enable an instructor working with CAD tools to optimally guide students through an understandable 3D modeling approach, one that will not only enhance their knowledge of the tool's usage but also enable them to achieve the desired result in less time. In practice, very little information is available on applying CAD skills in formal beginners' training sessions. Additionally, the advent of new software in the 3D domain makes staying current an increasingly difficult task. Keeping up with the industry's advanced requirements emphasizes the importance of more skilled hands in the field of CAD development, rather than just prioritizing manufacturing in terms of complex software features. The thesis analyses different 3D modeling approaches specific to the varieties of CAD tools currently available on the market. Utilizing performance-time databases, learning curves have been generated to measure performance time, feature count, etc. Based on the results, improvement parameters are also provided (Asperl, 2005).
Integration of measurement data in the comprehensive modelling approach
Sieber, I.; Rübenach, O.
2013-09-01
Efficient and reliable optical design requires knowledge of the production chain, the materials used, and the environmental circumstances in the field of operation. This is realized in the comprehensive modelling approach, consisting of three steps:
• Design for manufacturing, i.e. the model must be adjusted to the process chain. Knowledge of design rules is required.
• Robust design, i.e. optimization of the functional design with the objective of compensating the tolerance influences on the system's performance. Knowledge of the tolerances of the individual process steps is required.
• Reliable design with respect to environmental and operational effects. Coupling of an optical and a mechanical simulation tool is required to form the optical simulation environment.
The availability of process knowledge such as design rules and manufacturing tolerances is ensured by coupling the optical simulation environment with a process knowledge database. Integration of measured surface data in this simulation environment enables realistic simulation and analysis of real, manufactured optics. This approach allows, for example, for the evaluation of replication methods such as precision molding or injection molding against high-precision manufacturing methods such as diamond turning.
Replacement model of city bus: A dynamic programming approach
Arifin, Dadang; Yusuf, Edhi
2017-06-01
This paper aims to develop a replacement model for the city bus vehicles operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model propounds two policy alternatives: first, to maintain or keep the vehicles, and second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was heuristically executed using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life averages nine years. This means that after a bus has been operated for nine years, managers should consider a rejuvenation policy.
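The deterministic dynamic program described above can be sketched by backward induction over a keep-vs-replace decision; the age-dependent revenue, operating cost, and salvage schedules below are hypothetical, not Damri data.

```python
def replacement_policy(horizon, max_age, revenue, op_cost, salvage, price):
    """Backward-induction DP for the keep-vs-replace decision.

    revenue[a], op_cost[a], salvage[a] are indexed by vehicle age a;
    price is the acquisition cost of a new vehicle.
    Returns the value table V[t][a] and policy table ('keep'/'replace')."""
    V = [[0.0] * (max_age + 1) for _ in range(horizon + 1)]
    policy = [[None] * (max_age + 1) for _ in range(horizon + 1)]
    for a in range(max_age + 1):
        V[horizon][a] = salvage[a]          # terminal: sell the vehicle
    for t in range(horizon - 1, -1, -1):
        for a in range(max_age + 1):
            a_next = min(a + 1, max_age)
            keep = revenue[a] - op_cost[a] + V[t + 1][a_next]
            replace = (salvage[a] - price            # trade old for new
                       + revenue[0] - op_cost[0] + V[t + 1][1])
            if keep >= replace:
                V[t][a], policy[t][a] = keep, 'keep'
            else:
                V[t][a], policy[t][a] = replace, 'replace'
    return V, policy
```

With operating costs that grow with age, the computed policy keeps young vehicles and replaces old ones, which is the qualitative result the abstract reports.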
Realistic Mathematics Approach through Numbered Head Together Learning Model
Sugihatno, A. C. M. S.; Budiyono; Slamet, I.
2017-09-01
Recently, teaching conducted on a teacher-centered basis has affected student interaction in class, causing students to become less interested in participating. This is why teachers should be more creative in designing learning with other types of cooperative learning models. Therefore, this research aimed to implement NHT with RMA in the teaching process. We utilize NHT since it is a variant of group discussion whose aim is to give students a chance to share their ideas in response to the teacher's question. By using NHT in class, a teacher can convey a better understanding of the material with the help of the Realistic Mathematics Approach (RMA), which is known for its real-problem context. Meanwhile, the researchers assume that, besides the choice of teaching model, the Adversity Quotient (AQ) of a student also influences achievement. This research used a quasi-experimental design. The sample was 60 junior high school students, taken using the stratified cluster random sampling technique. The results show that NHT-RMA gives better learning achievement in mathematics than the direct teaching model, and that under NHT-RMA, students categorized as high AQ show different learning achievement from students categorized as moderate or low AQ.
A modelling approach to designing microstructures in thermal barrier coatings
International Nuclear Information System (INIS)
Gupta, M.; Nylen, P.; Wigren, J.
2013-01-01
Thermomechanical properties of Thermal Barrier Coatings (TBCs) are strongly influenced by coating defects, such as delaminations and pores, thus making it essential to have a fundamental understanding of microstructure-property relationships in TBCs to produce a desired coating. Object-Oriented Finite element analysis (OOF) has been shown previously as an effective tool for evaluating thermal and mechanical material behaviour, as this method is capable of incorporating the inherent material microstructure as input to the model. In this work, OOF was used to predict the thermal conductivity and effective Young's modulus of TBC topcoats. A Design of Experiments (DoE) was conducted by varying selected parameters for spraying Yttria-Stabilised Zirconia (YSZ) topcoat. The microstructure was assessed with SEM, and image analysis was used to characterize the porosity content. The relationships between microstructural features and properties predicted by modelling are discussed. The microstructural features having the most beneficial effect on properties were sprayed with a different spray gun so as to verify the results obtained from modelling. Characterisation of the coatings included microstructure evaluation, thermal conductivity and lifetime measurements. The modelling approach in combination with experiments undertaken in this study was shown to be an effective way to achieve coatings with optimised thermo-mechanical properties.
A Modeling Approach for Earthquake-Ionosphere Coupling
Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Savastano, G.; Mannucci, A. J.
2017-12-01
We present a newly developed modeling approach for the earthquake-ionosphere coupling process, which extends the capability of Wave Perturbation - Global Ionosphere-Thermosphere Model (WP-GITM) developed originally for tsunami-ionosphere coupling. The new WP-GITM represents an earthquake as a point source at its epicenter, and takes the ground vertical velocity data from seismic measurements as input. The model then solves the neutral density, velocity, and temperature perturbations generated by spherical acoustic-gravity waves and the resulting perturbations in ions and electrons. We apply the model to simulate the near-field ionospheric disturbances during two earthquake events with different local times including the 2011 Tohoku-Oki (local afternoon) and the 2015 Illapel events (local evening). To validate the results, we retrieve receiver-to-satellite total electron content (TEC) perturbations from the simulations and compare them to the corresponding slant TEC perturbations from Global Positioning System (GPS) TEC observations. We find good agreement on magnitudes and arrival times between the simulations and observations and discuss directions of future research.
An interdisciplinary approach to modeling tritium transfer into the environment
International Nuclear Information System (INIS)
Galeriu, D; Melintescu, A.
2005-01-01
More robust radiological assessment models are required to support the safety case for the nuclear industry. Heavy water reactors, fuel processing plants, radiopharmaceutical factories, and the future fusion reactor all have large tritium loads. While of low probability, large accidental tritium releases cannot be ignored. For Romania, which uses CANDU 600 reactors for nuclear energy, tritium is the national radionuclide. Tritium enters directly into the life cycle in many physicochemical forms. Tritiated water (HTO) leaks from most nuclear installations but is partially converted into organically bound tritium (OBT) through plant and animal metabolic processes. Hydrogen and carbon are elemental components of major nutrients and animal tissues, and their radioisotopes must be modeled differently from those of most other radionuclides. Tritium transfer from atmosphere to plant, and its conversion into organically bound tritium, strongly depend on plant characteristics, season, and weather conditions. In order to cope with this large variability and avoid expensive calibration experiments, we developed a model using knowledge of plant physiology, agrometeorology, soil science, hydrology, and climatology. The transfer of tritiated water to plants was modeled with a resistance approach, including sparse canopy. The canopy resistance was modeled using the Jarvis-Calvet approach, modified to make direct use of the canopy photosynthesis rate. The crop growth model WOFOST was used for the photosynthesis rate, both for canopy resistance and for the formation of organically bound tritium. Using this formalism, the tritium transfer parameters were directly linked to processes and parameters known from the agricultural sciences. Model predictions for tritium in wheat were within about a factor of two of experimental data, without any calibration. The model was also tested on rice and soybean and can be applied to various plants and environmental conditions. For sparse canopy, the model used coupled
Predicting future glacial lakes in Austria using different modelling approaches
Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus
2017-04-01
Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production, and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number, and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions, and is critical for appraising emerging threats and potentials for society. The recent development of regional ice thickness models, combined with high-resolution glacier surface data, allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier beds reveals overdeepenings that represent potential locations of future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories of 1969, 1998, and 2006. Results of this historical glacier extent modelling are compared to existing glacial lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
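The bed-topography step described above (bed = glacier surface minus ice thickness, then a search for overdeepenings) can be sketched with a naive depression-filling routine; the grid below is synthetic and the routine is far simpler than production pit-filling algorithms.

```python
import numpy as np

def potential_lake_cells(surface, thickness):
    """Bed topography = glacier surface minus ice thickness; flag cells that
    lie in a closed depression (overdeepening) of the modelled bed.

    Naive fill: interior cells start 'flooded', then each is repeatedly
    lowered to the lowest neighbour it could spill through, never below the
    bed, until nothing changes (Gauss-Seidel sweep)."""
    bed = surface - thickness
    filled = bed.copy()
    filled[1:-1, 1:-1] = bed.max()
    changed = True
    while changed:
        changed = False
        for i in range(1, bed.shape[0] - 1):
            for j in range(1, bed.shape[1] - 1):
                spill = min(filled[i - 1, j], filled[i + 1, j],
                            filled[i, j - 1], filled[i, j + 1])
                new = max(bed[i, j], spill)
                if new < filled[i, j]:
                    filled[i, j] = new
                    changed = True
    return filled > bed   # True where water would pond in the deglaciated bed
```

On a synthetic dome of ice with extra thickness at its centre, only the resulting bed hollow is flagged; the surrounding bed drains to the boundary.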
Hauduc, H; Rieger, L; Takács, I; Héduit, A; Vanrolleghem, P A; Gillot, S
2010-01-01
The quality of simulation results can be significantly affected by errors in the published model (typing, inconsistencies, gaps or conceptual errors) and/or in the underlying numerical model description. Seven of the most commonly used activated sludge models have been investigated to point out the typing errors, inconsistencies and gaps in the model publications: ASM1; ASM2d; ASM3; ASM3 + Bio-P; ASM2d + TUD; New General; UCTPHO+. A systematic approach to verify models by tracking typing errors and inconsistencies in model development and software implementation is proposed. Then, stoichiometry and kinetic rate expressions are checked for each model and the errors found are reported in detail. An attached spreadsheet (see http://www.iwaponline.com/wst/06104/0898.pdf) provides corrected matrices with the calculations of all stoichiometric coefficients for the discussed biokinetic models and gives an example of proper continuity checks.
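The continuity check the authors advocate, that each row of the Petersen (stoichiometric) matrix must conserve COD, N, P, and charge, can be sketched as a matrix product. The heterotrophic-growth row and COD composition vector below are a textbook illustration, not taken from any of the seven models discussed.

```python
import numpy as np

def continuity_check(stoich, composition, tol=1e-8):
    """Continuity check for a biokinetic (Petersen) matrix.

    stoich: (processes x components) stoichiometric coefficients.
    composition: (components x conservatives) content of each component in
    each conserved quantity (COD, N, P, charge, ...).
    Every entry of stoich @ composition must vanish for a consistent model."""
    residual = stoich @ composition
    return residual, bool(np.all(np.abs(residual) < tol))
```

For example, heterotrophic growth with yield Y on substrate (COD content 1), producing biomass (COD 1) and consuming oxygen (COD content -1), conserves COD because -1/Y + 1 + (1-Y)/Y = 0.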
Modelling Approach to Assess Future Agricultural Water Demand
Spano, D.; Mancosu, N.; Orang, M.; Sarreshteh, S.; Snyder, R. L.
2013-12-01
The combination of long-term climate change (e.g., warmer average temperatures) and extreme events (e.g., droughts) can have decisive impacts on water demand, with further implications for ecosystems. In countries already affected by water scarcity, water management problems are becoming increasingly serious, and sustainable management of available water resources at the global, regional, and site-specific level is necessary. In agriculture, the first step is to compute how much water crops need under given climate conditions. A modelling approach is one way to compute crop water requirement (CWR). In this study, the improved version of the SIMETAW model was used. The model is a user-friendly soil water balance model developed by the University of California, Davis, the California Department of Water Resources, and the University of Sassari. The SIMETAW# model assesses CWR and generates hypothetical irrigation schedules for a wide range of irrigated crops experiencing full, deficit, or no irrigation. The model computes the evapotranspiration of applied water (ETaw), the net amount of irrigation water needed to match losses due to crop evapotranspiration (ETc). ETaw is determined by first computing reference evapotranspiration (ETo) using the daily standardized Reference Evapotranspiration equation. ETaw is computed as ETaw = CETc - CEr, where CETc and CEr are the cumulative total crop ET and cumulative effective rainfall, respectively. Crop evapotranspiration is estimated as ETc = ETo x Kc, where Kc is the corrected midseason tabular crop coefficient, adjusted for climate conditions. The net irrigation amounts are determined from a daily soil water balance, using an integrated approach that considers soil and crop management information and the daily ETc estimates. Using input information on irrigation system distribution uniformity and runoff, when appropriate, the model estimates the applied water to the low quarter of the
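The ETaw arithmetic quoted in the abstract (ETaw = CETc - CEr with daily ETc = ETo x Kc) can be sketched directly; the daily values below are invented for illustration and are not SIMETAW# output.

```python
def seasonal_etaw(eto, kc, effective_rain):
    """ETaw = CETc - CEr: cumulative crop ET minus cumulative effective
    rainfall, i.e. the net irrigation requirement for the period (mm).

    eto: daily reference evapotranspiration (mm);
    kc: daily crop coefficients;
    effective_rain: daily effective rainfall (mm)."""
    cetc = sum(e * k for e, k in zip(eto, kc))   # daily ETc = ETo x Kc, summed
    cer = sum(effective_rain)
    return max(cetc - cer, 0.0)                  # no negative irrigation need
```

Over four hypothetical days with ETo = 5 mm, Kc = 0.8, and 5 mm of effective rain in total, the net requirement is 16 - 5 = 11 mm.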
Cellular communication and “non-targeted effects”: Modelling approaches
Ballarini, Francesca; Facoetti, Angelica; Mariotti, Luca; Nano, Rosanna; Ottolenghi, Andrea
2009-10-01
During the last decade, a large number of experimental studies on the so-called "non-targeted effects", in particular bystander effects, showed that cellular communication plays a significant role in the pathways leading to radiobiological damage. Although it is known that two main types of cellular communication (i.e. via gap junctions and/or molecular messengers diffusing in the extra-cellular environment, such as cytokines, NO, etc.) play a major role, it is of utmost importance to better understand the underlying mechanisms and how they can be modulated by ionizing radiation. Though the "final" goal is of course to elucidate the in vivo scenario, in the meantime in vitro studies can also provide useful insights. In the present paper we discuss key issues on the mechanisms underlying non-targeted effects and cell communication, for which theoretical models and simulation codes can be of great help. In this framework, we present in detail three literature models, as well as an approach under development at the University of Pavia. More specifically, we first focus on a version of the "State-Vector Model" including bystander-induced apoptosis of initiated cells, which was successfully fitted to in vitro data on neoplastic transformation, supporting the hypothesis of a protective bystander effect mediated by apoptosis. The second model, focusing on the kinetics of bystander effects in 3D tissues, was successfully fitted to data on bystander damage in an artificial 3D skin system, indicating a signal range of the order of 0.7-1 mm. A third bystander-effect model, taking into account spatial location, cell killing and repopulation, showed dose-response curves increasing approximately linearly at low dose rates but quickly flattening out at higher dose rates, and also predicted an effect augmentation following dose fractionation. Concerning the Pavia approach, which can model the release, diffusion and depletion/degradation of
Probabilistic model-based approach for heart beat detection.
Chen, Hugh; Erol, Yusuf; Shen, Eric; Russell, Stuart
2016-09-01
Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported an 88.8% rate of false alarms, with other studies reporting numbers of similar magnitudes. These false alarm rates lead to deleterious effects that manifest in a lower standard of care across clinics. This paper discusses a model-based probabilistic inference approach to estimate physiological variables at a detection level. We design a generative model that complies with a layman's understanding of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain. In order to evaluate our algorithm we look at the application of heart beat detection using four datasets provided by PhysioNet, a research resource for complex physiological signals, in the form of the PhysioNet 2014 Challenge set-p1 and set-p2, the MIT-BIH Polysomnographic Database, and the MGH/MF Waveform Database. On these data sets our algorithm performs on par with the other top six submissions to the PhysioNet 2014 challenge. The overall evaluation scores in terms of sensitivity and positive predictivity values obtained were as follows: set-p1 (99.72%), set-p2 (93.51%), MIT-BIH (99.66%), and MGH/MF (95.53%). These scores are based on the averaging of gross sensitivity, gross positive predictivity, average sensitivity, and average positive predictivity.
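The evaluation scores quoted above combine gross and per-record-averaged sensitivity and positive predictivity, in the style of the PhysioNet 2014 challenge. A sketch of that bookkeeping is below; the per-record true/false-positive counts are made up for illustration.

```python
def challenge_score(records):
    """Mean of gross and per-record-averaged sensitivity (Se) and positive
    predictivity (PPV), as a percentage.

    records: list of (tp, fp, fn) tuples, one per recording, where tp/fp/fn
    are true-positive, false-positive, and false-negative beat counts."""
    tp = sum(r[0] for r in records)
    fp = sum(r[1] for r in records)
    fn = sum(r[2] for r in records)
    gross_se = tp / (tp + fn)                  # pooled over all beats
    gross_ppv = tp / (tp + fp)
    avg_se = sum(r[0] / (r[0] + r[2]) for r in records) / len(records)
    avg_ppv = sum(r[0] / (r[0] + r[1]) for r in records) / len(records)
    return 100.0 * (gross_se + gross_ppv + avg_se + avg_ppv) / 4.0
```

Gross scores weight long recordings more heavily, while per-record averages treat every recording equally; reporting the mean of both balances the two views.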
Ciottoli, Pietro P.
2017-08-14
A set of simplified chemical kinetics mechanisms for hybrid rocket applications using gaseous oxygen (GOX) and hydroxyl-terminated polybutadiene (HTPB) is proposed. The starting point is a 561-species, 2538-reactions, detailed chemical kinetics mechanism for hydrocarbon combustion. This mechanism is used for predictions of the oxidation of butadiene, the primary HTPB pyrolysis product. A Computational Singular Perturbation (CSP) based simplification strategy for non-premixed combustion is proposed. The simplification algorithm is fed with the steady-solutions of classical flamelet equations, these being representative of the non-premixed nature of the combustion processes characterizing a hybrid rocket combustion chamber. The adopted flamelet steady-state solutions are obtained employing pure butadiene and gaseous oxygen as fuel and oxidizer boundary conditions, respectively, for a range of imposed values of strain rate and background pressure. Three simplified chemical mechanisms, each comprising less than 20 species, are obtained for three different pressure values, 3, 17, and 36 bar, selected in accordance with an experimental test campaign of lab-scale hybrid rocket static firings. Finally, a comprehensive strategy is shown to provide simplified mechanisms capable of reproducing the main flame features in the whole pressure range considered.
Directory of Open Access Journals (Sweden)
Caroline Ghyoot
2017-07-01
Full Text Available Mixotrophy, i.e., the ability to combine phototrophy and phagotrophy in one organism, is now recognized to be widespread among photic-zone protists and to potentially modify the structure and functioning of planktonic ecosystems. However, few biogeochemical/ecological models explicitly include this mode of nutrition, owing to the large diversity of observed mixotrophic types, the few data allowing the parameterization of physiological processes, and the need to make the addition of mixotrophy into existing ecosystem models as simple as possible. We here propose and discuss a flexible model that depicts the main observed behaviors of mixotrophy in microplankton. A first model version describes constitutive mixotrophy (the organism photosynthesizes by use of its own chloroplasts). This model version offers two possible configurations, allowing the description of constitutive mixotrophs (CMs) that favor either phototrophy or heterotrophy. A second version describes non-constitutive mixotrophy (the organism performs phototrophy by use of chloroplasts acquired from its prey). The model variants were described so as to be consistent with a plankton conceptualization in which the biomass is divided into separate components on the basis of their biochemical function (Shuter approach; Shuter, 1979). The two model variants of mixotrophy can easily be implemented in ecological models that adopt the Shuter approach, such as the MIRO model (Lancelot et al., 2005), and address the challenges associated with modeling mixotrophy.
Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach
Directory of Open Access Journals (Sweden)
Taha Zaghdoudi
2016-08-01
Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants and to construct early warning models to prevent their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed an early warning model of banking crises based on a Bayesian model averaging approach. The results of this approach allow us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration, and higher real interest rates in triggering banking crises.
Modeling healthcare authorization and claim submissions using the openEHR dual-model approach
Directory of Open Access Journals (Sweden)
Freire Sergio M
2011-10-01
Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes.
BioModels: expanding horizons to include more modelling approaches and formats.
Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning
2018-01-04
BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Export of microplastics from land to sea. A modelling approach.
Siegfried, Max; Koelmans, Albert A; Besseling, Ellen; Kroeze, Carolien
2017-12-15
Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea. The model accounts for different types and sources of microplastics entering river systems via point sources. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Sources of microplastics include personal care products, laundry, household dust and tyre and road wear particles (TRWP). Most of the modelled microplastics exported by rivers to seas are synthetic polymers from TRWP (42%) and plastic-based textiles abraded during laundry (29%). Smaller sources are synthetic polymers and plastic fibres in household dust (19%) and microbeads in personal care products (10%). Microplastic export differs largely among European rivers, as a result of differences in socio-economic development and technological status of sewage treatment facilities. About two-thirds of the microplastics modelled in this study flow into the Mediterranean and Black Sea. This can be explained by the relatively low microplastic removal efficiency of sewage treatment plants in the river basins draining into these two seas. Sewage treatment is generally more efficient in river basins draining into the North Sea, the Baltic Sea and the Atlantic Ocean. We use our model to explore future trends up to the year 2050. Our scenarios indicate that in the future river export of microplastics may increase in some river basins, but decrease in others. Remarkably, for many basins we calculate a reduction in river export of microplastics from point-sources, mainly due to an anticipated improvement in sewage treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
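The source-to-sea bookkeeping described above can be sketched as a simple flux calculation: sum the point sources, discount the sewered fraction by treatment-plant removal, then apply river retention. The source magnitudes and efficiencies below are illustrative, not the paper's European estimates.

```python
def river_microplastic_export(sources, sewered_frac, stp_removal,
                              river_retention):
    """Point-source microplastic flux from a river basin to the sea (t/yr).

    sources: dict of emission per source type (t/yr) entering wastewater.
    sewered_frac: fraction of wastewater reaching a treatment plant (0-1).
    stp_removal: microplastic removal efficiency of the plants (0-1).
    river_retention: fraction retained during river transport (0-1)."""
    total = sum(sources.values())
    # treated share is reduced by removal; untreated share passes through
    after_stp = total * (sewered_frac * (1 - stp_removal)
                         + (1 - sewered_frac))
    return after_stp * (1 - river_retention)
```

Raising `stp_removal` sharply lowers the export, which mirrors the paper's finding that anticipated sewage-treatment improvements reduce future river export in many basins.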
Hybrid empirical--theoretical approach to modeling uranium adsorption
Energy Technology Data Exchange (ETDEWEB)
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W
2004-05-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K{sub f} parameter is correlated to sediment surface area (r{sup 2}=0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
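Fitting the Freundlich isotherm S = Kf * C^n reported above amounts to a linear regression in log-log space; the sketch below uses synthetic data, not the SDA interbed measurements.

```python
import numpy as np

def freundlich_fit(c, s):
    """Fit the Freundlich isotherm S = Kf * C**n by linear regression in
    log-log space: log S = log Kf + n log C.

    c: equilibrium solution concentrations; s: sorbed concentrations.
    Returns (Kf, n)."""
    n, log_kf = np.polyfit(np.log(c), np.log(s), 1)  # slope, intercept
    return np.exp(log_kf), n
```

In the study's scheme, n is shared across samples and only Kf varies (with surface area), so a spatially variable transport model needs just one fitted exponent plus a surface-area map.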
Mobile phone use while driving: a hybrid modeling approach.
Márquez, Luis; Cantillo, Víctor; Arellana, Julián
2015-05-01
The analysis of the effects that mobile phone use produces while driving is a topic of great interest for the scientific community. There is consensus that using a mobile phone while driving increases the risk of exposure to traffic accidents. The purpose of this research is to evaluate the drivers' behavior when they decide whether or not to use a mobile phone while driving. For that, a hybrid modeling approach that integrates a choice model with the latent variable "risk perception" was used. It was found that workers and individuals with the highest education level are more prone to use a mobile phone while driving than others. Also, "risk perception" is higher among individuals who have been previously fined and people who have been in an accident or almost been in an accident. It was also found that the tendency to use mobile phones while driving increases when the traffic speed reduces, but it decreases when the fine increases. Even though the urgency of the phone call is the most important explanatory variable in the choice model, the cost of the fine is an important attribute in order to control mobile phone use while driving. Copyright © 2015 Elsevier Ltd. All rights reserved.
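The hybrid structure described above can be sketched as a binary logit whose utility includes a latent "risk perception" variable built from the driver's history. All coefficients below are invented for illustration (the signs follow the abstract's findings, not the paper's estimates).

```python
import math

def risk_perception(prev_fine, prev_accident):
    # Latent variable: higher for drivers previously fined or involved in
    # (or nearly involved in) an accident. Coefficients are illustrative.
    return 0.5 + 0.8 * prev_fine + 0.6 * prev_accident

def p_use_phone(urgency, fine_cost, speed, prev_fine, prev_accident):
    # Utility of answering: urgency raises it; the fine, traffic speed and
    # perceived risk lower it, matching the reported directions of effect.
    lv = risk_perception(prev_fine, prev_accident)
    u = 1.2 * urgency - 0.03 * fine_cost - 0.02 * speed - 0.9 * lv
    return 1.0 / (1.0 + math.exp(-u))

# Lower traffic speed -> higher use probability, as the abstract reports
p_slow = p_use_phone(1, 30, 20, 0, 0)
p_fast = p_use_phone(1, 30, 80, 0, 0)
print(p_slow > p_fast)
```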
Implementation of a Novel Educational Modeling Approach for Cloud Computing
Directory of Open Access Journals (Sweden)
Sara Ouahabi
2014-12-01
Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers’ needs. Owing to these advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. In education, the Cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. Diffusion of pedagogical content on the Cloud by different e-learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the cloud. The new approach focuses on keeping Educational Cloud content interoperable for teachers and learners, and facilitates identifying, reusing, sharing, and adapting teaching and learning resources in the Cloud.
Hybrid empirical--theoretical approach to modeling uranium adsorption
International Nuclear Information System (INIS)
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.
2004-01-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
A New Approach to Model Verification, Falsification and Selection
Directory of Open Access Journals (Sweden)
Andrew J. Buck
2015-06-01
Full Text Available This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. The lower a hypothesized structural sign pattern’s entropy, the more subject it is to Type I error and the less subject to Type II error. Three cases illustrate the approach taken here.
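The consistency check described above can be illustrated numerically: for a structure B·y = G·x with hypothesized sign patterns, sample magnitudes consistent with those signs and test whether the estimated reduced-form sign pattern sign(B⁻¹G) is ever attainable. This Monte Carlo sketch is an assumption of mine for illustration, not the paper's algorithm, which works analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_pattern_consistent(sgn_B, sgn_G, sgn_Pi_hat, draws=5000):
    """Can the structure B*y = G*x, with the hypothesized sign patterns,
    generate a reduced form Pi = inv(B) @ G whose signs match the estimate?
    Magnitudes are sampled at random; one match suffices (a falsification
    check by enumeration, purely illustrative)."""
    sgn_B, sgn_G = np.asarray(sgn_B, float), np.asarray(sgn_G, float)
    for _ in range(draws):
        B = sgn_B * rng.uniform(0.1, 2.0, sgn_B.shape)
        G = sgn_G * rng.uniform(0.1, 2.0, sgn_G.shape)
        if abs(np.linalg.det(B)) < 1e-8:
            continue
        if np.array_equal(np.sign(np.linalg.inv(B) @ G), sgn_Pi_hat):
            return True   # the hypothesis is consistent with the estimate
    return False          # no sampled magnitudes reproduce the signs

# A 2x2 example: these structural signs admit an all-positive reduced form
ok = sign_pattern_consistent([[1, -1], [-1, 1]], [[1, 0], [0, 1]],
                             [[1, 1], [1, 1]])
print(ok)
```

A `False` return for every admissible reduced-form sign pattern would falsify the hypothesized structure, mirroring the paper's point that sign restrictions alone can be decisive.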
MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH
Energy Technology Data Exchange (ETDEWEB)
Bard, D. [KIPAC, SLAC National Accelerator Laboratory, 2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Kratochvil, J. M. [Astrophysics and Cosmology Research Unit, University of KwaZulu-Natal, Westville, Durban 4000 (South Africa); Dawson, W., E-mail: djbard@slac.stanford.edu [Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States)
2016-03-10
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
A Dynamic Approach to Modeling Dependence Between Human Failure Events
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Laboratory
2015-09-01
In practice, most HRA methods use direct dependence from THERP—the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
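The lag and latency qualities of PSFs described above can be sketched as a time-dependent multiplier on a nominal error probability: no effect before an onset time (lag), then an exponential decay back to neutral (latency). The functional shapes and numbers below are my illustrative assumptions, not the paper's framework.

```python
import math

def psf_effect(t, onset, peak_mult, decay_tau):
    """Multiplier of one PSF at time t: neutral (1.0) before onset (lag),
    then exponential decay from peak_mult back toward 1.0 (latency)."""
    if t < onset:
        return 1.0
    return 1.0 + (peak_mult - 1.0) * math.exp(-(t - onset) / decay_tau)

def hep(t, nominal, psfs):
    """Human error probability at time t from shared PSFs (capped at 1)."""
    p = nominal
    for onset, peak, tau in psfs:
        p *= psf_effect(t, onset, peak, tau)
    return min(p, 1.0)

# A stress PSF activated at t=10 raises the HEP of *every* subsequent HFE,
# producing dependence without one error directly causing the next.
psfs = [(10.0, 5.0, 30.0)]
before = hep(5.0, 0.01, psfs)    # nominal: PSF not yet active
early = hep(12.0, 0.01, psfs)    # strong effect shortly after activation
late = hep(60.0, 0.01, psfs)     # effect has partly decayed
print(before, early > late > before)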
Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach
Directory of Open Access Journals (Sweden)
Rankovic N.
2013-09-01
Full Text Available Lean NOx Trap (LNT) catalysts, a promising solution for reducing the noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of an LNT to form thermodynamically favored sulfate species, consequently leading to the blockage of NOx sorption sites and altering the catalyst operation. Molecular and kinetic modeling represents a valuable tool for predicting system behavior and evaluating catalytic performance. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach was SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps, kinks, and bulk) for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.
Parameter Estimation of Structural Equation Modeling Using Bayesian Approach
Directory of Open Access Journals (Sweden)
Dewi Kurnia Sari
2016-05-01
Full Text Available Leadership is a process of influencing, directing or setting an example for employees in order to achieve the objectives of the organization, and is a key element in the effectiveness of the organization. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, the commitment made by each individual for the betterment of the organization. The purpose of this research is to obtain a model of the effect of leadership style and organizational commitment on job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted on 15 employees of Statistics FNI in Malang. The results showed that in the measurement model all indicators significantly measure their respective latent variables. In the structural model, Leadership Style and Organizational Commitment have significant direct effects on Job Satisfaction, and Job Satisfaction has a significant effect on Employee Performance. The direct effects of Leadership Style and Organizational Commitment on Employee Performance were found to be insignificant.
Energy Technology Data Exchange (ETDEWEB)
Ibanez-Llano, Cristina, E-mail: cristina.ibanez@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain); Rauzy, Antoine, E-mail: Antoine.RAUZY@3ds.co [Dassault Systemes, 10 rue Marcel Dassault CS 40501, 78946 Velizy Villacoublay, Cedex (France); Melendez, Enrique, E-mail: ema@csn.e [Consejo de Seguridad Nuclear (CSN), C/Justo Dorado 11, 28040 Madrid (Spain); Nieto, Francisco, E-mail: nieto@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain)
2010-12-15
Over the last two decades binary decision diagrams (BDD) have been applied successfully to improve Boolean reliability models. In contrast to the classical approach based on the computation of minimal cutsets (MCS), the BDD approach involves no approximation in the quantification of the model and is able to handle negative logic correctly. However, when models are sufficiently large and complex, as for example the ones coming from the PSA studies of the nuclear industry, it becomes infeasible to compute the BDD within a reasonable amount of time and computer memory. Therefore, simplification or reduction of the full model has to be considered in some way to adapt the application of the BDD technology to the assessment of such models in practice. This paper proposes a reduction process that uses information provided by the set of the most relevant minimal cutsets of the model in order to perform the reduction directly on it. This allows controlling the degree of reduction and therefore the impact of such simplification on the final quantification results. The reduction is integrated in an incremental procedure that is compatible with the dynamic generation of the event trees and is therefore adaptable to the recent dynamic developments and extensions of the PSA studies. The proposed method has been applied to a real case study, and the results obtained confirm that the reduction enables the BDD computation while maintaining accuracy.
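The controlled reduction idea described above can be illustrated by ranking minimal cutsets by probability and keeping only the most relevant ones up to a coverage threshold. This is a toy sketch of the "most relevant MCS" selection step under the rare-event approximation, not the paper's exact incremental procedure.

```python
def reduce_by_mcs(cutsets, basic_event_p, coverage=0.99):
    """Keep the most probable minimal cutsets until their summed probability
    (rare-event approximation) reaches `coverage` of the total."""
    def p(cs):
        prod = 1.0
        for e in cs:
            prod *= basic_event_p[e]
        return prod
    ranked = sorted(cutsets, key=p, reverse=True)
    total = sum(p(cs) for cs in ranked)
    kept, acc = [], 0.0
    for cs in ranked:
        kept.append(cs)
        acc += p(cs)
        if acc >= coverage * total:
            break
    return kept

# Hypothetical basic-event probabilities and minimal cutsets
probs = {"A": 1e-2, "B": 1e-3, "C": 1e-4, "D": 1e-5}
mcs = [{"A"}, {"B", "C"}, {"A", "D"}, {"C", "D"}]
kept = reduce_by_mcs(mcs, probs)
print(kept)  # the single cutset {A} already covers 99% of the risk
```

Raising `coverage` trades BDD size for quantification accuracy, which is the knob the paper's reduction process exposes.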
Validation of an employee satisfaction model: A structural equation model approach
Ophillia Ledimo; Nico Martins
2015-01-01
The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample of (n=759) permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...
Using graph approach for managing connectivity in integrative landscape modelling
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variability and connectivity of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows greater genericity in landscape representation, supports the management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library, which is an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open
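The two connection types described above (parent-child for hierarchy, up/downstream for fluxes) can be sketched with a small node class. This mimics the structure of the representation, not the OpenFLUID API; all names are illustrative.

```python
class SpatialUnit:
    """A landscape element (field, ditch, bank) as a node of the graph."""
    def __init__(self, name):
        self.name = name
        self.downstream = []   # up/downstream connections (flux exchanges)
        self.children = []     # parent-child connections (hierarchy)

    def connect_downstream(self, other):
        self.downstream.append(other)

    def add_child(self, other):
        self.children.append(other)

def flow_path(unit):
    """Follow up/downstream connections from a unit (first branch only)."""
    path = [unit.name]
    while unit.downstream:
        unit = unit.downstream[0]
        path.append(unit.name)
    return path

# A toy catchment: a field drains to a ditch, which drains to the outlet reach
field, ditch, reach = SpatialUnit("field"), SpatialUnit("ditch"), SpatialUnit("reach")
field.connect_downstream(ditch)
ditch.connect_downstream(reach)
print(flow_path(field))  # ['field', 'ditch', 'reach']
```

Because connections are ordinary graph edges, rewiring them mid-simulation (as the abstract notes OpenFLUID allows) is just list manipulation on the nodes.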
Systems approaches to computational modeling of the oral microbiome
Directory of Open Access Journals (Sweden)
Dimiter V. Dimitrov
2013-07-01
Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet – oral microbiome – host mucosal transcriptome interactions. In particular we focus on systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.
Exploring regional economic convergence in Romania. A spatial modeling approach
Directory of Open Access Journals (Sweden)
Zizi GOSCHIN
2017-12-01
Full Text Available This paper explores spatial economic convergence in Romania, from the perspective of real GDP/capita, and examines how the shock of the recent economic crisis has affected the convergence process. Given the presence of spatial autocorrelation in the values of GDP per capita, we address the question of convergence in terms of both classic and spatial regression models, thus filling a gap in the Romanian literature on this topic. The empirical results seem to provide support for both absolute and relative beta divergence in GDP/capita, as well as sigma divergence among Romanian counties in the long run. This is the consequence of the two-speed regional development, with the capital region and some large cities thriving by attracting human capital and FDIs, while the lagging regions are systematically left behind. Failing to validate the neoclassical approach to convergence, our results rather support the new divergence theory based on polarization and centre-periphery inequality.
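The absolute beta-convergence test mentioned above reduces, in its classic form, to regressing average growth on log initial GDP per capita: a negative slope indicates convergence, a positive slope divergence. The sketch below uses invented county data and ignores the spatial autocorrelation the paper explicitly models.

```python
import numpy as np

def beta_coefficient(y0, yT, T):
    """OLS slope of average annual log growth on log initial GDP/capita.
    Negative slope -> absolute beta convergence; positive -> divergence."""
    growth = (np.log(yT) - np.log(y0)) / T
    slope, _ = np.polyfit(np.log(y0), growth, 1)
    return slope

# Toy data where richer counties grow faster (divergence, beta > 0)
y0 = np.array([2000.0, 4000.0, 8000.0, 16000.0])   # initial GDP/capita
yT = y0 * np.array([1.10, 1.25, 1.45, 1.70])       # after 10 years
beta = beta_coefficient(y0, yT, 10)
print(beta > 0)  # True: the toy data diverge, as the paper finds for Romania
```

The spatial variants in the paper add a spatially lagged term or spatially correlated errors to this same regression.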
Benchmarking of computer codes and approaches for modeling exposure scenarios
International Nuclear Information System (INIS)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
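At the most fundamental level mentioned above (the spreadsheet comparison), a pathway dose estimate is a product of a unit concentration, an annual intake rate, and a dose conversion factor. The numbers below are illustrative placeholders, not values from the report or from the codes compared.

```python
def annual_dose(conc, intake, dcf):
    """Annual dose (Sv/yr) = concentration * annual intake * dose
    conversion factor; the spreadsheet-level ingestion calculation
    reduces to products of this form."""
    return conc * intake * dcf

# Ingestion pathway for a unit concentration (1 Bq/L in drinking water)
dose = annual_dose(conc=1.0,        # Bq/L, unit concentration
                   intake=730.0,    # L/yr of drinking water (2 L/day)
                   dcf=2.8e-7)      # Sv/Bq, illustrative ingestion factor
print(f"{dose * 1000:.3f} mSv/yr")
```

Comparing codes then amounts to tracing where each one's inputs (intake rates, conversion factors, exposure durations) diverge from this baseline product.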
Informing Public Perceptions About Climate Change: A 'Mental Models' Approach.
Wong-Parodi, Gabrielle; Bruine de Bruin, Wändi
2017-10-01
As the specter of climate change looms on the horizon, people will face complex decisions about whether to support climate change policies and how to cope with climate change impacts on their lives. Without some grasp of the relevant science, they may find it hard to make informed decisions. Climate experts therefore face the ethical need to effectively communicate to non-expert audiences. Unfortunately, climate experts may inadvertently violate the maxims of effective communication, which require sharing communications that are truthful, brief, relevant, clear, and tested for effectiveness. Here, we discuss the 'mental models' approach towards developing communications, which aims to help experts to meet the maxims of effective communications, and to better inform the judgments and decisions of non-expert audiences.
Agents, Bayes, and Climatic Risks - a modular modelling approach
Haas, A.; Jaeger, C.
2005-08-01
When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.
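The second-order probability mechanism described above can be sketched as a discrete distribution over candidate risk levels, updated by Bayes' rule. Two agents seeing the same evidence but starting from different priors keep different beliefs, which is what drives heterogeneous behaviour in the model family. All numbers are illustrative assumptions.

```python
def bayes_update(prior, likelihood):
    """Update a discrete second-order distribution over candidate risk
    levels, given the likelihood of the observation under each candidate."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# Candidate loss probabilities; observing one loss has likelihood equal
# to the candidate's own probability.
candidates = [0.01, 0.05, 0.20]
evidence = candidates

# Same evidence, different priors -> different posteriors (and decisions)
cautious = bayes_update([0.2, 0.3, 0.5], evidence)
optimist = bayes_update([0.7, 0.2, 0.1], evidence)
print(cautious[2] > optimist[2])  # True: the cautious agent stays warier
```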
Agents, Bayes, and Climatic Risks - a modular modelling approach
Directory of Open Access Journals (Sweden)
A. Haas
2005-01-01
Full Text Available When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.
A path integral approach to the Hodgkin-Huxley model
Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando
2017-11-01
To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the voltage-dependent sodium and potassium channels that open or close their gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach to computing quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but by considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
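The stochastic HH system described above can be simulated directly: the standard HH equations integrated with Euler steps, driven by an Ornstein-Uhlenbeck (Gaussian colored) noise current. This is a plain simulation sketch; the paper's path-integral treatment of *non*-Gaussian colored noise is analytical and is not reproduced here. Parameter values are the textbook HH constants.

```python
import math, random

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01, sigma=1.0, tau_c=1.0, seed=1):
    """Euler integration of the Hodgkin-Huxley equations with an added
    OU (Gaussian colored) noise current. Returns the voltage trace (mV)."""
    rng = random.Random(seed)
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n, eta = -65.0, 0.053, 0.596, 0.317, 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        # Standard HH gating rate functions (voltages in mV, times in ms)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        # OU colored noise: d(eta) = -eta/tau_c dt + sigma*sqrt(2/tau_c) dW
        eta += (-eta / tau_c) * dt + sigma * math.sqrt(2.0 * dt / tau_c) * rng.gauss(0.0, 1.0)
        v += dt * (i_ext + eta - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace

v = simulate_hh()
print(max(v) > 0.0)  # an action potential overshoots 0 mV
```

Replacing `rng.gauss` with a heavier-tailed generator would give a crude stand-in for the non-Gaussian noise q the paper studies.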
Stabilization Approaches for Linear and Nonlinear Reduced Order Models
Rezaian, Elnaz; Wei, Mingjun
2017-11-01
It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high fidelity simulations, while fast computation is achieved. In practice it comes to stability and accuracy of ROMs. Given the inviscid nature of Euler equations it becomes more challenging to achieve stability, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms. At the same time, symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high fidelity simulation and the ROM outputs. Promising results are obtained in its application on the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
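The first step described above, Morris screening, ranks parameters by their mean absolute elementary effect; only the parameters that survive this cheap screen enter the expensive gPCE analysis. The sketch below is a minimal radial-style version on the unit hypercube, not a full Morris trajectory design, and the toy model is invented.

```python
import random

def morris_mu_star(model, n_params, n_traj=20, delta=0.5, seed=0):
    """Mean absolute elementary effect per parameter on [0, 1]^k.
    A minimal radial-style screening sketch."""
    rng = random.Random(seed)
    mu = [0.0] * n_params
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 0.5) for _ in range(n_params)]
        y0 = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta                     # perturb one parameter
            mu[i] += abs(model(xp) - y0) / delta
    return [m / n_traj for m in mu]

# Toy model: x0 matters a lot, x1 a little, x2 not at all
f = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu = morris_mu_star(f, 3)
print([round(m, 1) for m in mu])  # [10.0, 1.0, 0.0]
```

In the two-step approach, parameters with mu-star below a chosen threshold (here x2, and perhaps x1) would be fixed at nominal values before computing Sobol indices with gPCE.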
A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems
National Research Council Canada - National Science Library
Qureshi, Zahid H
2008-01-01
.... This report provides a review of key traditional accident modelling approaches and their limitations, and describes new system-theoretic approaches to the modelling and analysis of accidents in safety-critical systems...
A Neural Model of Face Recognition: a Comprehensive Approach
Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina
Visual recognition of faces is an essential behavior of humans: we perform optimally in everyday life, and just such a performance enables us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize a person's face against a background. Given the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach, with the purpose of reproducing some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason our model of face recognition has been based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering efforts allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces and to the ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.
Billieux, Joël; Philippot, Pierre; Schmid, Cécile; Maurage, Pierre; De Mol, Jan; Van der Linden, Martial
2015-01-01
Dysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance. The addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance. Copyright © 2014 John Wiley & Sons, Ltd.
A New Approach to Model Pitch Perception Using Sparse Coding.
Directory of Open Access Journals (Sweden)
Oded Barzelay
2017-01-01
Full Text Available Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus to its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing-fundamental stimulus with resolved or unresolved harmonics, or a stimulus with the same spectral content presented at low or high amplitude: these all give rise to the same percept of pitch. In contrast, the AN representations of these different stimuli are not invariant to these effects. In fact, due to the saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC): the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and unresolved harmonics, missing-fundamental pitches, stimuli at both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
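The sparse-coding step can be sketched with matching pursuit over an overcomplete sinusoidal dictionary: a signal is greedily represented by the few atoms most correlated with the residual. This is only a stand-in for the paper's machinery; the sinusoidal dictionary, the frequency grid, the threshold, and the gcd-based "sieve" below are all assumptions, whereas the paper uses spatiotemporal AN-population atoms and a proper harmonic sieve.

```python
import numpy as np

fs, dur = 8000, 0.05
t = np.arange(int(fs * dur)) / fs

# Dictionary: unit-norm sinusoids at candidate frequencies (hypothetical grid).
freqs = np.arange(100, 1001, 50)
D = np.stack([np.sin(2 * np.pi * f * t) for f in freqs])
D /= np.linalg.norm(D, axis=1, keepdims=True)

def matching_pursuit(x, D, n_atoms=3):
    """Greedily pick the atoms most correlated with the current residual."""
    residual = x.copy()
    coeffs = np.zeros(len(D))
    for _ in range(n_atoms):
        corr = D @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * D[k]
    return coeffs

# Missing-fundamental-like stimulus: harmonics at 400 and 600 Hz, F0 = 200 Hz absent.
x = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 600 * t)
c = matching_pursuit(x, D)
active = freqs[np.abs(c) > 0.1 * np.abs(c).max()]

# Crude stand-in for a harmonic sieve: the common divisor of the active atoms
# recovers the missing fundamental.
f0 = int(np.gcd.reduce(active))
```

The sparse code is "a few non-zero coefficients" (`active`), from which a downstream sieve can read off a pitch even though no energy is present at F0.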
Healthcare waste management: an interpretive structural modeling approach.
Thakur, Vikas; Anbanandam, Ramesh
2016-06-13
Purpose - The World Health Organization identified infectious healthcare waste as a threat to the environment and human health. India's current medical waste management system has limitations, which lead to ineffective and inefficient waste handling practices. Hence, the purpose of this paper is to: first, identify the important barriers that hinder India's healthcare waste management (HCWM) systems; second, classify operational, tactical and strategic issues to discuss the managerial implications at different management levels; and third, assign all barriers to four quadrants depending upon their driving and dependence power. Design/methodology/approach - India's HCWM system barriers were identified through the literature, field surveys and brainstorming sessions. Interrelationships among all the barriers were analyzed using interpretive structural modeling (ISM). Fuzzy-Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis was used to classify HCWM barriers into four groups. Findings - In total, 25 HCWM system barriers were identified and placed in 12 different ISM model hierarchy levels. Fuzzy-MICMAC analysis placed eight barriers in the second quadrant, five in the third and 12 in the fourth quadrant to define their relative importance in the ISM model. Research limitations/implications - The study's main limitation is that all the barriers were identified through a field survey and brainstorming sessions conducted only in Uttarakhand, a northern state of India. The problems in implementing HCWM practices may differ between regions, hence the current study needs to be replicated in different Indian states to define the waste disposal strategies for hospitals. Practical implications - The model will help hospital managers and Pollution Control Boards to plan their resources accordingly and make policies targeting key performance areas. Originality/value - The study is the first attempt to identify India's HCWM system barriers and prioritize
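The core ISM computation behind such a hierarchy can be sketched in a few lines: from a binary "barrier i influences barrier j" adjacency matrix, build the transitive-closure reachability matrix, then peel off hierarchy levels. The 4-barrier chain below is hypothetical, standing in for the paper's 25 barriers; the level-partition rule, however, is the standard ISM one.

```python
import numpy as np

def reachability(adj):
    """Transitive closure (with self-loops) via repeated boolean products."""
    n = len(adj)
    r = np.array(adj, dtype=int) + np.eye(n, dtype=int)
    for _ in range(n):
        r = ((r + r @ r) > 0).astype(int)
    return r

def level_partition(r):
    """A barrier is top-level when its reachability set equals the
    intersection of its reachability and antecedent sets; remove and repeat."""
    n = len(r)
    remaining, levels = set(range(n)), []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if r[i, j]}
            antec = {j for j in remaining if r[j, i]}
            if reach == reach & antec:
                level.append(i)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Hypothetical influence chain: barrier 0 drives 1 and 3, barrier 1 drives 2.
adj = [[0, 1, 0, 1],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
r = reachability(adj)
levels = level_partition(r)   # dependent barriers surface first, drivers last
```

Driving and dependence power for the MICMAC quadrants are then just the row and column sums of `r`.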
Teaching EFL Writing: An Approach Based on the Learner's Context Model
Lin, Zheng
2017-01-01
This study aims to examine qualitatively a new approach to teaching English as a foreign language (EFL) writing based on the learner's context model. It investigates the context model-based approach in class and identifies key characteristics of the approach delivered through a four-phase teaching and learning cycle. The model collects research…
Computational Approaches for Modeling the Multiphysics in Pultrusion Process
Directory of Open Access Journals (Sweden)
P. Carlone
2013-01-01
Full Text Available Pultrusion is a continuous manufacturing process used to produce high-strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared for the evaluation of the temperature and the degree of cure distributions inside the heating die and in the post-die region. The transient stresses, distortions, and pull force are predicted using a sequentially coupled three-dimensional (3D) thermochemical analysis together with a 2D plane-strain mechanical analysis using the finite element method, and compared with results obtained from a semianalytical approach.
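The thermochemical part of such a simulation tracks the degree of cure α along the die. A minimal sketch, assuming an Arrhenius-type kinetic law dα/dt = A·exp(−E/(R·T))·(1−α)^n, a uniform die temperature, and invented kinetic constants (the paper's resin data are not reproduced here), shows the key coupling: pull speed sets residence time, residence time sets exit cure.

```python
import numpy as np

# Hypothetical kinetic constants and die temperature, for illustration only.
A, E, Rgas, n = 1.0e5, 6.0e4, 8.314, 1.5   # 1/s, J/mol, J/(mol K), order
T_die = 440.0                               # K, idealised uniform die temperature

def cure_profile(speed, die_length, steps=10000):
    """Degree of cure at the die exit for a given pull speed (m/s),
    integrating dalpha/dt = k (1 - alpha)^n with forward Euler."""
    dt = die_length / speed / steps          # residence time split into steps
    alpha = 0.0
    k = A * np.exp(-E / (Rgas * T_die))      # Arrhenius rate at die temperature
    for _ in range(steps):
        alpha += dt * k * (1.0 - alpha) ** n
    return alpha

slow = cure_profile(0.002, 0.9)   # ~450 s in a 0.9 m die: nearly full cure
fast = cure_profile(0.02, 0.9)    # ~45 s residence time: under-cured exit
```

In the paper this ODE is coupled to the heat equation and the mechanical analysis; here the temperature field is frozen to keep the sketch self-contained.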
A Systematic Approach to Modelling Change Processes in Construction Projects
Directory of Open Access Journals (Sweden)
Ibrahim Motawa
2012-11-01
Full Text Available Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling the change process within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between the change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have high influence on the project. It also concludes that, for the relationship between change causes and effects, the multiple causes of change should be modelled in a way that enables evaluating the change effects more accurately. The proposed approach is the framework for tackling such conclusions and can be used for evaluating change cases depending on the available information at the early stages of construction projects.
Dynamical system approach to running Λ cosmological models
International Nuclear Information System (INIS)
Stachowski, Aleksander; Szydlowski, Marek
2016-01-01
We study the dynamics of cosmological models with a time dependent cosmological term. We consider five classes of models; two with the non-covariant parametrization of the cosmological term Λ: Λ(H)CDM cosmologies, Λ(a)CDM cosmologies, and three with the covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2) g^{αβ} ∇_α φ ∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term, like Λ(a) and Λ(H), give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)
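The dynamical-systems viewpoint can be made concrete with the simplest member of this family: for a flat matter + constant-Λ universe, the fraction x = Ω_Λ obeys the autonomous equation dx/dN = 3x(1−x) in e-fold time N = ln a, with fixed points x = 0 (matter-dominated, unstable) and x = 1 (de Sitter, stable) organising every trajectory. The running-Λ parametrizations of the paper modify this right-hand side; only the constant-Λ case is sketched here.

```python
def rhs(x):
    """dx/dN for x = Omega_Lambda in flat matter + constant-Lambda cosmology."""
    return 3.0 * x * (1.0 - x)

def evolve(x0, n_steps=2000, dN=0.01):
    """Integrate dx/dN with classical 4th-order Runge-Kutta over 20 e-folds."""
    x = x0
    for _ in range(n_steps):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dN * k1)
        k3 = rhs(x + 0.5 * dN * k2)
        k4 = rhs(x + dN * k3)
        x += dN * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

# Any initial condition in (0, 1) flows to the de Sitter attractor x = 1,
# independently of how special the starting point is.
late = evolve(0.01)
```

This is exactly the kind of statement ("all admissible solutions for all initial conditions") that the phase-space analysis in the paper establishes for the more general Λ(·) models.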
Forecasting wind-driven wildfires using an inverse modelling approach
Directory of Open Access Journals (Sweden)
O. Rios
2014-06-01
Full Text Available A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the ongoing fire. This paper presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate-of-spread theory with a perimeter expansion model based on Huygens' principle, and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event) in the order of 10 min for a spatial scale of 100 m. The greatest strengths of our method are lightness, speed and flexibility. We specifically tailor the forecast to be efficient and computationally cheap so it can be used in mobile systems for field deployment and operational use. Thus, we put emphasis on producing a positive lead time and the means to maximise it.
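The assimilate-then-forecast loop can be reduced to a toy version: observe noisy front positions, invert a forward spread model for its parameter, and run the calibrated model ahead of real time. The forward model below, front(t) = R·t with a single spread rate R, is a deliberate simplification invented for the sketch; the paper's inversion targets Rothermel's model with a Huygens perimeter expansion and uses tangent-linear automatic differentiation rather than this closed-form least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(R, times):
    """Head-fire position along the wind axis for spread rate R (m/s)."""
    return R * times

def assimilate(times, observations):
    """Least-squares estimate of R from noisy front-position observations."""
    return float(times @ observations / (times @ times))

# Synthetic "truth": R = 0.8 m/s, observed every minute for 10 min with
# 5 m observation noise (all values hypothetical).
R_true = 0.8
t_obs = np.arange(60.0, 601.0, 60.0)
obs = forward(R_true, t_obs) + rng.normal(0.0, 5.0, t_obs.size)

R_est = assimilate(t_obs, obs)
forecast = forward(R_est, np.array([1200.0]))  # front position 10 min ahead
```

The positive lead time in the paper corresponds to the gap between the last assimilated observation (600 s) and the forecast horizon (1200 s) here.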
Stability of rotor systems: A complex modelling approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1998-01-01
The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, M z̈ + (D + iG)ż + (K + iN)z = 0, where the system matrices M, D, G, K and N are real symmetric. Moreover, M and K are assumed to be positive definite and D, G and N to be positive semidefinite. The complex setting is equivalent to twice as large a system of second order with real matrices. It is well known that rotor systems can exhibit instability for large angular velocities due to internal damping, unsymmetrical steam flow in turbines, or imperfect … approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 x 4 randomly generated matrices.
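The model M z̈ + (D + iG)ż + (K + iN)z = 0 is asymptotically stable when every eigenvalue λ of the quadratic pencil λ²M + λ(D + iG) + (K + iN) has negative real part, which a companion linearization checks directly. The sketch below uses this brute-force eigenvalue test, not the Rayleigh-quotient bounds of the paper, and the 2×2 matrices are invented for illustration rather than taken from the Laval-rotor examples.

```python
import numpy as np

def pencil_eigs(M, D, G, K, N):
    """Eigenvalues of the companion form of the complex quadratic pencil
    lambda^2 M + lambda (D + iG) + (K + iN)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    C = D + 1j * G                      # combined damping + gyroscopic term
    S = K + 1j * N                      # combined stiffness + circulatory term
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ S, -Minv @ C]])
    return np.linalg.eigvals(A)

# Hypothetical symmetric data matching the sign assumptions of the abstract:
# M, K positive definite; D, G, N positive semidefinite.
M = np.eye(2)
D = np.diag([0.4, 0.3])
G = np.diag([1.0, 1.2])
K = np.diag([4.0, 9.0])
N = np.diag([0.1, 0.1])

eigs = pencil_eigs(M, D, G, K, N)
stable = bool(np.max(eigs.real) < 0)    # all modes decay => asymptotic stability
```

The appeal of the paper's Rayleigh-quotient bounds is that they certify `stable` without computing these eigenvalues at all.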
Do recommender systems benefit users? a modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
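The contrast drawn above, between recommendations that track real preferences and purchases made only because they were recommended, can be caricatured in a few lines. Everything below is invented for the sketch (hidden taste vectors, a single purchase per user, a popularity-only recommender); the paper's model and its accuracy metrics are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_items = 200, 20
taste = rng.random((n_users, n_items))        # hidden user-item affinities
favourite = taste.argmax(axis=1)              # each user's truly best item

# Informative purchase history: each user has bought their favourite once.
purchases = np.zeros((n_users, n_items))
purchases[np.arange(n_users), favourite] = 1

def recommend_popular(purchases):
    """Recommend every user the single globally most-purchased item."""
    return np.full(len(purchases), purchases.sum(axis=0).argmax())

def recommend_personal(purchases, taste_estimate):
    """Recommend each user the item their (estimated) taste ranks highest."""
    return taste_estimate.argmax(axis=1)

# "Real accuracy" in the abstract's sense: how often the recommendation
# matches the user's true favourite.
pop_acc = (recommend_popular(purchases) == favourite).mean()
per_acc = (recommend_personal(purchases, purchases) == favourite).mean()
```

With preference-revealing histories the personalised rule is perfectly accurate, while the popularity rule is little better than a random draw, which is the qualitative regime change the abstract describes.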
International Nuclear Information System (INIS)
Fazio, C; Guastella, I; Tarantino, G
2007-01-01
In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it, and on modelling of the experimental results using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities were built in the context of the laboratory on mechanical wave propagation of the two-year graduate teacher education programme of the University of Palermo. Some considerations about observed changes in trainee teachers' attitudes towards using experiments and modelling are discussed.
A Cyclical Approach to Continuum Modeling: A Conceptual Model of Diabetic Foot Care
Directory of Open Access Journals (Sweden)
Martha L. Carvour
2017-12-01
Full Text Available “Cascade” or “continuum” models have been developed for a number of diseases and conditions. These models define the desired, successive steps in care for that disease or condition and depict the proportion of the population that has completed each step. These models may be used to compare care across subgroups or populations and to identify and evaluate interventions intended to improve outcomes on the population level. Previous cascade or continuum models have been limited by several factors. These models are best suited to processes with stepwise outcomes—such as screening, diagnosis, and treatment—with a single defined outcome (e.g., treatment or cure for each member of the population. However, continuum modeling is not well developed for complex processes with non-sequential or recurring steps or those without singular outcomes. As shown here using the example of diabetic foot care, the concept of continuum modeling may be re-envisioned with a cyclical approach. Cyclical continuum modeling may permit incorporation of non-sequential and recurring steps into a single continuum, while recognizing the presence of multiple desirable outcomes within the population. Cyclical models may simultaneously represent the distribution of clinical severity and clinical resource use across a population, thereby extending the benefits of traditional continuum models to complex processes for which population-based monitoring is desired. The models may also support communication with other stakeholders in the process of care, including health care providers and patients.