WorldWideScience

Sample records for model simplification approach

  1. Solid model design simplification

    Energy Technology Data Exchange (ETDEWEB)

    Ames, A.L.; Rivera, J.J.; Webb, A.J.; Hensinger, D.M.

    1997-12-01

    This paper documents an investigation of approaches to improving the quality of Pro/Engineer-created solid model data for use by downstream applications. The investigation identified a number of sources of problems caused by deficiencies in Pro/Engineer's geometric engine, and developed prototype software capable of detecting many of these problems and guiding users towards simplified, usable models. The prototype software was tested using Sandia production solid models, and provided significant leverage in attacking the simplification problem.

  2. Practical simplifications for radioimmunotherapy dosimetric models

    Energy Technology Data Exchange (ETDEWEB)

    Shen, S.; DeNardo, G.L.; O'Donnell, R.T.; Yuan, A.; DeNardo, D.A.; Macey, D.J.; DeNardo, S.J. [Univ. of California, Sacramento, CA (United States). Davis Medical Center]

    1999-01-01

    Radiation dosimetry is potentially useful for assessment and prediction of efficacy and toxicity in radionuclide therapy. The usefulness of these dose estimates relies on the establishment of a dose-response model using accurate pharmacokinetic data and a radiation dosimetric model. Because of the complexity of radiation dose estimation, many practical simplifications have been introduced into the dosimetric modeling for clinical trials of radioimmunotherapy. Although research efforts are generally needed to improve the simplifications used at each stage of model development, practical simplifications are often possible for specific applications without significant consequences to the dose-response model. This study evaluated the magnitude of uncertainty associated with the practical simplifications introduced in developing dosimetric methods for radioimmunotherapy: (1) organ mass of the MIRD phantom; (2) radiation contribution from the target alone; (3) interpolation of S values; (4) macroscopic tumor uniformity; and (5) fit of tumor pharmacokinetic data.

  3. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    As one of the most important tasks in augmented scene modeling, terrain simplification research has gained more and more attention. In this paper, we focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on an analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with those methods.

  4. Model simplification and optimization of a passive wind turbine generator

    OpenAIRE

    Sareni, Bruno; Abdelli, Abdenour; Roboam, Xavier; Tran, Duc-Hoan

    2009-01-01

    In this paper, the design of a low-cost, fully passive wind turbine system without active electronics (power or control) is investigated. The efficiency of such a device can be obtained only if the design parameters are mutually adapted through an optimization design approach. For this purpose, sizing and simulation models are developed to characterize the behavior and the efficiency of the wind turbine system. A model simplification approach is presented ...

  5. A Novel Fast Method for Point-sampled Model Simplification

    Directory of Open Access Journals (Sweden)

    Cao Zhi

    2016-01-01

    A novel fast simplification method for point-sampled statue models is proposed. Simplification for 3D model reconstruction is a hot topic in the field of 3D surface construction, but it is difficult because the point clouds of many 3D models are very large, which makes running times long. In this paper, a two-stage simplification method is proposed. First, a feature-preserving non-uniform simplification method for point clouds is presented, which reduces the data set to remove redundancy while retaining the features of the model. Second, an affinity-propagation clustering method is used to classify each point as a sharp point or a simple point. The advantages of Affinity Propagation clustering are its message passing among data points and its fast processing speed. Together with re-sampling, it can dramatically reduce the duration of the process while keeping memory cost low. Both theoretical analysis and experimental results show that the proposed method is efficient and that the details of the surface are preserved well after simplification.
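
    Since the record names Affinity Propagation explicitly, a small sketch may help; everything below (the synthetic cloud, the damping value) is an assumption, and it uses scikit-learn's AffinityPropagation rather than the authors' own implementation.

```python
# Hypothetical sketch: exemplar-based point-cloud reduction with Affinity
# Propagation, in the spirit of the two-stage method described above.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(500, 3))  # stand-in for a scanned model

# Affinity Propagation picks real input points as exemplars, so the
# simplified cloud is a subset of the original samples.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(points)
simplified = ap.cluster_centers_

print(f"{len(points)} points reduced to {len(simplified)} exemplars")
```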

  6. Study of Simplification of Markov Model for Analyzing System Dependability

    Energy Technology Data Exchange (ETDEWEB)

    Son, Gwang Seop; Kim, Dong Hoon; Choi, Jong Gyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    In this paper, we introduce a simplification methodology for the Markov models used in analyzing system dependability, based on a system failure rate concept. The system failure rate is the probability per unit time that the system fails or becomes unavailable, given that it was as good as new up to that time. Using this parameter, the Markov model of a subsystem can be replaced by its failure rate, and only this parameter then needs to be carried into the Markov model of the whole system. We propose this method for simplifying Markov models of complex system architectures: we define the system failure rate, and using this parameter the system's Markov model can be simplified.
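
    As a hedged illustration of the idea (not the authors' derivation), the sketch below collapses a two-state repairable subsystem into a single equivalent failure rate that a higher-level Markov model could then reuse; all rates are invented.

```python
# Collapse a two-state repairable subsystem Markov model into one
# equivalent failure rate. Numbers are made up for illustration.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1           # subsystem failure / repair rates (1/h)
Q_sub = np.array([[-lam, lam],
                  [  mu, -mu]])  # states: 0 = up, 1 = down

# Solve dp/dt = Q^T p long enough to reach the stationary distribution,
# then recover the equivalent rate from flow balance lam*p_up = mu*p_down.
p = expm(Q_sub.T * 1e4) @ np.array([1.0, 0.0])
lam_eq = p[1] * mu / p[0]
print(f"equivalent subsystem failure rate ~ {lam_eq:.2e} /h (true {lam:.0e})")
```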

  7. Surface Simplification of 3D Animation Models Using Robust Homogeneous Coordinate Transformation

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2014-01-01

    The goal of 3D surface simplification is to reduce the storage cost of 3D models. A 3D animation model typically consists of several 3D models; therefore, to ensure that animation models are realistic, numerous triangles are often required. However, animation models with a high storage cost also have a substantial computational cost. Hence, surface simplification methods are adopted to reduce the number of triangles and the computational cost of 3D models. The quadric error metric (QEM) has been identified as one of the most effective methods for simplifying static models. To simplify animation models using QEM, Mohr and Gleicher summed the QEM over all frames. However, homogeneous coordinate problems cannot be handled completely by QEM alone. To resolve this problem, this paper proposes a robust homogeneous coordinate transformation that improves the animation simplification method of Mohr and Gleicher. In this study, the root mean square errors of the proposed method were compared with those of Mohr and Gleicher's method, and the experimental results indicated that the proposed approach preserves more contour features at the same simplification ratio.
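
    A minimal sketch of the quadric error metric idea, with the per-frame summation the record attributes to Mohr and Gleicher; the triangle geometry is invented, and this is not the paper's homogeneous coordinate transformation.

```python
# QEM sketch: each triangle contributes the quadric of its supporting
# plane; for an animation, a vertex's per-frame quadrics are summed.
import numpy as np

def plane_quadric(a, b, c):
    """4x4 quadric K = p p^T for the plane through triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, a))        # plane [nx, ny, nz, d]
    return np.outer(p, p)

def vertex_error(Q, v):
    """Sum of squared plane distances of vertex v under quadric Q."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# Two 'frames' of one deforming triangle; sum the quadrics across frames.
f0 = [np.array(t, float) for t in ([0, 0, 0], [1, 0, 0], [0, 1, 0])]
f1 = [np.array(t, float) for t in ([0, 0, 0.2], [1, 0, 0], [0, 1, 0])]
Q = plane_quadric(*f0) + plane_quadric(*f1)
print(vertex_error(Q, np.array([0.0, 0.0, 0.1])))
```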

  8. Simplification of multiple Fourier series - An example of algorithmic approach

    Science.gov (United States)

    Ng, E. W.

    1981-01-01

    This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.

  9. A Memory Insensitive Technique for Large Model Simplification

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Silva, C

    2001-08-07

    In this paper we propose three simple but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20] which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the requirement of having enough memory to hold even the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell.

  10. Towards simplification of hydrologic modeling: Identification of dominant processes

    Science.gov (United States)

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS, and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
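
    The FAST procedure named above is available in the open-source SALib package; the sketch below runs it on a stand-in function, with invented parameter names and bounds rather than actual PRMS parameters.

```python
# Hedged FAST workflow with SALib on a dummy model standing in for PRMS.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["snow_adj", "soil_moist_max", "gwflow_coef"],  # invented
    "bounds": [[0.5, 1.5], [1.0, 10.0], [0.001, 0.3]],
}

X = fast_sampler.sample(problem, 1000)          # FAST sample per parameter
# Dummy 'streamflow statistic' standing in for a PRMS output variable.
Y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2]

Si = fast.analyze(problem, Y)                   # first-order / total indices
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name}: S1 = {s1:.3f}")
```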

  11. A new model for the simplification of particle counting data

    Directory of Open Access Journals (Sweden)

    M. F. Fadal

    2012-06-01

    This paper proposes a three-parameter mathematical model to describe the particle size distribution in a water sample. The proposed model offers some conceptual advantages over two other models reported on previously, and also provides a better fit to the particle counting data obtained from 321 water samples taken over three years at a large South African drinking water supplier. Using the data from raw water samples taken from a moderately turbid, large surface impoundment, as well as samples from the same water after treatment, typical ranges of the model parameters are presented for both raw and treated water. Once calibrated, the model allows the calculation and comparison of total particle number and volumes over any randomly selected size interval of interest.
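
    The paper's exact three-parameter form is not reproduced in this record, so the sketch below fits a hypothetical three-parameter power law with an exponential cutoff to synthetic counts, to illustrate the calibrate-then-integrate use the abstract describes.

```python
# Fit a hypothetical particle size distribution n(d) = A d^-beta exp(-d/d0)
# to synthetic counts, then integrate over an arbitrary size interval.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def psd(d, A, beta, d0):
    return A * d**(-beta) * np.exp(-d / d0)

d = np.linspace(2.0, 100.0, 50)                 # size bins (um), assumed
rng = np.random.default_rng(1)
counts = psd(d, 5e4, 2.1, 40.0) * rng.lognormal(0.0, 0.05, d.size)

(A, beta, d0), _ = curve_fit(psd, d, counts, p0=(1e4, 2.0, 30.0))
lo, hi = 5.0, 15.0                              # any interval of interest
total, _ = quad(psd, lo, hi, args=(A, beta, d0))
print(f"fitted beta={beta:.2f}; particles in [{lo},{hi}] um ~ {total:.0f}")
```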

  12. A motion retargeting algorithm based on model simplification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new motion retargeting algorithm is presented, which adapts motion capture data to a new character. To make the resulting motion realistic, a physically-based optimization method is adopted. However, the optimization process converges poorly because of the high complexity of the physical human model. To address this problem, an appropriately simplified model, automatically determined by a motion analysis technique, is utilized, and motion retargeting is then implemented with this simplified model as an intermediate agent. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.

  13. Simplification of physics-based electrochemical model for lithium ion battery on electric vehicle. Part I: Diffusion simplification and single particle model

    Science.gov (United States)

    Han, Xuebing; Ouyang, Minggao; Lu, Languang; Li, Jianqiu

    2015-03-01

    Lithium ion batteries are now widely used in electric vehicles (EVs), where battery modeling and state estimation are of great significance. Rigorous physics-based electrochemical models are too complicated for on-line simulation in a vehicle. In this work, a simplification of the physics-based lithium ion battery model for application in battery management systems (BMS) on real electric vehicles is proposed. An approximate method for solving the solid phase diffusion and electrolyte concentration distribution problems is introduced; the approximate result is very close to that of the rigorous model but requires far fewer computations. An extended single particle model is developed from these approximations, and an on-line state of charge (SOC) estimation algorithm using the extended Kalman filter with this single particle model is discussed. This SOC estimation algorithm could be used in the BMS of a real vehicle.
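
    A hedged sketch of the estimation step only: an extended Kalman filter tracking SOC with a one-state coulomb-counting model and an invented polynomial OCV curve as the measurement map; this is not the paper's extended single particle model.

```python
# Scalar EKF for SOC: coulomb-counting prediction, OCV-based correction.
import numpy as np

rng = np.random.default_rng(0)
Q_cap = 2.0 * 3600.0                        # cell capacity (A s), assumed
ocv = lambda z: 3.0 + 1.2 * z - 0.5 * z**2  # hypothetical OCV(SOC) curve
docv = lambda z: 1.2 - 1.0 * z              # derivative of the OCV curve

z_true = 0.9
z, P = 0.7, 1e-2                            # deliberately wrong initial SOC
Qw, Rv = 1e-8, 1e-4                         # process / measurement noise
dt, current = 1.0, 1.0                      # 1 s steps, 1 A discharge

for _ in range(3600):
    z_true -= current * dt / Q_cap          # 'true' cell being measured
    z -= current * dt / Q_cap               # predict (coulomb counting)
    P += Qw
    v_meas = ocv(z_true) + rng.normal(0.0, 0.01)
    H = docv(z)                             # linearized measurement map
    K = P * H / (H * P * H + Rv)            # scalar Kalman gain
    z += K * (v_meas - ocv(z))
    P *= (1.0 - K * H)

print(f"true SOC {z_true:.3f}, EKF estimate {z:.3f}")
```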

  14. Advanced Spacecraft EM Modelling Based on Geometric Simplification Process and Multi-Methods Simulation

    Science.gov (United States)

    Leman, Samuel; Hoeppe, Frederic

    2016-05-01

    This paper presents the first results of a new generation of ElectroMagnetic (EM) methodology applied to spacecraft system modelling in the low frequency range (where the system's dimensions are of the same order of magnitude as the wavelength). This innovative approach aims at implementing appropriate simplifications of the real system based on the identification of the dominant electrical and geometrical parameters driving the global EM behaviour. One rigorous but expensive simulation is performed to quantify the error introduced by the use of simpler multi-models. If both the speed-up of the simulation time and the quality of the EM response are satisfactory, uncertainty simulation can be performed based on the simple-model library, implemented in a flexible and robust Kron's network formalism. This methodology is expected to open up new perspectives for fast parametric analysis and a deep understanding of system behaviour. It will support the identification of the main radiated and conducted coupling paths and the sensitive EM parameters, in order to optimize the protections and control the disturbance sources in spacecraft design phases.

  15. Effects of model layer simplification using composite hydraulic properties

    Science.gov (United States)

    Kuniansky, Eve L.; Sepulveda, Nicasio; Elango, Lakshmanan

    2011-01-01

    Groundwater provides much of the fresh drinking water to more than 1.5 billion people in the world (Clarke et al., 1996), and in the United States more than 50 percent of citizens rely on groundwater for drinking water (Solley et al., 1998). As aquifer systems are developed for water supply, the hydrologic system is changed. Water pumped from the aquifer system initially comes from some combination of induced recharge, water permanently removed from storage, and decreased groundwater discharge. Once a new equilibrium is achieved, all of the pumpage must come from induced recharge and decreased discharge (Alley et al., 1999). Further development of groundwater resources may result in reductions of surface-water runoff and base flows. Competing demands for groundwater resources require good management. Adequate data to characterize the aquifers and confining units of the system, such as hydrologic boundaries, groundwater levels, streamflow, groundwater pumping, and climatic data for recharge estimation, must be collected in order to quantify the effects of groundwater withdrawals on wetlands, streams, and lakes. Once such data are collected, three-dimensional (3D) groundwater flow models can be developed, calibrated, and used as tools for groundwater management. The main hydraulic parameters that make up a regional or subregional model of an aquifer system are the hydraulic conductivity and storage properties of the aquifers and confining units (hydrogeologic units) of the system. Many 3D groundwater flow models used to help assess groundwater/surface-water interactions require calculating "effective" or composite hydraulic properties of multilayered lithologic units within a hydrogeologic unit. The calculation of composite hydraulic properties stems from the need to characterize groundwater flow using coarse model layering, in order to reduce simulation times while still representing flow through the system accurately. The accuracy of flow models with ...

  16. Modeling gene regulatory networks: A network simplification algorithm

    Science.gov (United States)

    Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.

    2016-12-01

    Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. Such models can help biologists make predictions and prognoses, and even choose specialized treatments, when a disturbance of the GRN leads to a diseased condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network in which each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the subsequent inference process to focus only on the main protein hubs, the ones with the most potential to interfere with overall network behavior.

  17. Inductive Voltage Adder Network Analysis and Model Simplification

    Science.gov (United States)

    2007-06-01

    [Only report-form fragments survive in this record: performing organization Brookhaven National Laboratory, Upton, NY 11973 USA, and citations to related inductive-adder kicker pulser papers ("Kicker Pulser for DARHT-II", Proceedings of the 20th International LINAC Conference, pp. 509-511, 2000; "Modeling of an Inductive Adder Kicker Pulser for a Proton Radiography System", Digest of Technical Papers, Pulsed Power Plasma Science, PPPS-2001).]

  18. Effects of model layer simplification using composite hydraulic properties

    Science.gov (United States)

    Sepulveda, Nicasio; Kuniansky, Eve L.

    2010-01-01

    The effects of simplifying hydraulic property layering within an unconfined aquifer and the underlying confining unit were assessed. The hydraulic properties of lithologic units within the unconfined aquifer and confining unit were computed by analyzing the aquifer-test data using radial, axisymmetric two-dimensional (2D) flow. Time-varying recharge to the unconfined aquifer and pumping from the confined Upper Floridan aquifer (UFA) were simulated using 3D flow. Conceptual flow models were developed by gradually reducing the number of lithologic units in the unconfined aquifer and confining unit and calculating composite hydraulic properties for the simplified lithologic units. Composite hydraulic properties were calculated using either thickness-weighted averages or inverse modeling with regression-based parameter estimation. No significant residuals were simulated when all lithologic units comprising the unconfined aquifer were simulated as one layer. The largest residuals occurred when the unconfined aquifer and confining unit were aggregated into a single layer (quasi-3D), with residuals over 100% for the leakage rates to the confined aquifer and the heads in the confining unit. Residuals increased with contrasts in vertical hydraulic conductivity between the unconfined aquifer and confining unit, and increased further when the constant-head boundary at the bottom of the Upper Floridan aquifer was replaced with a no-flow boundary.

  19. Simplifications in modelling of dynamical response of coupled electro-mechanical system

    Science.gov (United States)

    Darula, Radoslav; Sorokin, Sergey

    2016-12-01

    The choice of the most suitable model of an electro-mechanical system depends on many variables, such as the scale of the system, the type and frequency range of its operation, or its power requirements. The article focuses on a model of the electromagnetic element used in a passive regime (no feedback loops are assumed). A general lumped-parameter model (a conventional mass-spring-damper system coupled to an electric circuit consisting of a resistance, an inductance and a capacitance) is compared with its simplified version, where the full RLC circuit is replaced with its RL simplification, i.e. the capacitance of the electric system is neglected and just its inductance and resistance are considered. From the comparison of the dynamical responses of these systems, the range of applicability of the simplified model is assessed for free as well as forced vibration.
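
    A small numerical experiment in the spirit of the comparison above: a mass-spring-damper coupled to a full RLC branch versus the RL simplification; all parameter values and the coupling coefficient are invented.

```python
# Compare a coupled mass-spring-damper + RLC model with its RL variant.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.5, 2000.0       # mechanical mass, damping, stiffness
R, Lind, C = 10.0, 0.1, 1e-4     # resistance, inductance, capacitance
kappa = 5.0                      # generic electromechanical coupling

def rlc(t, y):                   # states: x, v, q (charge), i
    x, v, q, i = y
    a = (-c * v - k * x + kappa * i) / m
    di = (-R * i - q / C - kappa * v) / Lind
    return [v, a, i, di]

def rl(t, y):                    # capacitance neglected: states x, v, i
    x, v, i = y
    a = (-c * v - k * x + kappa * i) / m
    di = (-R * i - kappa * v) / Lind
    return [v, a, di]

t_eval = np.linspace(0.0, 2.0, 2000)
full = solve_ivp(rlc, (0.0, 2.0), [0.01, 0.0, 0.0, 0.0], t_eval=t_eval)
simp = solve_ivp(rl, (0.0, 2.0), [0.01, 0.0, 0.0], t_eval=t_eval)
print("max |x| full:", np.abs(full.y[0]).max(),
      "RL:", np.abs(simp.y[0]).max())
```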

  20. A New Skeleton Feature Extraction Method for Terrain Model Using Profile Recognition and Morphological Simplification

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    It is always difficult to preserve rings and main trunk lines in real engineering practice of feature extraction for terrain models. In this paper, a new skeleton feature extraction method is proposed to solve these problems; it puts forward a simplification algorithm based on morphological theory to eliminate the noise points among the target points produced by classical profile recognition. As is well known, noise points are the key factor influencing the accuracy and efficiency of feature extraction. Our method connects the optimized feature-point subset after morphological simplification; therefore, the efficiency of ring processing and pruning is improved markedly, and the accuracy is enhanced without the negative effect of noisy points. An out-branching concept is defined, and related algorithms are proposed to extract sufficiently long trunks consistent with the real terrain skeleton. All algorithms were run on real experimental data, including GTOPO30 and benchmark data provided by PPA, to verify the performance and accuracy of our method. The results show that our method outperforms PPA as a whole.

  1. Simplification of a pharmacokinetic model for red blood cell methotrexate disposition.

    Science.gov (United States)

    Pan, Shan; Korell, Julia; Stamp, Lisa K; Duffull, Stephen B

    2015-12-01

    A pharmacokinetic (PK) model is available for describing the time course of the concentrations of methotrexate (MTX or MTXGlu1) and its active polyglutamated metabolites (MTXGlu2-5) in red blood cells (RBCs). In this study, we aimed to simplify the MTX PK model and to optimise the blood sampling schedules for use in future studies. A proper lumping technique was used to simplify the original MTX RBC PK model. The sum of predicted RBC MTXGlu3-5 concentrations in the simplified and original models was compared. The sampling schedules for MTXGlu3-5 or all MTX polyglutamates in RBCs were optimised using the Population OPTimal design (POPT) software. The MTX RBC PK model was simplified into a three-state model. The maximum absolute relative difference in the sum of predicted RBC MTXGlu3-5 concentrations over time was 6.3%. A design with five blood samples was identified for estimating the parameters of the simplified model. This study illustrates the application of model simplification processes to an existing model for MTX RBC PK. The same techniques may be adopted by other studies with similar interests.
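
    The proper-lumping step can be illustrated on a generic linear compartment chain; the rate constants below are invented, not the published MTX parameters, and the MTXGlu3-5 grouping is only mirrored schematically.

```python
# Proper lumping of a linear compartment model: group states with a 0/1
# lumping matrix L; the reduced system matrix is A_hat = L A L_plus.
import numpy as np
from scipy.linalg import expm

# Original 5-state chain: MTXGlu1 -> ... -> MTXGlu5, first-order rates.
k = np.array([0.8, 0.5, 0.3, 0.2])            # invented rate constants
A = np.diag(np.append(-k, 0.0)) + np.diag(k, -1)

# Lump states 3-5 together (mirroring the MTXGlu3-5 grouping above).
L = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 1, 1]], dtype=float)
L_plus = L.T @ np.linalg.inv(L @ L.T)         # generalized inverse of L
A_hat = L @ A @ L_plus

x0 = np.array([1.0, 0, 0, 0, 0])
t = 5.0
full = L @ expm(A * t) @ x0                   # lumped output, full model
red = expm(A_hat * t) @ (L @ x0)              # output of the reduced model
print(np.round(full, 4), np.round(red, 4))    # identical: lumping is exact
```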

  2. Application-oriented simplification of actuation mechanism and physical model for ionic polymer-metal composites

    Science.gov (United States)

    Zhu, Zicai; Wang, Yanjie; Liu, Yanfa; Asaka, Kinji; Sun, Xiaofei; Chang, Longfei; Lu, Pin

    2016-07-01

    Water-containing ionic polymer-metal composites (IPMCs) show deformation properties that vary in complex ways with water content. In order to develop a simple application-oriented model for engineering use, the actuation mechanisms and model equations should be simplified as necessary. Beginning from our previous comprehensive multi-physical model of an IPMC actuator, numerical analysis was performed to identify the main factors influencing the bending deformation and the corresponding simplified model. In this paper, three aspects are mainly concerned. (1) Regarding the mass transport process, the diffusion caused by the concentration gradient mainly influences the concentrations of cation and water at the two electrode boundaries. (2) When the transport components in the model are specified as hydrated cations and free water, at the cathode the hydrated cation concentration profile is flatter, whereas the concentrations of free water and total water both show drastic changes. In general, these two factors influence the redistribution of cation and water but have little impact on deformation prediction; thus, they can be ignored in the simplification. (3) An extended osmotic pressure is proposed to cover all eigen stresses simply, with an effective osmotic coefficient. Combined with a few other linearization methods, a simplified model has been obtained by sacrificing prediction precision for the transport process. Furthermore, the improved model has been verified by fitting IPMC deformation as it evolves with water content. The results show that the simplified model can predict the complex deformations of IPMCs.

  3. Investigation of Model Simplification and Its Influence on the Accuracy in FEM Magnetic Calculations of Gearless Drives

    DEFF Research Database (Denmark)

    Andersen, Søren Bøgh; Santos, Ilmar F.; Fuerst, Axel

    2012-01-01

    Finite-element models of electrical motors often become very complex and time consuming to evaluate when taking into account every little detail. There is therefore a need for simplifications to make the models computable within a reasonable time frame. This is especially important in an optimization process, as many iterations usually have to be performed. The focus of this work is an investigation of the electromagnetic part of a gearless mill drive based on real system data, which is part of a larger project building a multiphysics model including electromagnetic, thermal, and structural ...

  4. Simplification of the Flux Function for a Higher-order Gas-kinetic Evolution Model

    CERN Document Server

    Zhou, Guangzhao; Liu, Feng

    2016-01-01

    The higher-order gas-kinetic scheme for solving the Navier-Stokes equations has been studied in recent years. In addition to the use of higher-order reconstruction techniques, many terms are used in the Taylor expansion of the gas distribution function. Therefore, a large number of coefficients must be determined when calculating the time evolution of the gas distribution function at cell interfaces. As a consequence, the higher-order flux function takes much more computational time than that of a second-order gas-kinetic scheme. This paper aims to simplify the evolution model in two steps. First, the coefficients related to the higher-order spatial and temporal derivatives of the distribution function are redefined to reduce the computational cost. Second, based on physical analysis, some terms can be removed without loss of accuracy. Through these simplifications, the computational efficiency of the higher-order scheme is increased significantly. In addition, a self-adaptive numerical viscosity ...

  5. A "Kane's Dynamics" Model for the Active Rack Isolation System Part Two: Nonlinear Model Development, Verification, and Simplification

    Science.gov (United States)

    Beech, G. S.; Hampton, R. D.; Rupert, J. K.

    2004-01-01

    Many microgravity space-science experiments require vibratory acceleration levels that are unachievable without active isolation. The Boeing Corporation's active rack isolation system (ARIS) employs a novel combination of magnetic actuation and mechanical linkages to address these isolation requirements on the International Space Station. Effective model-based vibration isolation requires: (1) an isolation device, (2) an adequate dynamic (i.e., mathematical) model of that isolator, and (3) a suitable, corresponding controller. This Technical Memorandum documents the validation of that high-fidelity dynamic model of ARIS. The verification of the dynamics model was achieved by utilizing two commercial off-the-shelf (COTS) software tools: Deneb's ENVISION® and Online Dynamics' Autolev™. ENVISION is a robotics software package developed for the automotive industry that employs three-dimensional computer-aided design models to facilitate both forward and inverse kinematics analyses. Autolev is a DOS-based interpreter designed, in general, to solve vector-based mathematical problems and, specifically, to solve dynamics problems using Kane's method. The simplification of this model was achieved using the small-angle theorem for the joint angle of the ARIS actuators. This simplification has a profound effect on the overall complexity of the closed-form solution while yielding a closed-form solution easily employed using COTS control hardware.

  6. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming, and the present paper is an attempt to fill this gap. On the theoretical side, a general characterization is introduced of the problem of simplification of integrity constraints, and a natural definition is given of what it means for a simplification procedure to be ideal. We prove that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based ...

  7. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia

    CERN Document Server

    Yatskar, Mark; Danescu-Niculescu-Mizil, Cristian; Lee, Lillian

    2010-01-01

    We report on work in progress on extracting lexical simplifications (e.g., "collaborate" -> "work together"), focusing on utilizing edit histories in Simple English Wikipedia for this task. We consider two main approaches: (1) deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and (2) using metadata to focus on edits that are more likely to be simplification operations. We find our methods to outperform a reasonable baseline and yield many high-quality lexical simplifications not included in an independently-created manually prepared list.

  8. Infrastructure Area Simplification Plan

    CERN Document Server

    Field, L.

    2011-01-01

    The infrastructure area simplification plan was presented at the 3rd EMI All Hands Meeting in Padova. This plan only affects the information and accounting systems as the other areas are new in EMI and hence do not require simplification.

  9. Simplification approach to detect urban areas vulnerable to flash floods using GIS: a case study Warsaw

    Science.gov (United States)

    Wicht, Marzena; Osińska-Skotak, Katarzyna

    2016-04-01

    The aim of this study is to develop a consistent methodology for determining urban areas that are particularly vulnerable to the effects of torrential rains. As a result of climate change, such rains are increasingly prevalent in the temperate climate, usually in spring and summer (mid-May to late August), and involve the risk of flash floods. In recent years, an increase in the incidence of such phenomena has been noticeable throughout Europe. It is assumed that, through the analysis of environmental and infrastructural conditions using the developed methodology, it is possible to determine areas vulnerable to flooding due to torrential rains. This may lead to better management, quicker response when an event occurs, and even measures to prevent the adverse effects of torrential rains (for instance, modernization of the urban drainage system and development of methods for disposing of rapidly accumulated water). Designating areas particularly vulnerable to the effects of heavy rains can be achieved by adapting hydrological models, but these require appropriate adjustment and highly accurate input data (spot or radar measurements of precipitation, land cover, soil type, humidity, wind speed, vegetation species in a given area, growing season, and the roughness, porosity and moisture of the cover and soil); such detailed data are generally hard to obtain or unavailable for less developed areas. It can also be achieved by performing spatial analysis in GIS, a more simplified form of modelling that gives results more quickly and whose methodology can be adapted to commonly available data. A case study of Warsaw's district Powiśle was undertaken for three epochs, from 2008 to 2010, and areas particularly vulnerable to the effects of flash floods and heavy rains were designated.

  10. [Study on simplification of extraction kinetics model and adaptability of total flavonoids model of Scutellariae radix].

    Science.gov (United States)

    Chen, Yang; Zhang, Jin; Ni, Jian; Dong, Xiao-Xu; Xu, Meng-Jie; Dou, Hao-Ran; Shen, Ming-Rui; Yang, Bo-Di; Fu, Jing

    2014-01-01

    Because of the irregular shapes of Chinese herbal pieces, we simplified the previously derived general extraction kinetics model for traditional Chinese medicines (TCMs) and folded the particle diameters of the herbs, which are hard to determine, into the final parameter "a". Removing the direct determination of particle diameters increases the accuracy of the model, expands its scope of application, and brings it closer to actual production conditions. A simplified model was thus established, with its corresponding experimental methods and data processing methods determined. With the total flavonoids of Scutellariae Radix as the determination index, we studied the adaptability of the model for total flavonoids extracted from Scutellariae Radix by water decoction. The results showed a good linear correlation among the natural logarithm of the mass concentration of total flavonoids, the time, and the natural logarithm of the solvent multiple. Through calculation and fitting, a kinetic model of extracting total flavonoids from Scutellariae Radix by water decoction was established and verified, with a good degree of fit and deviation within the range of industrial production requirements. This indicates that the model established by this method has good adaptability.

  11. Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?

    Science.gov (United States)

    Sprintsin, M.; Chen, J. M.

    2009-05-01

    The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have brought into question the assumptions behind this approach showing that big leaf approximation is inadequate for simulating canopy photosynthesis because of the additional leaf internal control on carbon assimilation and because of the non-linear response of photosynthesis on leaf nitrogen and absorbed light, and changes in leaf microenvironment with canopy depth. To avoid this problem a sunlit/shaded leaf separation approach, within which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy, for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); Lost Creek shrubland site (Wisconsin) and Mer Bleue petland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to canopy level with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf level parameters scaled up to canopy level by means of Leaf Area Index. The influence of sunlit

  12. About Bifurcational Parametric Simplification

    CERN Document Server

    Gol'dshtein, V; Yablonsky, G

    2015-01-01

    A concept of "critical" simplification was proposed by Yablonsky and Lazman in 1996 for the oxidation of carbon monoxide over a platinum catalyst using a Langmuir-Hinshelwood mechanism. The main observation was a simplification of the mechanism at ignition and extinction points. The critical simplification is an example of a much more general phenomenon that we call \\emph{a bifurcational parametric simplification}. Ignition and extinction points are points of equilibrium multiplicity bifurcations, i.e., they are points of a corresponding bifurcation set for parameters. Any bifurcation produces a dependence between system parameters. This is a mathematical explanation and/or justification of the "parametric simplification". It leads us to a conjecture that "maximal bifurcational parametric simplification" corresponds to the "maximal bifurcation complexity." This conjecture can have practical applications for experimental study, because at points of "maximal bifurcation complexity" the number of independent sys...

  13. Visual salience guided feature-aware shape simplification

    Institute of Scientific and Technical Information of China (English)

    Yong-wei MIAO; Fei-xia HU; Min-yan CHEN; Zhen LIU; Hua-hao SHOU

    2014-01-01

    In the area of 3D digital engineering and 3D digital geometry processing, shape simplification is an important task for reducing memory requirements and time complexity. By incorporating a content-aware visual salience measure of a polygonal mesh into the simplification operation, a novel feature-aware shape simplification approach is presented in this paper. Owing to the robust extraction of relief heights on highly detailed 3D meshes, our visual salience measure is defined by a center-surround operator on Gaussian-weighted relief heights in a scale-dependent manner. Guided by our visual salience map, the feature-aware shape simplification algorithm is performed by weighting the high-dimensional feature space quadric error metric of vertex pair contractions with the weight map derived from our visual salience map. The weighted quadric error metric is calculated in a six-dimensional feature space combining the position and normal information of mesh vertices. Experimental results demonstrate that our visual salience guided shape simplification scheme can adaptively and effectively re-sample the underlying models in a feature-aware manner, which accounts for the visually salient features of complex shapes and thus yields better visual fidelity.

  14. On the simplifications for the thermal modeling of tilting-pad journal bearings under thermoelastohydrodynamic regime

    DEFF Research Database (Denmark)

    Cerda Varela, Alejandro Javier; Fillon, Michel; Santos, Ilmar

    2012-01-01

    The relevance of accurately calculating the oil film temperature build-up when modeling tilting-pad journal bearings is well established within the literature on the subject. This work studies the feasibility of using a thermal model for the tilting-pad journal bearing which includes a simplified ...

  15. Modeling Attitude Variance in Small UAS’s for Acoustic Signature Simplification Using Experimental Design in a Hardware-in-the-Loop Simulation

    Science.gov (United States)

    2015-03-26

    [Only title-page fragments survive in this record: AFIT report number AFIT-ENS-MS-15-M-110, March 2015; Distribution Statement A, approved for public release, distribution unlimited.]

  16. Simplification and improvement of prediction model for elastic modulus of particulate reinforced metal matrix composite

    Institute of Scientific and Technical Information of China (English)

    WANG Wen-ming; PAN Fu-sheng; LU Yun; ZENG Su-min

    2006-01-01

    In this paper, we propose a five-zone model to predict the elastic modulus of particulate-reinforced metal matrix composites. We simplify the calculation by ignoring structural parameters, including particulate shape, arrangement pattern and dimensional variance mode, which have no obvious influence on the elastic modulus of a composite, and improve the precision of the method by stressing the interaction of the interfaces with the particulates and matrix of the composite. The five-zone model can reflect the effects of interface modulus on the elastic modulus of a composite, and it overcomes the limitations of the rigidity and flexibility mixed laws. The original idea of the five-zone model is to put forward the particulate/interface interactive zone and the matrix/interface interactive zone. By organically integrating the rigidity and flexibility mixed laws, the model can effectively predict the engineering elastic constants of a composite.
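
    The five-zone model itself is not reproduced in this record; for reference, the sketch below computes the two classical bounds the abstract says it integrates, the rigidity (Voigt) and flexibility (Reuss) mixed laws, with invented property values.

```python
# Classical mixture-law bounds for the elastic modulus of a composite.
def voigt(Em, Ep, vp):
    """Iso-strain 'rigidity' mixed law (upper bound)."""
    return (1.0 - vp) * Em + vp * Ep

def reuss(Em, Ep, vp):
    """Iso-stress 'flexibility' mixed law (lower bound)."""
    return 1.0 / ((1.0 - vp) / Em + vp / Ep)

Em, Ep, vp = 70.0, 400.0, 0.2   # matrix GPa, particle GPa, volume fraction
print(f"Voigt {voigt(Em, Ep, vp):.0f} GPa, Reuss {reuss(Em, Ep, vp):.1f} GPa")
```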

  17. Simplification of the tug-of-war model for cellular transport in cells

    CERN Document Server

    Zhang, Yunxin

    2010-01-01

    The transport of organelles and vesicles in living cells can be well described by the kinetic tug-of-war model advanced by Müller, Klumpp and Lipowsky, in which the cargo is attached to two motor species, kinesin and dynein, and the direction of motion is determined by the number of motors bound to the track. In recent work [Phys. Rev. E 79, 061918 (2009)], this model was studied by mean field theory, and it was found that the tug-of-war model usually has one, two, or three distinct stable stationary points. However, those results were mostly obtained by numerical calculation, since a detailed theoretical study of a two-dimensional nonlinear system is hard. In this paper, we carry out further detailed analysis of this model and try to establish more properties theoretically. First, the tug-of-war model is simplified to a one-dimensional equation. We then claim that the stationary points of the tug-of-war model correspond to the roots of the simplified equation, and the stable station...

  18. Simplification of high order polynomial calibration model for fringe projection profilometry

    Science.gov (United States)

    Yu, Liandong; Zhang, Wei; Li, Weishi; Pan, Chengliang; Xia, Haojie

    2016-10-01

    In fringe projection profilometry systems, high-order polynomial calibration models can be employed to improve accuracy. However, fitting a high-order polynomial model with least-squares algorithms is not stable. In this paper, a novel method is presented to analyze the significance of each polynomial term and simplify the high-order polynomial calibration model. Term significance is evaluated by comparing the loading vector elements of the first few principal components, obtained with principal component analysis, and trivial terms are identified and removed from the high-order polynomial calibration model. As a result, the high-order model is simplified with significant improvement in computational stability and little loss of reconstruction accuracy. An interesting finding is that some terms of order 0 and 1, as well as some high-order terms related to the image direction perpendicular to the phase change direction, are trivial terms for this specific problem. Experimental results are shown to validate the proposed method.
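
    A hedged sketch of the screening idea: build the design matrix of a high-order polynomial calibration model, run principal component analysis, and rank terms by their loadings on the first few components; the data and polynomial order are fabricated.

```python
# Rank polynomial terms by PCA loading magnitude to spot trivial terms.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(2)
uv = rng.uniform(-1.0, 1.0, size=(400, 2))      # image coordinates (u, v)
poly = PolynomialFeatures(degree=4, include_bias=True)
X = poly.fit_transform(uv)

# Drop the constant bias column before scaling (zero variance).
Z = StandardScaler().fit_transform(X[:, 1:])
pca = PCA(n_components=3).fit(Z)
scores = np.abs(pca.components_).sum(axis=0)    # loading magnitude per term
names = poly.get_feature_names_out(["u", "v"])[1:]
order = np.argsort(scores)[::-1]
print("most significant terms:", [names[i] for i in order[:5]])
```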

  19. Simplification and analysis of a model of social interaction in voting

    CERN Document Server

    Lafuerza, Luis F; Edmonds, Bruce; McKane, Alan J

    2015-01-01

    A recently proposed model of social interaction in voting is investigated by simplifying it down into a version that is more analytically tractable and which allows a mathematical analysis to be performed. This analysis clarifies the interplay of the different elements present in the system (social influence, heterogeneity and noise) and leads to a better understanding of its properties. The origin of a regime of bistability is identified. The insight gained in this way gives further intuition into the behaviour of the original model.

  20. Simplification and analysis of a model of social interaction in voting

    Science.gov (United States)

    Lafuerza, Luis F.; Dyson, Louise; Edmonds, Bruce; McKane, Alan J.

    2016-06-01

    A recently proposed model of social interaction in voting is investigated by simplifying it down into a version that is more analytically tractable and which allows a mathematical analysis to be performed. This analysis clarifies the interplay of the different elements present in the system - social influence, heterogeneity and noise - and leads to a better understanding of its properties. The origin of a regime of bistability is identified. The insight gained in this way gives further intuition into the behaviour of the original model.

  1. Impediments to predicting site response: Seismic property estimation and modeling simplifications

    Science.gov (United States)

    Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Guzina, B.B.

    2009-01-01

    We compare estimates of the empirical transfer function (ETF) to the plane SH-wave theoretical transfer function (TTF) within a laterally constant medium for invasive and noninvasive estimates of the seismic shear-wave slownesses at 13 Kiban-Kyoshin network stations throughout Japan. The difference between the ETF and either of the TTFs is substantially larger than the difference between the two TTFs computed from different estimates of the seismic properties. We show that the plane SH-wave TTF through a laterally homogeneous medium at vertical incidence inadequately models observed amplifications at most sites for both slowness estimates, obtained via downhole measurements and the spectral analysis of surface waves. Strategies to improve the predictions can be separated into two broad categories: improving the measurement of soil properties and improving the theory that maps the 1D soil profile onto spectral amplification. Using an example site where the 1D plane SH-wave formulation poorly predicts the ETF, we find a more satisfactory fit to the ETF by modeling the full wavefield and incorporating spatially correlated variability of the seismic properties. We conclude that our ability to model the observed site response transfer function is limited largely by the assumptions of the theoretical formulation rather than the uncertainty of the soil property estimates.
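
    For the simplest case the plane SH-wave TTF has a compact closed form; the sketch below evaluates the classical undamped one-layer-over-halfspace transfer function at vertical incidence, with generic property values rather than KiK-net profiles.

```python
# One-layer plane SH-wave theoretical transfer function, no damping.
import numpy as np

H, Vs1, rho1 = 30.0, 200.0, 1800.0      # layer thickness, velocity, density
Vs2, rho2 = 760.0, 2200.0               # halfspace properties (assumed)
alpha = (rho1 * Vs1) / (rho2 * Vs2)     # impedance ratio

f = np.linspace(0.1, 20.0, 500)
w = 2.0 * np.pi * f
ttf = 1.0 / np.sqrt(np.cos(w * H / Vs1) ** 2
                    + (alpha * np.sin(w * H / Vs1)) ** 2)

f0 = Vs1 / (4.0 * H)                    # fundamental resonance frequency
print(f"peak predicted near {f0:.2f} Hz, amplification ~ {1/alpha:.1f}")
```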

  2. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications; Storregional grundvattenmodellering - en kaenslighetsstudie av naagra utvalda konceptuella beskrivningar och foerenklingar

    Energy Technology Data Exchange (ETDEWEB)

    Ericsson, Lars O. (Lars O. Ericsson Consulting AB (Sweden)); Holmen, Johan (Golder Associates (Sweden))

    2010-12-15

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geoscientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.

  3. Influence of Model Simplifications Excitation Force in Surge for a Floating Foundation for Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Andersen, Morten Thøtt; Hindhede, Dennis; Lauridsen, Jimmy

    2015-01-01

    As offshore wind turbines move towards deeper and more distant sites, the concept of floating foundations is a potentially technically and economically attractive alternative to the traditional fixed foundations. Unlike the well-studied monopile, the geometry of a floating foundation is complex and thereby increases the difficulty of wave force determination, owing to limitations of the commonly used simplified methods. This paper deals with a physical model test of the hydrodynamic excitation force in surge on a fixed three-columned structure intended as a floating foundation for offshore wind ...

  4. Influence of Model Simplifications Excitation Force in Surge for a Floating Foundation for Offshore Wind Turbines

    Directory of Open Access Journals (Sweden)

    Morten Thøtt Andersen

    2015-04-01

    As offshore wind turbines move towards deeper and more distant sites, the concept of floating foundations is a potentially technically and economically attractive alternative to the traditional fixed foundations. Unlike the well-studied monopile, the geometry of a floating foundation is complex and thereby increases the difficulty of wave force determination, owing to limitations of the commonly used simplified methods. This paper deals with a physical model test of the hydrodynamic excitation force in surge on a fixed three-columned structure intended as a floating foundation for offshore wind turbines. The experiments were conducted in a wave basin at Aalborg University. The test results are compared with a Boundary Element Method code based on linear diffraction theory for different wave force regimes defined by the column diameter, wave heights and lengths. Furthermore, the study investigates the influence of incident wave direction and stabilizing heave plates. The structure can be divided into primary, secondary and tertiary parts, defined by the columns, heave plates and braces, to determine the excitation force in surge. The test results are in good agreement with the numerical computation for the primary parts only, which leads to simplified determination of peak frequencies and the corresponding dominant force regime.

  5. Equivalent Simplification Method of Micro-Grid

    Directory of Open Access Journals (Sweden)

    Cai Changchun

    2013-09-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network, proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are each reduced to equivalent models and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are simulated to test the equivalent model of the micro-grid. The simulation results show that the equivalent model is effective and that its dynamic response is similar to that of the detailed micro-grid model. The equivalent simplification method for the micro-grid network and distributed components is suitable for the study of micro-grids.

  6. Homotopic Polygonal Line Simplification

    DEFF Research Database (Denmark)

    Deleuran, Lasse Kosetski

    ... of the paths. For an input consisting of n paths with total size m, our algorithm improves the running time from O(n log^(1+ε) n + m log n) to O(n log^(1+ε) n + m), where ε > 0. - Heuristic algorithms are simplification algorithms where the reduction based on the complexity measure is not necessarily ...

  7. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update, but none until now of sufficient quality and generality for providing a true practical impact. We prove that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based ... the simplified checks take place before the execution of the update, so that only consistency-preserving updates are eventually given to the database. The extension to more expressive languages and the application of the framework to other contexts, such as data integration and concurrent database systems, are also ...

  8. Assessing the Impact of Canopy Structure Simplification in Common Multilayer Models on Irradiance Absorption Estimates of Measured and Virtually Created Fagus sylvatica (L.) Stands

    Directory of Open Access Journals (Sweden)

    Pol Coppin

    2009-11-01

    Multilayer canopy representations are the most common structural stand representations due to their simplicity. Implementation of recent advances in technology has allowed scientists to simulate geometrically explicit forest canopies. The effect of simplified representations of tree architecture (i.e., multilayer representations) of four Fagus sylvatica (L.) stands, each with a different LAI, on light absorption estimates was assessed in comparison with explicit 3D geometrical stands. The absorbed photosynthetic radiation at stand level was calculated. Subsequently, each geometrically explicit 3D stand was compared with three multilayer models representing horizontal, uniform, and planophile leaf angle distributions. The 3D stands were created either from in situ measured trees or from modelled trees generated with the AMAP plant growth software. The Physically Based Ray Tracer (PBRT) algorithm was used to simulate the irradiance absorbed by the detailed 3D architecture stands, while for the three multilayer representations the probability of light interception was simulated by applying the Beer-Lambert law. The irradiance inside the canopies was characterized as direct, diffuse and scattered irradiance. The irradiance absorbance of the stands was computed for eight angular sun configurations ranging from 10° (near nadir) up to 80° sun zenith angle. Furthermore, a leaf stratification analysis (the number and angular distribution of leaves per LAI layer inside a canopy) between the 3D stands and the multilayer representations was performed, indicating the amount of irradiance each leaf absorbs along with the percentage of sunlit and shaded leaves inside the canopy. The results reveal that a multilayer representation of a stand greatly overestimated the absorbed irradiance in an open canopy, while it provided a better approximation in the case of a closed canopy. Moreover, the actual stratification ...
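
    A minimal multilayer Beer-Lambert sketch of the kind of light attenuation used for the multilayer representations above; the layer count and parameters are illustrative.

```python
# Downward irradiance decays through N leaf layers; each layer absorbs
# the difference across it (Beer-Lambert attenuation).
import numpy as np

I0, k, LAI, nlayers = 1000.0, 0.6, 5.0, 10
dlai = LAI / nlayers
cum = np.arange(nlayers + 1) * dlai
I = I0 * np.exp(-k * cum)              # irradiance at each layer boundary
absorbed = I[:-1] - I[1:]              # per-layer absorption (W m-2)

print(np.round(absorbed, 1), f"total {absorbed.sum():.1f} of {I0}")
```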

  9. Impact of model structure simplifications on the performance of a distributed physically-based soil erosion model at the hillslope scale

    Science.gov (United States)

    Cea, Luis; Legoût, Cédric; Grangeon, Thomas; Nord, Guillaume

    2016-04-01

    In order to make the use of physically based soil erosion models affordable in field applications, it is often necessary to reduce the number of parameters or to adapt the calibration method to the available data sets. In this study we analyse how the performance and calibration of a distributed event-based soil erosion model at the hillslope scale are affected by different simplifications of the parameterisations used to compute the production of suspended sediment by rainfall and runoff. Six modelling scenarios of different complexity are used to evaluate the temporal variability of the sedimentograph at the outlet of a 60 m long cultivated hillslope. The six scenarios are calibrated within the GLUE framework in order to account for parameter uncertainty, and their performance is evaluated against experimental data registered during five storm events. The NSE, PBIAS and coverage performance ratios show that the sedimentary response of the hillslope, in terms of the mass flux of eroded soil, can be efficiently captured by a model structure including only two soil erodibility parameters, which control the rainfall and runoff production of suspended sediment. Increasing the number of parameters makes the calibration process more complex without noticeably increasing the predictive capability of the model.
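
    For reference, the two goodness-of-fit scores named above are simple to compute; a minimal Python sketch (the generic formulas, not code from the study):

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
            is no better than the mean of the observations."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pbias(obs, sim):
            """Percent bias; with this sign convention, positive values
            indicate underestimation by the model."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100.0 * np.sum(obs - sim) / np.sum(obs)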

  10. Web-based road 3D model simplification method considering constraints

    Institute of Scientific and Technical Information of China (English)

    蒲浩; 李伟; 赵海峰; 宋占峰

    2013-01-01

    Because road 3D models involve massive data and a large number of constrained boundaries, a web-based road 3D model simplification method considering constraints is put forward. The integrated road 3D model is built on the server beforehand. First, a half-edge collapse error metric that accounts for the many road constrained edges is proposed. Then the original road model is simplified as a whole on the server by half-edge collapse, and an operating hierarchy tree is built at the same time. Finally, remote view-dependent reconstruction criteria are established; according to these criteria, the minimum set of node data that must be transferred to the client is quickly selected in the operating hierarchy tree. Combined with a constrained-edge priority strategy, fast view-dependent reconstruction of the road 3D model is realized on the client. The results show that a high simplification rate can be obtained, the necessary constrained edges are retained during remote dynamic browsing, only a small amount of data needs to be transmitted over the network, and the requirements of remote real-time interactive road visualization are met.

  11. Effect of simplifications of bone and components inclination on the elastohydrodynamic lubrication modeling of metal-on-metal hip resurfacing prosthesis.

    Science.gov (United States)

    Meng, Qingen; Liu, Feng; Fisher, John; Jin, Zhongmin

    2013-05-01

    It is important to study the lubrication mechanism of metal-on-metal hip resurfacing prostheses in order to understand their overall tribological performance and thereby minimize wear particle generation. Previous elastohydrodynamic lubrication studies of metal-on-metal hip resurfacing prostheses neglected the effects of the orientations of the cup and head, and adopted simplified pelvic and femoral bone models. These simplifications may lead to unrealistic predictions. For the first time, an elastohydrodynamic lubrication model was developed and solved for a full metal-on-metal hip resurfacing arthroplasty. The effects of the component orientations and of realistic bones on the lubrication performance were investigated by comparing the full model with simplified models. It was found that the orientation of the head played a very important role in the prediction of the pressure distributions and film profiles of the prosthesis. Inclination of the hemispherical cup up to 45° had no appreciable effect on the lubrication performance. Moreover, the combined effect of the material properties and structures of the bones was negligible. Future studies should focus on higher inclination angles, smaller coverage angles and microseparation related to the occurrence of edge loading.

  12. Simplification of Wind Farm Model for Dynamic Simulation

    Institute of Scientific and Technical Information of China (English)

    黄梅; 万航羽

    2009-01-01

    The simplification of wind farm models for dynamic simulation is studied in this paper for induction generator (IG) farms and doubly fed induction generator (DFIG) farms. The rule for simplifying a wind farm dynamic model is that the aggregated wind turbines and generators have the same or similar operating points. According to the wake effect, the wind farm is divided into regions, and the wind turbine-generators within each region are merged into one to establish the simplified wind farm model. For different wind turbine-generator types and dynamic processes, such as wind fluctuation and short circuit faults, the validity of the simplified models is verified by comparing the detailed model in simulation with simplified models of varying degrees, and recommendations are given for applying the simplified models in dynamic simulation.

  13. A simplification of Cobelli's glucose-insulin model for type 1 diabetes mellitus and its FPGA implementation.

    Science.gov (United States)

    Li, Peng; Yu, Lei; Fang, Qiang; Lee, Shuenn-Yuh

    2016-10-01

    Cobelli's glucose-insulin model is the only computer simulator of glucose-insulin interactions accepted by the Food and Drug Administration as a substitute for animal trials. However, it consists of multiple differential equations that make it hard to implement on a hardware platform. In this investigation, Cobelli's model is simplified by the Padé approximant method and implemented on a field-programmable gate array based platform as a hardware model for predicting glucose changes in subjects with type 1 diabetes mellitus. Compared with the original Cobelli's model, the implemented hardware model provides a nearly perfect approximation in predicting glucose changes, with rather small root-mean-square and maximum errors. The RMSE results for 30 subjects show that the method for simplifying and implementing Cobelli's model has good robustness and applicability. The successful hardware implementation of Cobelli's model will promote wider adoption of this model as a substitute for animal trials, provide fast and reliable glucose and insulin estimation, and ultimately assist the further development of artificial pancreas systems.
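
    As a pointer to how a Padé approximant can stand in for a more expensive expression, a minimal Python sketch using scipy's generic routine on exp(x) (an illustrative function, not the Cobelli equations):

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of exp(x): 1 + x + x^2/2! + x^3/3! + x^4/4!
        taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

        # [2/2] Pade approximant: a ratio of two quadratics that matches the
        # Taylor series to 4th order but is cheaper than evaluating the
        # original function and better behaved than the truncated series.
        p, q = pade(taylor, 2)

        x = 1.5
        print(np.exp(x), p(x) / q(x))  # 4.4817 vs approx 4.43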

  14. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    OpenAIRE

    Ridi Ferdiana; Paulus Insap Santoso; Lukito Edi Nugroho; Ahmad Ashari

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of execution and the process model. There are many software estimation techniques that work sufficiently well in certain conditions or s...

  15. Simplification-driven automated partial evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, J.M.

    1992-11-21

    I describe an automated approach to partial evaluation based on simplification and implemented by program transformations. The approach emphasizes program algebra and relies on canonical forms and distributive laws to expose instances to which simplifications can be applied. I discuss some of the considerations that led to the design of this approach. This design discussion should be useful both in understanding the structure of the partial evaluation transformations, and as an example of how to approach the design of automated program transformations in general. This approach to partial evaluation has been applied to a number of practical examples of moderate complexity, including: the running example used in this paper, proving an identity for lists, and eliminating a virtual data structure from a specification of practical interest. The chief practical barrier to its wider application is the growth of the intermediate program text during partial evaluation. Despite this limitation, this approach has the virtues of being implemented, automated, and able to partially evaluate specifications containing implicit data, including some specifications of practical interest.
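
    To give a feel for simplification driven by distributive laws, here is a toy Python sketch that rewrites expression trees toward a sum-of-products canonical form (purely illustrative; the paper's transformations operate on full programs, not toy expressions):

        # Expressions as nested tuples: ('*', a, b), ('+', a, b), or atom strings.
        def distribute(e):
            """Rewrite x*(y+z) into x*y + x*z (and the mirrored case),
            recursing until no rule applies: a canonical-form pass in miniature."""
            if not isinstance(e, tuple):
                return e
            op, a, b = e
            a, b = distribute(a), distribute(b)
            if op == '*' and isinstance(b, tuple) and b[0] == '+':
                return distribute(('+', ('*', a, b[1]), ('*', a, b[2])))
            if op == '*' and isinstance(a, tuple) and a[0] == '+':
                return distribute(('+', ('*', a[1], b), ('*', a[2], b)))
            return (op, a, b)

        print(distribute(('*', 'x', ('+', 'y', 'z'))))
        # ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))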

  16. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    …this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...

  17. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of execution and the process model. Many software estimation techniques work sufficiently well only under certain conditions or at certain steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for teams that use the eXtreme Programming method in onsite or distributed development. To the writers' knowledge, this is the first estimation technique applied to an agile method, eXtreme Programming.

  18. Simplification and shift in cognition of political difference: applying the geometric modeling to the analysis of semantic similarity judgment.

    Science.gov (United States)

    Kato, Junko; Okada, Kensuke

    2011-01-01

    Perceiving differences by means of spatial analogies is intrinsic to human cognition. Multi-dimensional scaling (MDS) analysis based on Minkowski geometry has been used primarily on data from sensory similarity judgments, leaving judgments on abstract differences unanalyzed; indeed, analysts have failed to find appropriate experimental or real-life data in this regard. Our MDS analysis used survey data on political scientists' judgments of the similarities and differences between political positions expressed in terms of distance. Both distance smoothing and majorization techniques were applied to a three-way dataset of similarity judgments provided by at least seven experts on at least five parties' positions on at least seven policies (i.e., originally yielding 245 dimensions) to substantially reduce the risk of local minima. The analysis found two dimensions to be sufficient for mapping differences, and the city-block metric fit the data better than the Euclidean metric in all datasets obtained from 13 countries. Most city-block dimensions were highly correlated with the simplified criterion (i.e., the left-right ideology) for differences that is actually used in real politics. The isometry of the city-block and dominance metrics in two-dimensional space carries further implications. More specifically, individuals may pay attention to two dimensions (if represented in the city-block metric) or focus on a single dimension (if represented in the dominance metric) when judging differences between the same objects. Switching between metrics may be expected to occur during cognitive processing as frequently as the apparent discontinuities and shifts in human attention that may underlie changing judgments in real situations. Consequently, the results lend strong support to the validity of geometric models for representing an important social cognition, the perception of political differences, which is deeply rooted in human nature.
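
    To make the metric contrast concrete, a small Python sketch comparing city-block (L1) and Euclidean (L2) distances on invented party positions (purely illustrative coordinates):

        import numpy as np
        from scipy.spatial.distance import cdist

        # Toy positions of four parties in a 2D policy space (invented data).
        positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

        d_city = cdist(positions, positions, metric="cityblock")   # L1
        d_eucl = cdist(positions, positions, metric="euclidean")   # L2

        # Diagonal pairs differ: L1 gives 2.0 where L2 gives sqrt(2) = 1.414,
        # so judged dissimilarities can discriminate between the two geometries.
        print(d_city[0, 3], d_eucl[0, 3])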

  19. In-process modeling method applying blend feature simplification

    Institute of Scientific and Technical Information of China (English)

    唐健钧; 田锡天; 耿俊浩

    2013-01-01

    To build in-process models rapidly, an in-process modeling method is proposed that combines blend feature simplification with boundary extraction of machining features. The boundary of a machining feature is obtained by simplifying the blend features within it, and this boundary is then used to build the machining volume feature by rotation, sweeping, or stretching. A Boolean subtraction between the previous in-process model and the machining volume feature yields the current in-process model. When blend features are simplified, edge blends and vertex blends are distinguished, and the case of support surface loss is analyzed. The turning of a typical shaft part is used as an example to analyze how the dimensions of in-process models built from different machining feature boundaries change, and the effectiveness of the proposed method is verified by examples.

  20. Cuckoo Filter: Simplification and Analysis

    OpenAIRE

    Eppstein, David

    2016-01-01

    The cuckoo filter data structure of Fan, Andersen, Kaminsky, and Mitzenmacher (CoNEXT 2014) performs the same approximate set operations as a Bloom filter in less memory, with better locality of reference, and adds the ability to delete elements as well as to insert them. However, until now it has lacked theoretical guarantees on its performance. We describe a simplified version of the cuckoo filter using fewer hash function calls per query. With this simplification, we provide the first theo...

  1. A consistent positive association between landscape simplification and insecticide use across the Midwestern US from 1997 through 2012

    Science.gov (United States)

    Meehan, Timothy D.; Gratton, Claudio

    2015-11-01

    During 2007, counties across the Midwestern US with relatively high levels of landscape simplification (i.e., widespread replacement of seminatural habitats with cultivated crops) had relatively high crop-pest abundances which, in turn, were associated with relatively high insecticide application. These results suggested a positive relationship between landscape simplification and insecticide use, mediated by landscape effects on crop pests or their natural enemies. A follow-up study, in the same region but using different statistical methods, explored the relationship between landscape simplification and insecticide use between 1987 and 2007, and concluded that the relationship varied substantially in sign and strength across years. Here, we explore this relationship from 1997 through 2012, using a single dataset and two different analytical approaches. We demonstrate that, when using ordinary least squares (OLS) regression, the relationship between landscape simplification and insecticide use is, indeed, quite variable over time. However, the residuals from OLS models show strong spatial autocorrelation, indicating spatial structure in the data not accounted for by explanatory variables, and violating a standard assumption of OLS. When modeled using spatial regression techniques, relationships between landscape simplification and insecticide use were consistently positive between 1997 and 2012, and model fits were dramatically improved. We argue that spatial regression methods are more appropriate for these data, and conclude that there remains compelling correlative support for a link between landscape simplification and insecticide use in the Midwestern US. We discuss the limitations of inference from this and related studies, and suggest improved data collection campaigns for better understanding links between landscape structure, crop-pest pressure, and pest-management practices.
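
    The residual diagnostic described above is easy to reproduce; below is a minimal Python sketch of Moran's I computed on OLS residuals with k-nearest-neighbour weights (the generic formula; names such as `county_centroids` are illustrative):

        import numpy as np

        def morans_i(values, coords, k=8):
            """Moran's I with row-standardized k-nearest-neighbour weights.
            Values near +1 indicate strong positive spatial autocorrelation
            (e.g., in OLS residuals), violating the independence assumption."""
            values = np.asarray(values, float)
            z = values - values.mean()
            n = len(values)
            # Pairwise distances; each unit's neighbours are its k nearest.
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            w = np.zeros((n, n))
            for i in range(n):
                nearest = np.argsort(d[i])[1:k + 1]   # skip self (distance 0)
                w[i, nearest] = 1.0 / k               # row-standardized
            return (n / w.sum()) * (z @ w @ z) / (z @ z)

        # Usage: morans_i(ols_residuals, county_centroids)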

  2. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.

  3. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are reduced to their equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...

  4. Simplification of Fluid Force in Rotordynamic Model of Centrifugal Pumps

    Institute of Scientific and Technical Information of China (English)

    蒋爱华; 华宏星; 陈长盛; 李国平; 周璞; 章艺

    2014-01-01

    Proper simplification of the fluid force applied on the impeller can significantly raise the accuracy of computing the fluid-excited vibration of a centrifugal pump. In this paper, a rotordynamic model including four discs, three shaft sections and a pump base is built for the test workbench based on d'Alembert's principle. The fluid force on the impeller is then simplified in three ways: as 20% of the fluid weight in the impeller, as 40% of the fluid weight in the impeller, and as a concentrated force plus a torque obtained by CFD. Finally, a transient response analysis is carried out with the Newmark implicit algorithm. The results show that the base vibration excited by the fluid force during centrifugal pump operation can be effectively obtained by simplifying the fluid force on the impeller to a concentrated force and a torque, and that the acceleration and displacement amplitudes of the base vibration obtained in this way are much larger than those obtained by simplifying the fluid force to 20% or 40% of the fluid weight in the impeller. The amplitudes obtained with 40% of the fluid weight are in turn larger than those with 20%.

  5. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    When two or more databases are combined into a global one, integrity may be violated even when each database is consistent with its own local integrity constraints. Efficient methods for checking global integrity in data integration systems are called for: answers to queries can then be trusted, because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed, perhaps together with given a priori constraints on the combination, so that only a minimal number of tuples needs to be considered. Combination from scratch, integration of a new source, and absorption of local updates are dealt with for both the local-as-view and global-as-view approaches to data integration.

  6. Simplification Rules for Birdtrack Operators

    CERN Document Server

    Alcock-Zeilinger, Judith

    2016-01-01

    This paper derives a set of easy-to-use tools designed to simplify calculations with birdtrack operators comprised of symmetrizers and antisymmetrizers. In particular, we present cancellation rules allowing one to shorten the birdtrack expressions of operators, and propagation rules identifying the circumstances under which it is possible to propagate symmetrizers past antisymmetrizers and vice versa. We exhibit the power of these simplification rules by means of a short example in which we apply the tools derived in this paper to a typical operator that can be encountered in the representation theory of SU(N) over the product space $V^{\otimes m}$. These rules form the basis for the construction of compact Hermitian Young projection operators and their transition operators addressed in companion papers.
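
    For orientation, the flavour of such rules can be stated compactly; two standard identities of representation theory (general facts, not quotes from the paper) are:

    $$S^2 = S,\qquad A^2 = A,\qquad S\,A = A\,S = 0 \quad\text{when } S \text{ and } A \text{ overlap on two or more index lines},$$

    where $S$ denotes a symmetrizer and $A$ an antisymmetrizer over the indicated index sets.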

  7. Three-dimensional modeling of the cochlea by use of an arc fitting approach.

    Science.gov (United States)

    Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S

    2016-12-01

    A cochlea modeling approach is presented that allows for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner because the error is estimated prior to the actual generation, thus minimizing modeling time. The presented technique is therefore well suited for a wide range of applications, including finite element analyses where geometrical simplifications are often inevitable. The method is demonstrated on n=5 cochleae which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas, while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Because the volume generation is based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method proved to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
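
    As a sketch of the geometric primitive such an arc-fitting pipeline rests on, here is a least-squares circle fit in Python (the algebraic Kasa formulation, a generic method that is not necessarily the authors' exact procedure):

        import numpy as np

        def fit_circle(x, y):
            """Algebraic least-squares circle fit (Kasa method). Solves
            x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, from which the
            centre and radius of the best-fitting circle follow."""
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            cx, cy = -D / 2.0, -E / 2.0
            r = np.sqrt(cx ** 2 + cy ** 2 - F)
            return cx, cy, r

        # Noisy points on an arc of a unit circle centred at (2, 1):
        t = np.linspace(0.2, 2.0, 40)
        x = 2 + np.cos(t) + 0.01 * np.random.randn(t.size)
        y = 1 + np.sin(t) + 0.01 * np.random.randn(t.size)
        print(fit_circle(x, y))  # approx (2.0, 1.0, 1.0)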

  8. Simplifications and Idealizations in High School Physics in Mechanics: A Study of Slovenian Curriculum and Textbooks

    Science.gov (United States)

    Forjan, Matej; Sliško, Josip

    2014-01-01

    This article presents the results of an analysis of three Slovenian textbooks for high school physics, from the point of view of simplifications and idealizations in the field of mechanics. In modeling of physical systems, making simplifications and idealizations is important, since one ignores minor effects and focuses on the most important…

  9. Simplification of Entry Vector by TA Approach

    Institute of Scientific and Technical Information of China (English)

    殷宪伦; 王春涛; 孔祥翔; 杨永平; 胡向阳

    2012-01-01

    Gateway technology is a universal cloning approach that enables rapid cloning of DNA fragments into multiple Gateway-compatible destination vectors using λ phage site-specific recombination, eliminating the need for restriction enzymes and ligase. A drawback of using this system to make entry clones is that the required enzyme is expensive and slow to procure. To solve this problem, we created a TA cloning entry vector that contains a single T overhang at each 3' end through modification of pDONR207. The TA cloning approach allows entry clones to be constructed simply, economically and rapidly. Using Gateway T vectors prepared by this improved method, prokaryotic and eukaryotic expression vectors for the SOS2 gene were constructed. Prokaryotic expression and transient expression in Arabidopsis protoplasts showed that the SOS2 gene was well expressed in both prokaryotic and eukaryotic cells.

  10. Three-stage approach for dynamic traffic temporal-spatial model

    Institute of Scientific and Technical Information of China (English)

    陆化普; 孙智源; 屈闻聪

    2016-01-01

    In order to describe the characteristics of dynamic traffic flow and improve the robustness of its multiple applications, a dynamic traffic temporal-spatial model (DTTS) is established. A basic DTTS model is built that takes temporal correlation, spatial correlation and historical correlation into account, and a three-stage approach is put forward for its simplification and calibration. Through pre-selection of critical sections and critical times, the first stage reduces the number of variables in the basic DTTS model. In the second stage, variable coefficients are calibrated based on the simplified model and stepwise regression analysis. The third stage is aimed at dynamic noise estimation: the characteristics of the noise are summarized and an extreme learning machine is presented. A case study based on a real-world road network in Beijing, China, is carried out to test the efficiency and applicability of the proposed DTTS model and the three-stage approach.
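
    Since the third stage relies on an extreme learning machine, a minimal Python sketch of that model family may help (a generic ELM, not the paper's implementation):

        import numpy as np

        def train_elm(X, y, hidden=50, seed=0):
            """Extreme learning machine: a random, fixed hidden layer; only
            the output weights are fitted, via a single least-squares solve."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], hidden))
            b = rng.normal(size=hidden)
            H = np.tanh(X @ W + b)                    # random feature map
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return W, b, beta

        def predict_elm(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # Usage sketch: fit the dynamic noise against recent traffic features.
        # W, b, beta = train_elm(features, noise_residuals)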

  11. Monte Carlo modelling of diode detectors for small field MV photon dosimetry: detector model simplification and the sensitivity of correction factors to source parameterization.

    Science.gov (United States)

    Cranmer-Sargison, G; Weston, S; Evans, J A; Sidhu, N P; Thwaites, D I

    2012-08-21

    The goal of this work was to examine the use of simplified diode detector models within a recently proposed Monte Carlo (MC) based small field dosimetry formalism and to investigate the influence electron source parameterization has on MC calculated correction factors. BEAMnrc was used to model Varian 6 MV jaw-collimated square field sizes down to 0.5 cm. The IBA stereotactic field diode (SFD), PTW T60016 (shielded) and PTW T60017 (unshielded) diodes were modelled in DOSRZnrc, and isocentric output ratios $OR_{det,MC}^{f_{clin}}$ were calculated at depths of d = 1.5, 5.0 and 10.0 cm. Simplified detector models were then tested by evaluating the percent difference in $OR_{det,MC}^{f_{clin}}$ between the simplified and complete detector models. The influence of the active volume dimension on the simulated output ratio and response factor was also investigated. The sensitivity of each MC calculated replacement correction factor $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$, as a function of electron FWHM between 0.100 and 0.150 cm and energy between 5.5 and 6.5 MeV, was investigated for the same set of small field sizes using the simplified detector models. The SFD diode can be approximated simply as a silicon chip in water, the T60016 shielded diode can be modelled as a chip in water plus the entire shielding geometry, and the T60017 unshielded diode as a chip in water plus the filter plate located upstream. The detector-specific $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ required to correct output ratios measured using the SFD, T60016 and T60017 diode detectors are insensitive to incident electron energy between 5.5 and 6.5 MeV and spot size variation between FWHM = 0.100 and 0.150 cm. Three general conclusions come out of this work: (1) detector models can be simplified to produce $OR_{det,MC}^{f_{clin}}$ to within 1.0% of those calculated using the complete geometry, where typically not only the silicon chip but also any high density components close to the chip, such as scattering plates or shielding material, is necessary…

  12. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  13. Complexity and simplification in understanding recruitment in benthic populations

    KAUST Repository

    Pineda, Jesús

    2008-11-13

    Research of complex systems and problems, entities with many dependencies, is often reductionist. The reductionist approach splits systems or problems into different components, and then addresses these components one by one. This approach has been used in the study of recruitment and population dynamics of marine benthic (bottom-dwelling) species. Another approach examines benthic population dynamics by looking at a small set of processes. This approach is statistical or model-oriented. Simplified approaches identify "macroecological" patterns or attempt to identify and model the essential, "first-order" elements of the system. The complexity of the recruitment and population dynamics problems stems from the number of processes that can potentially influence benthic populations, including (1) larval pool dynamics, (2) larval transport, (3) settlement, and (4) post-settlement biotic and abiotic processes, and larval production. Moreover, these processes are non-linear, some interact, and they may operate on disparate scales. This contribution discusses reductionist and simplified approaches to study benthic recruitment and population dynamics of bottom-dwelling marine invertebrates. We first address complexity in two processes known to influence recruitment, larval transport, and post-settlement survival to reproduction, and discuss the difficulty in understanding recruitment by looking at relevant processes individually and in isolation. We then address the simplified approach, which reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often results in mistaken inferences, and processes and parameters used in some models can bias our view of processes influencing recruitment. We conclude with a discussion on how to reconcile complex and simplified approaches. Although it

  14. Quadratic Error Metric Mesh Simplification Algorithm Based on Discrete Curvature

    Directory of Open Access Journals (Sweden)

    Li Yao

    2015-01-01

    Complex and highly detailed polygon meshes have been adopted for model representation in many areas of computer graphics. Existing work has mainly focused on quadric error metric based approximation of complex models, which does not take the retention of important model details into account and may lead to visual degeneration. In this paper, we improve Garland and Heckbert's quadric error metric based algorithm by using discrete curvature to preserve more features during mesh simplification. Our experiments on various models show that the geometry and topology structure as well as the features of the original models are precisely retained by employing discrete curvature.
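
    For context, the quadric error metric that this algorithm extends assigns each vertex the accumulated squared distance to the planes of its incident triangles; a minimal Python sketch of the standard Garland-Heckbert quadric (without the discrete-curvature term the paper adds):

        import numpy as np

        def plane_quadric(a, b, c, d):
            """Quadric K = p p^T for the plane ax + by + cz + d = 0
            (assuming a unit normal)."""
            p = np.array([a, b, c, d], dtype=float)
            return np.outer(p, p)

        def vertex_error(Q, v):
            """Sum of squared distances of the point v = (x, y, z) to all
            planes accumulated in Q: the cost of placing a vertex here."""
            vh = np.append(v, 1.0)   # homogeneous coordinates
            return vh @ Q @ vh

        # Vertex near the planes z = 0 and x = 1:
        Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, -1)
        print(vertex_error(Q, np.array([0.9, 0.0, 0.1])))  # 0.1^2 + 0.1^2 = 0.02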

  15. A systemic approach for modeling biological evolution using Parallel DEVS.

    Science.gov (United States)

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. WORK SIMPLIFICATION FOR PRODUCTIVITY IMPROVEMENT A ...

    African Journals Online (AJOL)

    …press concerning the work simplification techniques state … and social development and its importance as a source of … data sheets as per the training given by the authors at site [5]. … As recorded in the 1993 E.C. budget year Annual…

  17. Simplification Study of FE Model for 1000kV AC Transmission Line Insulator String Voltage and Grading Ring Surface Electric Field Distribution Calculation

    Directory of Open Access Journals (Sweden)

    Guoli Wang

    2013-09-01

    The finite element model for calculating the voltage distribution along a 1000 kV ultra-high-voltage (UHV) AC transmission line porcelain insulator string and the electric field distribution on the grading ring surface is characterized by large size, complicated structure and multiple media. To ensure accuracy while improving computational efficiency, the relevant influencing factors should be considered so that the model can be simplified reasonably. A whole model and a simplified 3D finite element model of the UHV AC transmission line porcelain insulator string were built. The influencing factors, including the tower, phase conductors, hardware fittings, yoke plate and phase interaction, were considered in the analysis, and the rationality of the simplified model was validated. The comparison of results shows that a simplified model is feasible that uses three-phase bundled conductors of a certain length, a reasonably simplified tower, omits the hardware fittings and yoke plate, and contains only a single-phase insulator string. The simplified model can replace the whole model for analyzing the voltage distribution along the porcelain insulator string and the electric field distribution on the grading ring surface, and it reduces the computation scale and improves the efficiency of optimizing the insulator string and grading ring parameters.

  18. Strategy-Enhanced Interactive Proving and Arithmetic Simplification for PVS

    Science.gov (United States)

    diVito, Ben L.

    2003-01-01

    We describe an approach to strategy-based proving for improved interactive deduction in specialized domains. An experimental package of strategies (tactics) and support functions called Manip has been developed for PVS to reduce the tedium of arithmetic manipulation. Included are strategies aimed at algebraic simplification of real-valued expressions. A general deduction architecture is described in which domain-specific strategies, such as those for algebraic manipulation, are supported by more generic features, such as term-access techniques applicable in arbitrary settings. An extended expression language provides access to subterms within a sequent.

  19. Generation and simplification of software Markov chain usage model

    Institute of Scientific and Technical Information of China (English)

    冯俊池; 于磊; 刘洋

    2015-01-01

    To solve the state space explosion problem of Markov chain usage models in software reliability testing, techniques for generating and simplifying usage models based on UML models were studied. The messages exchanged between the software and its environment are derived from the sequence diagrams of the UML model, and the states of the usage model are derived from the stimulus and response messages, so that the usage model describes the usage of the software accurately. After analyzing the state space explosion problem, the concepts of equivalent states and redundant states are defined, an algorithm to simplify the state space is proposed, and the related theoretical proof is given. Finally, the effectiveness of the proposed method is verified by experiments.

  20. Influence of vocal tract geometry simplifications on the numerical simulation of vowel sounds.

    Science.gov (United States)

    Arnela, Marc; Dabbaghchian, Saeed; Blandin, Rémi; Guasch, Oriol; Engwall, Olov; Van Hirtum, Annemie; Pelorson, Xavier

    2016-09-01

    For many years, the vocal tract shape has been approximated by one-dimensional (1D) area functions to study the production of voice. More recently, 3D approaches allow one to deal with the complex 3D vocal tract, although area-based 3D geometries of circular cross-section are still in use. However, little is known about the influence of performing such a simplification, and some alternatives may exist between these two extreme options. To this aim, several vocal tract geometry simplifications for vowels [ɑ], [i], and [u] are investigated in this work. Six cases are considered, consisting of realistic, elliptical, and circular cross-sections interpolated through a bent or straight midline. For frequencies below 4-5 kHz, the influence of bending and cross-sectional shape has been found weak, while above these values simplified bent vocal tracts with realistic cross-sections are necessary to correctly emulate higher-order mode propagation. To perform this study, the finite element method (FEM) has been used. FEM results have also been compared to a 3D multimodal method and to a classical 1D frequency domain model.

  1. Simplification of the unified gas kinetic scheme

    Science.gov (United States)

    Chen, Songze; Guo, Zhaoli; Xu, Kun

    2016-08-01

    The unified gas kinetic scheme (UGKS) is an asymptotic preserving (AP) scheme for kinetic equations. It is superior for transition flow simulation and has been validated over the past years. However, compared with the well-known discrete ordinate method (DOM), a classical numerical method for solving kinetic equations, the UGKS needs more computational resources. In this study, we propose a simplification of the unified gas kinetic scheme. It has almost the same computational cost as the DOM, but predicts numerical results as accurate as those of the UGKS. In the simplified scheme, the numerical flux for the velocity distribution function and the numerical flux for the macroscopic conservative quantities are evaluated separately. The equilibrium part of the UGKS flux is calculated by an analytical solution instead of numerical quadrature in velocity space. The simplification is equivalent to a flux hybridization of the gas kinetic scheme for the Navier-Stokes (NS) equations and the conventional discrete ordinate method. Several simplification strategies are tested, through which we can identify the key ingredient of the Navier-Stokes asymptotic preserving property. Numerical tests show that, as long as the collision effect is built into the macroscopic numerical flux, the numerical scheme is Navier-Stokes asymptotic preserving, regardless of the accuracy of the microscopic numerical flux for the velocity distribution function.
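
    The equilibrium/kinetic flux split mentioned above originates in the integral solution of the BGK equation along a characteristic through the cell interface; schematically (a standard BGK relation, paraphrased rather than quoted from the paper):

    $$f(x_{i+1/2}, t) = \frac{1}{\tau}\int_{0}^{t} g\big(x_{i+1/2} - u(t-t'),\, t'\big)\, e^{-(t-t')/\tau}\, dt' \;+\; e^{-t/\tau}\, f_0\big(x_{i+1/2} - u t\big).$$

    The first term, driven by the equilibrium distribution $g$, is the part the simplified scheme evaluates analytically; the remaining free-transport term is handled as in the DOM.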

  2. Integrated model construction and simplification methods for Web 3D road

    Institute of Scientific and Technical Information of China (English)

    蒲浩; 李伟; 赵海峰

    2013-01-01

    In order to realize Web-based 3D visualization of road engineering, key technologies such as integrated road 3D model construction and constraint-aware simplification were studied. Based on constrained Delaunay triangulation theory, a road 3D model whose appearance and inner topological relationships form an integrated whole was created. A half-edge collapse error metric concerning road constrained edges was proposed; based on it, the original road model is simplified as a whole on the server by half-edge collapse, and an operating hierarchy tree is built to store the operation records. A view-dependent strategy was put forward in which constrained edges are refined preferentially and simplified last; combined with view-dependent reconstruction criteria, it reduces the amount of data that must be transmitted for visualization over the network and enables fast reconstruction of the road 3D model on the client. A system based on these methods has been developed and successfully applied to the Web-based construction management of several highways.

  3. Structural simplification of chemical reaction networks in partial steady states.

    Science.gov (United States)

    Madelaine, Guillaume; Lhoussaine, Cédric; Niehren, Joachim; Tonello, Elisa

    2016-11-01

    We study the structural simplification of chemical reaction networks with partial steady state semantics assuming that the concentrations of some but not all species are constant. We present a simplification rule that can eliminate intermediate species that are in partial steady state, while preserving the dynamics of all other species. Our simplification rule can be applied to general reaction networks with some but few restrictions on the possible kinetic laws. We can also simplify reaction networks subject to conservation laws. We prove that our simplification rule is correct when applied to a module of a reaction network, as long as the partial steady state is assumed with respect to the complete network. Michaelis-Menten's simplification rule for enzymatic reactions falls out as a special case. We have implemented an algorithm that applies our simplification rules repeatedly and applied it to reaction networks from systems biology.
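
    As a reminder of the special case mentioned last, eliminating the enzyme-substrate complex under a partial steady state yields the classical Michaelis-Menten rate law:

    $$E + S \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\; ES \;\overset{k_{2}}{\longrightarrow}\; E + P,\qquad \frac{d[ES]}{dt}\approx 0 \;\Longrightarrow\; v = \frac{k_{2}\,[E]_{0}\,[S]}{K_{M} + [S]},\quad K_{M}=\frac{k_{-1}+k_{2}}{k_{1}}.$$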

  4. Simplification of irreversible Markov chains by removal of states with fast leaving rates.

    Science.gov (United States)

    Jia, Chen

    2016-07-07

    In the recent work of Ullah et al. (2012a), the authors developed an effective method to simplify reversible Markov chains by removal of states with low equilibrium occupancies. In this paper, we extend this result to irreversible Markov chains. We show that an irreversible chain can be simplified by removal of states with fast leaving rates. Moreover, we reveal that the irreversibility of the chain will always decrease after model simplification. This suggests that although model simplification can retain almost all the dynamic information of the chain, it will lose some thermodynamic information as a trade-off. Examples from biology are also given to illustrate the main results of this paper.
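
    A hedged numerical sketch of this kind of state elimination for a continuous-time chain (the stochastic-complement construction; the paper's exact reduction may differ):

        import numpy as np

        # Generator matrix of a 3-state CTMC; state 2 has fast leaving rates.
        Q = np.array([[-1.0,  0.5,    0.5],
                      [ 0.2, -0.4,    0.2],
                      [50.0, 50.0, -100.0]])

        keep = [0, 1]   # slow states to retain
        drop = [2]      # fast state to eliminate

        A = Q[np.ix_(keep, keep)]
        B = Q[np.ix_(keep, drop)]
        C = Q[np.ix_(drop, keep)]
        D = Q[np.ix_(drop, drop)]

        # Reduced generator: transitions through the eliminated state are
        # folded into direct rates between the retained states.
        Q_red = A + B @ np.linalg.solve(-D, C)
        print(Q_red)    # rows still sum to 0, as a generator must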

  5. Generalized Topological Simplification of Scalar Fields on Surfaces.

    Science.gov (United States)

    Tierny, J; Pascucci, V

    2012-12-01

    We present a combinatorial algorithm for the general topological simplification of scalar fields on surfaces. Given a scalar field f, our algorithm generates a simplified field g that provably admits only critical points from a constrained subset of the singularities of f, while guaranteeing a small distance ||f - g||∞ for data-fitting purpose. In contrast to previous algorithms, our approach is oblivious to the strategy used for selecting features of interest and allows critical points to be removed arbitrarily. When topological persistence is used to select the features of interest, our algorithm produces a standard ϵ-simplification. Our approach is based on a new iterative algorithm for the constrained reconstruction of sub- and sur-level sets. Extensive experiments show that the number of iterations required for our algorithm to converge is rarely greater than 2 and never greater than 5, yielding O(n log(n)) practical time performances. The algorithm handles triangulated surfaces with or without boundary and is robust to the presence of multi-saddles in the input. It is simple to implement, fast in practice and more general than previous techniques. Practically, our approach allows a user to arbitrarily simplify the topology of an input function and robustly generate the corresponding simplified function. An appealing application area of our algorithm is in scalar field design since it enables, without any threshold parameter, the robust pruning of topological noise as selected by the user. This is needed for example to get rid of inaccuracies introduced by numerical solvers, thereby providing topological guarantees needed for certified geometry processing. Experiments show this ability to eliminate numerical noise as well as validate the time efficiency and accuracy of our algorithm. We provide a lightweight C++ implementation as supplemental material that can be used for topological cleaning on surface meshes.

  6. Multi-physics modelling approach for oscillatory microengines: application for a microStirling generator design

    Science.gov (United States)

    Formosa, F.; Fréchette, L. G.

    2015-12-01

    An electrical circuit equivalent (ECE) approach has been set up that allows elementary oscillatory microengine components to be modelled. The components cover gas channel/chamber thermodynamics, viscous and thermal effects, the mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale free-piston membrane Stirling engine [1]. We propose here new developments that take scaling effects into account to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.

  7. OPC mask simplification using over-designed timing slack of standard cells

    Science.gov (United States)

    Qu, Yifan; Heng, Chun Huat; Tay, Arthur; Lee, Tong Heng

    2013-05-01

    It is well known that VLSI circuits must be designed to sustain variations in process, voltage, temperature, etc. As a result, standard cell libraries (collections of the basic circuit components) are usually designed with a large margin (also known as "timing slack"). However, in circuit manufacturing, only part of the margin is utilized. Knowledge of the remaining margin (over-designed timing slack), combined with models that link the timing domain and the shape domain, can help reduce the complexity of mask patterns and the manufacturing cost. This paper proposes a novel methodology for simplifying mask patterns in optical proximity correction (OPC) by using over-designed timing slack. The methodology can be applied after a conventional OPC run and is compatible with the current application-specific integrated circuit (ASIC) design flow. The iterative method is applied to each occurrence of over-designed timing slack, whose actual value can be estimated from post-OPC simulation. A timing cost function is developed in this work to map timing slack in the timing domain to mask patterns in the shape domain, which enables us to adjust mask patterns selectively based on the outcome of the cost function. All related mask patterns with over-designed timing slack are annotated and simplified using our proposed mask simplification algorithm, which essentially merges nearby edge fragments on the mask patterns. Simulations are conducted on a standard cell library and a full chip design to validate the proposed approach. Compared with existing OPC methods without mask simplification in the literature, our approach achieved a 51% reduction in mask fragment count, which directly leads to a large saving in lithography manufacturing cost. The results also show that timing closure is ensured, though part of the timing slack has been sacrificed.

  8. A numerical experiment on tidal river simplification in simulation of tide dominated estuaries

    Science.gov (United States)

    Yin, X.; Jia, L.; Zhu, L.

    2017-04-01

    In numerical simulation of tide-dominated estuaries, introducing simplified tidal channels into the model in place of real rivers is one strategy for dealing with the lack of topographic data. To understand the effects of this simplification and their sensitivity to the simplifying parameters, a numerical experiment was conducted to test parameters such as channel length L, surface width B, bed slope S, bottom elevation ▽0, bed roughness n and run-off Qr. The results indicated that parameter values leading to a smaller tidal prism and greater flow resistance result in larger simulation errors. For a better simplification, the values of the parameters describing channel geometry, resistance and upstream inflow need to be as consistent as possible with the averages of the natural river. The simplification method made the computation stable and fast, saved storage space, and was adaptable to different time periods and seasons.

  9. 77 FR 66361 - Reserve Requirements of Depository Institutions: Reserves Simplification

    Science.gov (United States)

    2012-11-05

    ... AD 83 Reserve Requirements of Depository Institutions: Reserves Simplification AGENCY: Board of... Regulation D (Reserve Requirements of Depository Institutions) published in the Federal Register on April 12... simplifications related to the administration of reserve requirements: 1. Create a common two-week...

  10. Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation.

    Directory of Open Access Journals (Sweden)

    Simone D Langhans

    River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assessing the current ecological state and prioritizing river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments to river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific, depending on data and…

  11. Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation.

    Science.gov (United States)

    Langhans, Simone D; Lienert, Judit

    2016-01-01

    River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assessing the current ecological state and prioritizing river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments to river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific, depending on data and resource…
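
    To illustrate the aggregation distinction the experts cared about, a small Python sketch contrasting additive aggregation with a weighted-geometric multiplicative aggregation of normalized value scores (invented weights and scores, purely illustrative):

        import numpy as np

        def additive(values, weights):
            """Weighted arithmetic mean: poor scores can be compensated."""
            return np.dot(weights, values)

        def multiplicative(values, weights):
            """Weighted geometric mean: a near-zero score on any essential
            objective drags the overall value down, limiting compensation."""
            return np.prod(np.power(values, weights))

        v = np.array([0.9, 0.8, 0.05])   # one objective scores very badly
        w = np.array([0.4, 0.4, 0.2])    # weights sum to 1

        print(additive(v, w))        # 0.69 -> the bad score is compensated
        print(multiplicative(v, w))  # approx 0.48 -> the bad score dominates more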

  12. Quantum copying and simplification of the quantum Fourier transform

    Science.gov (United States)

    Niu, Chi-Sheng

    Theoretical studies of quantum computation and quantum information theory are presented in this thesis. Three topics are considered: simplification of the quantum Fourier transform in Shor's algorithm, optimal eavesdropping in the BB84 quantum cryptographic protocol, and quantum copying of one qubit. The quantum Fourier transform preceding the final measurement in Shor's algorithm is simplified by replacing a network of quantum gates with one that has fewer and simpler gates controlled by classical signals. This simplification results from an analysis of the network using the consistent history approach to quantum mechanics. The optimal amount of information which an eavesdropper can gain, for a given level of noise in the communication channel, is worked out for the BB84 quantum cryptographic protocol. The optimal eavesdropping strategy is expressed in terms of various quantum networks. A consistent history analysis of these networks using two conjugate quantum bases shows how the information gain in one basis influences the noise level in the conjugate basis. The no-cloning property of quantum systems, which is the physics behind quantum cryptography, is studied by considering copying machines that generate two imperfect copies of one qubit. The best qualities these copies can have are worked out with the help of the Bloch sphere representation for one qubit, and a quantum network is worked out for an optimal copying machine. If the copying machine does not have additional ancillary qubits, the copying process can be viewed using a 2-dimensional subspace in a product space of two qubits. A special representation of such a two-dimensional subspace makes possible a complete characterization of this type of copying. This characterization in turn leads to simplified eavesdropping strategies in the BB84 and the B92 quantum cryptographic protocols.

  13. The complexities of HIPAA and administration simplification.

    Science.gov (United States)

    Mozlin, R

    2000-11-01

    The Health Insurance Portability and Accountability Act (HIPAA) was signed into law in 1996. Although focused on information technology issues, HIPAA will ultimately impact day-to-day operations at multiple levels within any clinical setting. Optometrists must begin to familiarize themselves with HIPAA in order to prepare themselves to practice in a technology-enriched environment. Title II of HIPAA, entitled "Administrative Simplification," is intended to reduce the costs and administrative burden of healthcare by standardizing the electronic transmission of administrative and financial transactions. The Department of Health and Human Services is expected to publish the final rules and regulations that will govern HIPAA's implementation this year. The rules and regulations will cover three key aspects of healthcare delivery: electronic data interchange (EDI), security, and privacy. EDI will standardize the format for healthcare transactions. Health plans must accept and respond to all transactions in the EDI format. Security refers to policies and procedures that protect the accuracy and integrity of information and limit access. Privacy focuses on how the information is used and the disclosure of identifiable health information. Security and privacy regulations apply to all information that is maintained and transmitted in a digital format and require administrative, physical, and technical safeguards. HIPAA will force the healthcare industry to adopt an e-commerce paradigm and provide opportunities to improve patient care processes. Optometrists should take advantage of the opportunity to develop more efficient and profitable practices.

  14. A Data-Driven Point Cloud Simplification Framework for City-Scale Image-Based Localization.

    Science.gov (United States)

    Cheng, Wentao; Lin, Weisi; Zhang, Xinfeng; Goesele, Michael; Sun, Ming-Ting

    2017-01-01

    City-scale 3D point clouds reconstructed via structure-from-motion from large collections of Internet images are widely used in the image-based localization task to estimate a 6-DOF camera pose of a query image. Due to the prohibitive memory footprint of city-scale point clouds, image-based localization is difficult to implement on devices with limited memory resources. Point cloud simplification aims to select a subset of points that achieves localization performance comparable to that obtained with the original point cloud. In this paper, we propose a data-driven point cloud simplification framework by casting the problem as a weighted K-Cover problem, which mainly includes two complementary parts. First, a utility-based parameter determination method is proposed to select a reasonable parameter K for K-Cover-based approaches by evaluating the potential of a point cloud for establishing sufficient 2D-3D feature correspondences. Second, we formulate the 3D point cloud simplification problem as a weighted K-Cover problem, and propose an adaptive exponential weight function based on the visibility probability of 3D points. The experimental results on three popular datasets demonstrate that the proposed point cloud simplification framework outperforms state-of-the-art methods for the image-based localization application with a well-predicted parameter for the K-Cover problem.
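
    As an aside for implementers: weighted K-Cover selection of this kind can be approximated with the standard lazy-greedy heuristic for weighted coverage, sketched below. The data structures, the weight dictionary and the function name are hypothetical, and the paper's adaptive exponential weight function is only stood in for by a generic per-point weight; this is a sketch of the general technique, not the authors' implementation.

      # Greedy weighted K-cover sketch for point cloud simplification.
      # visibility: point id -> set of image ids observing that point.
      # weight: point id -> importance weight (stand-in for the paper's
      # visibility-probability-based weight). Selection stops once every
      # image is covered by at least K chosen points (or points run out).
      import heapq

      def k_cover_select(visibility, n_images, K, weight):
          need = [K] * n_images
          heap = [(-weight[p] * len(imgs), p) for p, imgs in visibility.items()]
          heapq.heapify(heap)
          chosen = []
          while heap and any(n > 0 for n in need):
              neg_gain, p = heapq.heappop(heap)
              gain = weight[p] * sum(1 for i in visibility[p] if need[i] > 0)
              if -neg_gain > gain + 1e-12:      # stale heap entry: refresh it
                  if gain > 0:
                      heapq.heappush(heap, (-gain, p))
                  continue
              if gain == 0:
                  continue
              chosen.append(p)                  # select point, update demand
              for i in visibility[p]:
                  if need[i] > 0:
                      need[i] -= 1
          return chosen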

  15. Organisational simplification and secondary complexity in health services for adults with learning disabilities.

    Science.gov (United States)

    Heyman, Bob; Swain, John; Gillman, Maureen

    2004-01-01

    This paper explores the role of complexity and simplification in the delivery of health care for adults with learning disabilities, drawing upon qualitative data obtained in a study carried out in NE England. It is argued that the requirement to manage complex health needs with limited resources causes service providers to simplify, standardise and routinise care. Simplified service models may work well enough for the majority of clients, but can impede recognition of the needs of those whose characteristics are not congruent with an adopted model. The data were analysed in relation to the core category, identified through thematic analysis, of secondary complexity arising from organisational simplification. Organisational simplification generates secondary complexity when operational routines designed to make health complexity manageable cannot accommodate the needs of non-standard service users. Associated themes, namely the social context of services, power and control, communication skills, expertise and service inclusiveness and evaluation are explored in relation to the core category. The concept of secondary complexity resulting from organisational simplification may partly explain seemingly irrational health service provider behaviour.

  16. Linguistic Simplification: A Promising Test Accommodation for LEP Students?

    Directory of Open Access Journals (Sweden)

    Charles W. Stansfield

    2002-07-01

    This article is a synopsis of an experimental study of the effects of linguistic simplification, a test accommodation designed for LEP students. Conducted as part of Delaware's statewide assessment program, this study examined the effects of linguistic simplification of fourth- and sixth-grade science test items and specifically looked at score comparability between LEP and non-LEP examinees.

  17. New technique for system simplification using Cuckoo search and ESA

    Indian Academy of Sciences (India)

    AFZAL SIKANDER; RAJENDRA PRASAD

    2017-09-01

    In this study, a new technique is suggested for the simplification of linear time-invariant systems. Motivated by optimization and various system simplification techniques available in the literature, the proposed technique is formulated using Cuckoo search in combination with Lévy flight and Eigen spectrum analysis. The efficacy and power of the new technique are illustrated by three benchmark systems considered from previously published work, and the results are compared in terms of performance indices.
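
    For readers unfamiliar with the ingredients, the sketch below generates the Lévy-flight steps that Cuckoo search uses to propose new candidate solutions, via Mantegna's algorithm. It is a generic illustration with the customary beta = 1.5, not the authors' code.

      # Mantegna's algorithm for Levy-flight step lengths, as commonly
      # combined with Cuckoo search. A new candidate is then proposed as
      # x_new = x + alpha * step for some problem-dependent scale alpha.
      import math
      import numpy as np

      def levy_step(beta=1.5, size=1, rng=np.random.default_rng()):
          num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
          den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
          sigma_u = (num / den) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, size)
          v = rng.normal(0.0, 1.0, size)
          return u / np.abs(v) ** (1 / beta)

      print(levy_step(size=5))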

  18. Interval Methods for Model Qualification: Methodology and Advanced Application

    OpenAIRE

    Alexandre dit Sandretto, Julien; Trombettoni, Gilles; Daney, David

    2012-01-01

    It is often too complex to use, and sometimes impossible to obtain, an actual model in the simulation or control field. To handle a system in practice, a simplification of the real model is then necessary. This simplification relies on hypotheses made about the system or the modeling approach. In this paper, we deal with all models that can be expressed by real-valued variables involved in analytical relations and depending on parameters. We propose a method that qualifies the simplificatio...

  19. Impact of pipes networks simplification on water hammer phenomenon

    Indian Academy of Sciences (India)

    Ali A M Gad; Hassan I Mohammed

    2014-10-01

    Simplification of water supply networks is an indispensable design step to make the original network easier to analyse. The impact of network simplification on the water hammer phenomenon is investigated. This study uses a two-loop network with different diameters, thicknesses, and roughness coefficients. The network is fed from a boundary head reservoir and loaded by either distributed or concentrated boundary water demands. According to both hydraulic and hydraulic-plus-water-quality equivalence, three simplification levels are performed. The effect of demand concentration on the transient flow is checked. The transient flow is initialized by either concentrated or distributed boundary demands which are suddenly shut off or released. WHAMO software is used for simulation. All scenarios showed that both hydraulic equivalence and demand concentration simplifications increase the transient pressure and flow rate. However, hydraulic-plus-water-quality equivalence simplification produces an adverse effect. Therefore, simplification of networks should be done carefully. Also, it was found that pump shut-off gives the same trend as valve shut-off or release.
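
    The order of magnitude of such transients can be checked against the Joukowsky relation, which the sketch below evaluates: an instantaneous shut-off converts a velocity change dv into a pressure rise dp = rho * a * dv. The numbers are purely illustrative and do not come from the study, which used WHAMO for the full simulation.

      # Back-of-envelope Joukowsky surge estimate (not a WHAMO run).
      rho = 1000.0   # water density, kg/m^3
      a = 1200.0     # pressure-wave speed, m/s (depends on pipe material)
      dv = 1.5       # arrested flow velocity, m/s
      dp = rho * a * dv                                       # pressure rise, Pa
      print(f"Joukowsky pressure rise: {dp / 1e5:.1f} bar")   # 18.0 bar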

  20. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
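
    In its simplest, unregularized form, the GGM construction amounts to thresholding partial correlations read off the inverse covariance (precision) matrix, as sketched below with synthetic data standing in for climate fields. A realistic gridded application has far more nodes than samples and would need regularization such as the graphical lasso; this is a minimal illustration, not the authors' pipeline.

      # Edges from partial correlations via the precision matrix.
      import numpy as np

      def partial_correlations(X):
          """X: (n_samples, n_nodes). Returns the matrix of partial
          correlations between node pairs, conditioned on all others."""
          prec = np.linalg.pinv(np.cov(X, rowvar=False))
          d = np.sqrt(np.diag(prec))
          pcorr = -prec / np.outer(d, d)
          np.fill_diagonal(pcorr, 1.0)
          return pcorr

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 10))        # synthetic "climate" data
      adjacency = np.abs(partial_correlations(X)) > 0.1
      np.fill_diagonal(adjacency, False)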

  1. A user-study measuring the effects of lexical simplification and coherence enhancement on perceived and actual text difficulty.

    Science.gov (United States)

    Leroy, Gondy; Kauchak, David; Mouradi, Obay

    2013-08-01

    Low patient health literacy has been associated with cost increases in medicine because it contributes to inadequate care. Providing explanatory text is a convenient approach to distribute medical information and increase health literacy. Unfortunately, writing text that is easily understood is challenging. This work tests two text features for their impact on understanding: lexical simplification and coherence enhancement. A user study was conducted to test the features' effect on perceived and actual text difficulty. Individual sentences were used to test perceived difficulty. Using a 5-point Likert scale, participants compared eight pairs of original and simplified sentences. Abstracts were used to test actual difficulty. For each abstract, four versions were created: original, lexically simplified, coherence enhanced, and lexically simplified and coherence enhanced. Using a mixed design, one group of participants worked with the original and lexically simplified documents (no coherence enhancement) while a second group worked with the coherence enhanced versions. Actual difficulty was measured using a Cloze measure and multiple-choice questions. Using Amazon's Mechanical Turk, 200 people participated, of which 187 qualified based on our data qualification tests. A paired-samples t-test for the sentence ratings showed a significant reduction in perceived difficulty after lexical simplification (p<.001). For the Cloze measure, there was an effect of lexical simplification, with the simplification leading to worse scores (p=.004). A follow-up ANOVA showed this effect exists only for function words when coherence was not enhanced (p=.008). In contrast, a two-way ANOVA for answering multiple-choice questions showed a significant beneficial effect of coherence enhancement (p=.003) but no effect of lexical simplification. Lexical simplification reduced the perceived difficulty of texts. Coherence enhancement reduced the actual difficulty of text when measured using multiple-choice questions. However, the Cloze measure results were mixed.

  2. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human problem solving...

  3. Simplification of arboreal marsupial assemblages in response to increasing urbanization.

    Directory of Open Access Journals (Sweden)

    Bronwyn Isaac

    Arboreal marsupials play an essential role in ecosystem function, including regulating insect and plant populations, facilitating pollen and seed dispersal, and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance-intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance-tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.

  4. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating ... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...

  5. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is presented. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  6. Moving Beyond Readability Metrics for Health-Related Text Simplification.

    Science.gov (United States)

    Kauchak, David; Leroy, Gondy

    2016-01-01

    Limited health literacy is a barrier to understanding health information. Simplifying text can reduce this barrier and possibly other known disparities in health. Unfortunately, few tools exist to simplify text with demonstrated impact on comprehension. By leveraging modern data sources integrated with natural language processing algorithms, we are developing the first semi-automated text simplification tool. We present two main contributions. First, we introduce our evidence-based development strategy for designing effective text simplification software and summarize initial, promising results. Second, we present a new study examining existing readability formulas, which are the most commonly used tools for text simplification in healthcare. We compare syllable count, the proxy for word difficulty used by most readability formulas, with our new metric 'term familiarity' and find that syllable count measures how difficult words 'appear' to be, but not their actual difficulty. In contrast, term familiarity can be used to measure actual difficulty.
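
    The contrast between the two proxies is easy to make concrete. In the sketch below, syllables are counted with a crude vowel-group heuristic and term familiarity is proxied by relative frequency in a reference corpus; the corpus counts are invented for illustration and the exact definition used in the paper may differ.

      # Two word-difficulty proxies: apparent difficulty (syllables)
      # versus familiarity (corpus frequency).
      import re

      def syllable_count(word):
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def term_familiarity(word, corpus_freq, corpus_size):
          return corpus_freq.get(word.lower(), 0) / corpus_size

      corpus_freq = {"heart": 90000, "myocardial": 400}   # hypothetical counts
      for w in ("heart", "myocardial"):
          print(w, syllable_count(w),
                term_familiarity(w, corpus_freq, 1000000))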

  7. Decomposition and Simplification of Multivariate Data using Pareto Sets.

    Science.gov (United States)

    Huettenberger, Lars; Heine, Christian; Garth, Christoph

    2014-12-01

    Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.

  8. Sentence Simplification Aids Protein-Protein Interaction Extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2010-01-01

    Accurate systems for extracting Protein-Protein Interactions (PPIs) automatically from biomedical articles can help accelerate biomedical research. Biomedical Informatics researchers are collaborating to provide metaservices and advance the state of the art in PPI extraction. One problem often neglected by current Natural Language Processing systems is the characteristic complexity of the sentences in biomedical literature. In this paper, we report on the impact that automatic simplification of sentences has on the performance of a state-of-the-art PPI extraction system, showing a substantial improvement in recall (8%) when the sentence simplification method is applied, without significant impact to precision.

  9. Projective Market Model Approach to AHP Decision-Making

    CERN Document Server

    Szczypinska, Anna

    2007-01-01

    In this paper we describe the market in the language of projective geometry and define a matrix of market rates, which is related to the matrix rate of return and the matrix of judgements in the Analytic Hierarchy Process (AHP). We use these observations to extend the AHP model to the projective geometry formalism and generalise it to the intransitive case. We give financial interpretations of such a generalised model in the Projective Model of Market (PMM) and propose its simplification. The unification of the AHP model and the projective aspect of portfolio theory suggests a wide spectrum of new applications for such an extended model.
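
    For context, the classical AHP step that such extensions generalize derives priority weights as the principal eigenvector of a reciprocal pairwise-judgement matrix. A minimal sketch with an illustrative 3x3 judgement matrix:

      # AHP priority vector and consistency index from a judgement matrix.
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])               # illustrative judgements

      vals, vecs = np.linalg.eig(A)
      k = np.argmax(vals.real)                      # principal eigenvalue
      w = np.abs(vecs[:, k].real)
      w /= w.sum()                                  # normalized priorities
      CI = (vals.real[k] - len(A)) / (len(A) - 1)   # consistency index
      print(w, CI)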

  10. Simplification of iron speciation in wine samples: a spectrophotometric approach.

    Science.gov (United States)

    López-López, José A; Albendín, Gemma; Arufe, María I; Mánuel-Vez, Manuel P

    2015-05-13

    A simple direct spectrophotometric method was developed for the analysis of Fe(II) and total Fe in wine samples. This method is based on the formation of an Fe(II) complex with 2,2'-dipyridylketone picolinoylhydrazone (DPKPH), which shows a maximum green-blue absorption (λ = 700 nm) at pH 4.9. Operating conditions for the batch procedure were investigated, including reagent concentration, buffer solutions, and wavelength. The tolerance limits of foreign ions and the sample matrix have also been evaluated. Limits of detection and quantification were 0.005 and 0.017 mg L(-1) of Fe(II), respectively, allowing its determination in real wine samples. Finally, the proposed method was used in the analysis of white, rosé, and red wines. Results were compared with the reference method of Commission Regulation (EEC) No. 2676/90 of September 1990 determining Community methods for the analysis of wines, confirming the reliability of the proposed method for Fe analysis in wine samples.
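
    The quantification step behind a spectrophotometric method of this kind is a linear Beer-Lambert calibration at the absorption maximum, fitted on standards and inverted for unknown samples. The standards and absorbances in the sketch below are illustrative, not the paper's data.

      # Linear calibration at 700 nm and inversion for an unknown sample.
      import numpy as np

      std_conc = np.array([0.0, 0.2, 0.5, 1.0, 2.0])      # mg/L Fe(II) standards
      std_abs = np.array([0.01, 0.09, 0.21, 0.41, 0.80])  # absorbance at 700 nm

      slope, intercept = np.polyfit(std_conc, std_abs, 1)
      sample_abs = 0.33
      conc = (sample_abs - intercept) / slope
      print(f"Fe(II) in sample: {conc:.2f} mg/L")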

  11. Hydraulic Modeling of Lock Approaches

    Science.gov (United States)

    2016-08-01

    ...The modification was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall. ... velocity magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two...

  12. LP Approach to Statistical Modeling

    OpenAIRE

    Mukhopadhyay, Subhadeep; Parzen, Emanuel

    2014-01-01

    We present an approach to statistical data modeling and exploratory data analysis called "LP Statistical Data Science." It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the "LP Statistical Algorithm" can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...

  13. Simplification in Graded Readers: Measuring the Authenticity of Graded Texts

    Science.gov (United States)

    Claridge, Gillian

    2005-01-01

    This study examines the characteristics and quality of simplification in graded readers as compared to those of "normal" authentic English. Two passages from graded readers are compared with the original passages. The comparison uses a computer programme, RANGE (Nation and Heatley, 2003) to analyse the distribution of high and low frequency words…

  14. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    When two or more databases are combined into a global one, integrity may be violated even when each database is consistent with its own local integrity constraints. Efficient methods for checking global integrity in data integration systems are called for: answers to queries can then be trusted. ... together with given a priori constraints on the combination, so that only a minimal number of tuples needs to be considered. Combination from scratch, integration of a new source, and absorption of local updates are dealt with for both the local-as-view and global-as-view approaches to data integration.

  15. Approaches to Modeling of Recrystallization

    Directory of Open Access Journals (Sweden)

    Håkan Hallberg

    2011-10-01

    Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.

  16. Elaboration and Simplification in Spanish Discourse

    Science.gov (United States)

    Granena, Gisela

    2008-01-01

    This article compares spoken discourse models in Spanish as a second language textbooks and online language learning resources with naturally occurring conversations. Telephone service encounters are analyzed from the point of view of three different dimensions of authenticity: linguistic, sociolinguistic, and psycholinguistic. An analysis of 20…

  17. Simplification of Methods for PET Radiopharmaceutical Syntheses

    Energy Technology Data Exchange (ETDEWEB)

    Kilbourn, Michael, R.

    2011-12-27

    In an attempt to develop simplified methods for the radiochemical synthesis of radiopharmaceuticals useful in Positron Emission Tomography (PET), currently available commercial automated synthesis apparatuses were evaluated for use with solid-phase synthesis, thin-film techniques, microwave-accelerated chemistry, and click chemistry approaches. Using combinations of these techniques, it was shown that these automated synthesis systems can be simply and effectively used to support the synthesis of a wide variety of carbon-11 and fluorine-18 labeled compounds, representing all of the major types of compounds synthesized and using all of the common radiochemical precursors available. These techniques can be used to deliver clinically useful amounts of PET radiopharmaceuticals with chemical and radiochemical purities and high specific activities suitable for human administration.

  18. A simplification of the unified gas kinetic scheme

    CERN Document Server

    Chen, Songze; Xu, Kun

    2016-01-01

    The unified gas kinetic scheme (UGKS) is an asymptotic preserving scheme for the kinetic equations. It is superior for transition flow simulations and has been validated in the past years. However, compared to the well-known discrete ordinate method (DOM), a classical numerical method for solving the kinetic equations, the UGKS needs more computational resources. In this study, we propose a simplification of the unified gas kinetic scheme. It has almost the same computational cost as the DOM, but predicts numerical results as accurate as those of the UGKS. Based on the observation that the equilibrium part of the UGKS fluxes can be evaluated analytically, the equilibrium part of the UGKS flux need not be discretized in velocity space. In the simplified scheme, the numerical flux for the velocity distribution function and the numerical flux for the macroscopic conservative quantities are evaluated separately. The simplification is equivalent to a flux hybridization of the gas kinetic scheme for the Navier-Stokes equations...

  19. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model-structure-error-induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, accounting for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of the parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
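
    A toy version of the idea can be written in a few dozen lines, shown below under strong assumptions: a deliberately incomplete one-parameter simulator, a squared-exponential covariance with fixed hyperparameters for the bias term, and plain random-walk Metropolis in place of DREAM. The Gaussian-process error model is marginalized into the likelihood, so the sampled parameter is no longer forced to compensate for the missing physics.

      # Calibration with an explicit model-error term (illustrative only).
      import numpy as np

      x = np.linspace(0.0, 1.0, 25)
      rng = np.random.default_rng(1)
      y_obs = 2.0 * x + 0.3 * np.sin(6.0 * x) + 0.05 * rng.standard_normal(25)

      def f(theta):                    # incomplete simulator: linear part only
          return theta * x

      def gp_cov(x, ell=0.2, s=0.3, noise=0.05):
          r = np.subtract.outer(x, x)
          return s**2 * np.exp(-0.5 * (r / ell)**2) + noise**2 * np.eye(len(x))

      K = gp_cov(x)
      Kinv = np.linalg.inv(K)
      logdetK = np.linalg.slogdet(K)[1]

      def log_post(theta):             # GP-marginalized likelihood + N(0,10) prior
          r = y_obs - f(theta)
          return -0.5 * (r @ Kinv @ r + logdetK) - 0.5 * theta**2 / 10.0

      theta, samples = 1.0, []
      for _ in range(5000):            # random-walk Metropolis
          prop = theta + 0.1 * rng.standard_normal()
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          samples.append(theta)
      print(np.mean(samples[1000:]))   # posterior mean of theta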

  20. Simplification of Training Data for Cross-Project Defect Prediction

    OpenAIRE

    He, Peng; Li, Bing; Zhang, Deguang; Ma, Yutao

    2014-01-01

    Cross-project defect prediction (CPDP) plays an important role in estimating the most likely defect-prone software components, especially for new or inactive projects. To the best of our knowledge, few prior studies provide explicit guidelines on how to select suitable training data of quality from a large number of public software repositories. In this paper, we have proposed a training data simplification method for practical CPDP in consideration of multiple levels of granularity and filtering...

  1. Stand management optimization – the role of simplifications

    Directory of Open Access Journals (Sweden)

    Timo Pukkala

    2014-02-01

    Background: Studies on optimal stand management often make simplifications or restrict the choice of treatments. Examples of simplifications are neglecting natural regeneration that appears on a plantation site, omitting advance regeneration in simulations, or restricting thinning treatments to low thinning (thinning from below). Methods: This study analyzed the impacts of such simplifications on the optimization results for Fennoscandian boreal forests. Management of pine and spruce plantations was optimized by gradually reducing the number of simplifying assumptions. Results: Forced low thinning, cleaning the plantation of the natural regeneration of mixed species and ignoring advance regeneration all had a major impact on optimization results. High thinning (thinning from above) resulted in higher NPV and longer rotation length than thinning from below. It was profitable to leave a mixed stand in the tending treatment of a young plantation. When advance regeneration was taken into account, it was profitable to increase the number of thinnings and postpone final felling. In the optimal management, both pine and spruce plantations were gradually converted into an uneven-aged mixture of spruce and birch. Conclusions: The results suggest that, with current management costs and timber price levels, it may be profitable to switch to continuous cover management on medium growing sites of Fennoscandian boreal forests.

  2. A Graph-Based Min-# and Error-Optimal Trajectory Simplification Algorithm and Its Extension towards Online Services

    Directory of Open Access Journals (Sweden)

    Fan Wu

    2017-01-01

    Trajectory simplification has become a research hotspot since it plays a significant role in the data preprocessing, storage, and visualization of many offline and online applications, such as online maps, mobile health applications, and location-based services. Traditional heuristic-based algorithms utilize a greedy strategy to reduce time cost, leading to high approximation error. An Optimal Trajectory Simplification Algorithm based on Graph Model (OPTTS) is proposed to obtain the optimal solution in this paper. Both the min-# and min-ε problems are solved by the construction and regeneration of a breadth-first spanning tree and a shortest-path search on the directed acyclic graph (DAG). Although the proposed OPTTS algorithm obtains optimal simplification results, it is difficult to apply in real-time services due to its high time cost. Thus, a new Online Trajectory Simplification Algorithm based on Directed Acyclic Graph (OLTS) is proposed to deal with trajectory streams. The algorithm dynamically constructs the breadth-first spanning tree, minimizing approximation error in real time with real-time output. Experimental results show that OPTTS reduces the global approximation error by 82% compared to classical heuristic methods, while OLTS reduces the error by 77% and is 32% faster than the traditional online algorithm. Both OPTTS and OLTS show leading and stable performance on different datasets.
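
    The graph formulation that optimal min-# simplification rests on can be sketched compactly, in the spirit of the classical Imai-Iri construction (the OPTTS specifics, such as spanning-tree regeneration and the min-ε search, are not reproduced here): an edge i -> j exists when the segment from point i to point j keeps all intermediate points within eps, and a breadth-first shortest path then gives the fewest segments.

      # Min-# trajectory simplification as a BFS shortest path on a DAG.
      import math
      from collections import deque

      def seg_dist(p, a, b):
          """Distance from 2D point p to segment a-b."""
          (px, py), (ax, ay), (bx, by) = p, a, b
          dx, dy = bx - ax, by - ay
          if dx == 0 and dy == 0:
              return math.hypot(px - ax, py - ay)
          t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
          t = max(0.0, min(1.0, t))
          return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

      def simplify_min_segments(pts, eps):
          n = len(pts)
          def ok(i, j):   # does segment (i, j) approximate the points between?
              return all(seg_dist(pts[k], pts[i], pts[j]) <= eps
                         for k in range(i + 1, j))
          parent, q = {0: None}, deque([0])
          while q:        # BFS = fewest hops from first to last vertex
              i = q.popleft()
              if i == n - 1:
                  break
              for j in range(n - 1, i, -1):
                  if j not in parent and ok(i, j):
                      parent[j] = i
                      q.append(j)
          path, v = [], n - 1
          while v is not None:
              path.append(v)
              v = parent[v]
          return [pts[i] for i in reversed(path)]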

  3. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    [Front-matter residue from the report; recoverable figure captions: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach; section heading: Numerical model.]

  4. Memory Insensitive Simplification for View-Dependent Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P

    2002-04-03

    We present an algorithm for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a level-of-detail hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time component relies on memory mapping to page in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The focus of this paper is on the out-of-core construction of a level-of-detail hierarchy; our framework is general enough to incorporate many different aspects of view-dependent rendering. We therefore emphasize the off-line phases of our method, and report on their theoretical and experimental memory and disk usage and execution time. Our results indicate on average one to two orders of magnitude improvement in processing speed over previous out-of-core methods. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.
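
    A compressed sketch of that coarsening strategy follows, with a uniform grid standing in for the octree and the remapping of faces omitted: a plane quadric is accumulated for every cluster touched by a triangle, and each cluster's representative vertex minimizes its accumulated quadric. This is an in-core illustration of the technique, not the paper's out-of-core pipeline.

      # Grid-based vertex clustering with quadric error metrics.
      import numpy as np
      from collections import defaultdict

      def cluster_simplify(V, F, cell=0.1):
          key = lambda v: tuple(np.floor(v / cell).astype(int))
          Q = defaultdict(lambda: np.zeros((4, 4)))
          for f in F:                          # accumulate plane quadrics
              a, b, c = V[f[0]], V[f[1]], V[f[2]]
              n = np.cross(b - a, c - a)
              norm = np.linalg.norm(n)
              if norm < 1e-12:
                  continue                     # skip degenerate triangles
              n /= norm
              p = np.append(n, -n @ a)         # plane [nx, ny, nz, d]
              q = np.outer(p, p)
              for vid in f:
                  Q[key(V[vid])] += q
          reps = {}
          for k, q in Q.items():
              A, b = q[:3, :3], q[:3, 3]
              try:                             # quadric-optimal position
                  reps[k] = np.linalg.solve(A, -b)
              except np.linalg.LinAlgError:    # flat quadric: use cell center
                  reps[k] = (np.array(k) + 0.5) * cell
          return reps                          # cluster -> representative vertex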

  5. Ecosystem simplification, biodiversity loss and plant virus emergence.

    Science.gov (United States)

    Roossinck, Marilyn J; García-Arenal, Fernando

    2015-02-01

    Plant viruses can emerge into crops from wild plant hosts, or conversely from domestic (crop) plants into wild hosts. Changes in ecosystems, including loss of biodiversity and increases in managed croplands, can impact the emergence of plant virus disease. Although data are limited, in general the loss of biodiversity is thought to contribute to disease emergence. More in-depth studies have been done for human viruses, but studies with plant viruses suggest similar patterns, and indicate that simplification of ecosystems through increased human management may increase the emergence of viral diseases in crops.

  6. Parameter simplification of Green-Ampt infiltration models and relationships between infiltration and soil physical parameters

    Institute of Scientific and Technical Information of China (English)

    刘姗姗; 白美健; 许迪; 李益农; 胡卫东

    2012-01-01

    Simplifying the form of the Green-Ampt infiltration model and reducing its number of parameters have important practical significance for the application of the model. Based on the derivation of the relationship between the average matric suction at the wetting front and the soil sorptivity of the Philip model, a simplified Green-Ampt infiltration model was proposed. Using field infiltration data observed on two loam fields of the 222 Corps in Xinjiang province, the relationships between the parameters of the simplified Green-Ampt model and soil physical parameters were analyzed, and quantitative empirical conversion functions were constructed. Results showed that the combined infiltration parameter A was logarithmically negatively correlated with initial water content (correlation coefficient 0.77), and exponentially negatively correlated with soil compaction and clay content (correlation coefficients 0.70 and 0.74, respectively). Saturated hydraulic conductivity Ks was exponentially negatively correlated with soil compaction and clay content (0.74 and 0.73, respectively). A and Ks were highly and moderately multiple-linearly correlated with initial water content, soil compaction and clay content (0.9 and 0.79, respectively). The study indicates that the simplified Green-Ampt infiltration model can describe the soil infiltration process with acceptable accuracy.
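
    For reference, the classical Green-Ampt relation that such simplified models start from couples cumulative infiltration F and time t through F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), with K the saturated conductivity, psi the wetting-front suction and dtheta the moisture deficit. The sketch below solves it by fixed-point iteration; the parameter values are illustrative, not those fitted in the paper.

      # Green-Ampt cumulative infiltration by fixed-point iteration.
      import math

      def green_ampt_F(t, K=1.0, psi=11.0, dtheta=0.3, tol=1e-9):
          """K in cm/h, psi in cm, t in h; returns F in cm."""
          pd = psi * dtheta
          F = K * t
          for _ in range(200):
              F_new = K * t + pd * math.log(1.0 + F / pd)
              if abs(F_new - F) < tol:
                  break
              F = F_new
          return F

      F2 = green_ampt_F(2.0)
      f2 = 1.0 * (1.0 + 11.0 * 0.3 / F2)   # infiltration rate f = K*(1 + pd/F)
      print(F2, f2)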

  7. Model Mapping Approach Based on Ontology Semantics

    Directory of Open Access Journals (Sweden)

    Jinkui Hou

    2013-09-01

    The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C program code. The application research shows that the approach provides theoretical guidance for the realization of model mapping and can thus effectively support model-driven software development.

  8. A novel approach to the dynamics of Szekeres dust models

    CERN Document Server

    Sussman, Roberto A

    2011-01-01

    We obtain an elegant and useful description of the dynamics of Szekeres dust models (in their full generality) by means of "quasi-local" scalar variables constructed by suitable integral distributions that can be interpreted as weighted proper volume averages of the local covariant scalars. In terms of these variables, the field equations and basic physical and geometric quantities are formally identical to their corresponding expressions in the spherically symmetric LTB dust models. Since we can map every Szekeres model to a unique LTB model, rigorous results valid for the latter models can be readily generalized to a non-spherical Szekeres geometry. The new variables lead naturally to an initial value formulation in which all scalars are expressed as scaling laws in terms of their values at an arbitrary initial space slice. These variables also yield a significant simplification of numerical work, since the fluid flow evolution equations become a system of autonomous ordinary differential equations subject...

  9. Dynamics Modeling and Simplification of a 6-UPS Parallel Multi-dimensional Loading Device

    Institute of Scientific and Technical Information of China (English)

    刘少欣; 王丹; 陈五一

    2014-01-01

    Using the Kane equation, the dynamic characteristics of the 6-UPS parallel multi-dimensional loading device were analyzed and a dynamic mathematical model was established. For the different motion parameters of the loading device, the effects of the gravity, inertial force and Coriolis force terms in the dynamic model on the generalized force output of the system were analyzed by simulation under different acceleration and velocity conditions. The results show that when the velocity or acceleration of the moving platform is within the working limits, the effect of the branch inertia forces and Coriolis forces on the system is smaller than 2% and can be omitted, while the effect of gravity is greater than 2% and cannot. A simplified system dynamic model is obtained according to these results, which provides a theoretical basis for the control system of the parallel device.

  10. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability and identifiability in the limit.

  11. Studies on noise prediction model and its simplification for converter transformers

    Institute of Scientific and Technical Information of China (English)

    阮学云; 李志远; 魏浩征; 黄莹

    2011-01-01

    As the main noise source of a high-voltage direct current (HVDC) converter station, the converter transformer's noise prediction precision and the choice of noise control methods directly influence the overall noise prediction and mitigation results for the converter station. Through systematic studies of the noise generation mechanism, the noise spectrum and common control measures, the paper highlights the BOX-IN noise control technology for converter transformers, and simplifies and validates the noise prediction model for the BOX-IN enclosure. Field tests of the noise reduction show that the BOX-IN enclosure achieves a reduction of about 20 dB(A), close to the theoretical value, which provides a theoretical basis for further improving the noise prediction precision of HVDC converter stations.

  12. A New Algorithm for Cartographic Simplification of Streams and Lakes Using Deviation Angles and Error Bands

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-10-01

    Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer's radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).
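
    The deviation-angle test at the core of such an algorithm fits in a few lines, sketched below; the error-band accuracy check of the paper is omitted, so this retains only the characteristic-vertex idea rather than the full method.

      # Keep vertices whose heading change exceeds a threshold.
      import math

      def deviation_angle(a, b, c):
          """Direction change (radians) of the path a -> b -> c."""
          h1 = math.atan2(b[1] - a[1], b[0] - a[0])
          h2 = math.atan2(c[1] - b[1], c[0] - b[0])
          d = abs(h2 - h1)
          return min(d, 2 * math.pi - d)

      def simplify_line(pts, thresh=math.radians(15)):
          keep = [pts[0]]
          for i in range(1, len(pts) - 1):
              if deviation_angle(pts[i - 1], pts[i], pts[i + 1]) >= thresh:
                  keep.append(pts[i])          # characteristic vertex
          keep.append(pts[-1])
          return keep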

  13. Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes

    Institute of Scientific and Technical Information of China (English)

    Muhammad Hussain

    2009-01-01

    Two simplification algorithms are proposed for the automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. To set the priority value of each vertex, exploiting the normal field of its one-ring neighborhood, we introduce a new measure of geometric fidelity that reflects well the local geometric features of the vertex. After a vertex is selected, other measures of geometric distortion, based on normal field deviation and a distance measure, decide which of the edges incident on the vertex is to be collapsed to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs of better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.

  14. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning methods...

  15. Geometrical approach to fluid models

    NARCIS (Netherlands)

    Kuvshinov, B. N.; Schep, T. J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects.


  17. Model based feature fusion approach

    NARCIS (Netherlands)

    Schwering, P.B.W.

    2001-01-01

    In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages of using certain techniques, until now there has been no si...

  18. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  19. A POMDP approach to Affective Dialogue Modeling

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.

    2007-01-01

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's...

  20. The chronic diseases modelling approach

    NARCIS (Netherlands)

    Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM

    1998-01-01

    A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s...

  1. SIMPLIFICATION IN CHILD LANGUAGE IN BAHASA INDONESIA: A CASE STUDY ON FILIP

    Directory of Open Access Journals (Sweden)

    Julia Eka Rini

    2000-01-01

    This article aims at giving examples of characteristics of simplification in Bahasa Indonesia and proving that child language has a pattern and that there is a process in learning. Since this is a case study, it might not be enough to say that simplification is universal for all children of any mother tongue, but at least there is proof that such patterns of simplification also occur in Bahasa Indonesia.


  3. Ecosystem models are by definition simplifications of the real ...

    African Journals Online (AJOL)


    ...have few generations, but they appear at opportunistic times in the plankton. In contrast ... Barents Sea data show that biomass starts to increase at the latest around mid ... (1998) attributed delayed diatom blooms in the St Lawrence Estuary.

  4. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  5. Iterative ramp sharpening for structure signature-preserving simplification of images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo A [Los Alamos National Laboratory; Soille, Pierre [EC-JRC

    2010-01-01

    In this paper, we present a simple heuristic ramp-sharpening algorithm that achieves local contrast enhancement of vector-valued images. The proposed algorithm performs a local comparison of intensity values as well as gradient strength and directional information derived from the gradient structure tensor, so that the sharpening is applied only to pixels found on the ramps around true edges. This way, the contrast between objects and regions separated by a ramp is enhanced correspondingly, avoiding ringing artefacts. We find that applying this technique iteratively to blurred imagery produces sharpening that preserves both the structure and the signature of the image. The final approach reaches a good compromise between complexity and effectiveness for image simplification, enhancing image details efficiently while maintaining the overall image appearance.
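
    The directional information mentioned above is typically read off the smoothed gradient structure tensor. The sketch below computes its coherence measure, which approaches 1 on the ramps around true edges and 0 in flat or isotropically noisy regions; the smoothing width is illustrative and this is a generic building block, not the authors' exact criterion.

      # Coherence of the smoothed gradient structure tensor.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def structure_tensor_coherence(img, sigma=2.0):
          Iy, Ix = np.gradient(img.astype(float))
          Jxx = gaussian_filter(Ix * Ix, sigma)
          Jxy = gaussian_filter(Ix * Iy, sigma)
          Jyy = gaussian_filter(Iy * Iy, sigma)
          tr = Jxx + Jyy
          det = Jxx * Jyy - Jxy ** 2
          disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
          l1, l2 = tr / 2 + disc, tr / 2 - disc     # eigenvalues, l1 >= l2
          return (l1 - l2) / (l1 + l2 + 1e-12)      # 1 = coherent edge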

  6. A simplification of the fractional Hartley transform applied to image security system in phase

    Science.gov (United States)

    Jimenez, Carlos J.; Vilardy, Juan M.; Perez, Ronal

    2017-01-01

    In this work we develop a new encryption system for encoded image in phase using the fractional Hartley transform (FrHT), truncation operations and random phase masks (RPMs). We introduce a simplification of the FrHT with the purpose of computing this transform in an efficient and fast way. The security of the encryption system is increased by using nonlinear operations, such as the phase encoding and the truncation operations. The image to encrypt (original image) is encoded in phase and the truncation operations applied in the encryption-decryption system are the amplitude and phase truncations. The encrypted image is protected by six keys, which are the two fractional orders of the FrHTs, the two RPMs and the two pseudorandom code images generated by the amplitude and phase truncation operations. All these keys have to be correct for a proper recovery of the original image in the decryption system. We present digital results that confirm our approach.
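
For orientation, a minimal sketch of the ordinary (integer-order) discrete Hartley transform in Python may help; the fractional-order generalization and the simplification proposed in the paper are not reproduced here. It uses the standard identity that the cas kernel equals the real part minus the imaginary part of the DFT kernel:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N),
    cas(t) = cos(t) + sin(t), obtained from the DFT as Re(F) - Im(F)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def idht(X):
    """Up to the factor 1/N, the DHT is its own inverse."""
    return dht(X) / len(X)
```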

  7. Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text

    CERN Document Server

    Jonnalagadda, Siddhartha; Hakenberg, Jorg; Baral, Chitta; Gonzalez, Graciela

    2010-01-01

    The complexity of sentences characteristic to biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.

  8. Szekeres models: a covariant approach

    CERN Document Server

    Apostolopoulos, Pantelis S

    2016-01-01

We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly, satisfying a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  9. Matrix Model Approach to Cosmology

    CERN Document Server

    Chaney, A; Stern, A

    2015-01-01

    We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...

  10. A new approach to adaptive data models

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2016-12-01

Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of the data we collect, store and process. We are now aware of the increasing demand for real-time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach to managing business processes can help organizations adapt on the fly and build dynamic capabilities to react in a dynamic environment.

  11. The femur as a musculo-skeletal construct: a free boundary condition modelling approach.

    Science.gov (United States)

    Phillips, A T M

    2009-07-01

Previous finite element studies of the femur have made simplifications to varying extents with regard to the boundary conditions used during analysis. Fixed boundary conditions are generally applied to the distal femur when examining the proximal behaviour at the hip joint, while the same can be said for the proximal femur when examining the distal behaviour at the knee joint. While fixed boundary condition analyses have been validated against in vitro experiments, it remains a matter of debate whether the numerical and experimental models are indicative of the in vivo situation. This study presents a finite element model in which the femur is treated as a complete musculo-skeletal construct, spanning between the hip and knee joints. Linear and non-linear implementations of a free boundary condition modelling approach are applied to the bone through the explicit inclusion of muscles and ligaments spanning both the hip joint and the knee joint. A non-linear, force-regulated, muscle-strain-based activation strategy was found to result in lower observed principal strains in the cortex of the femur, compared to a linear activation strategy. The non-linear implementation of the model in particular was found to produce hip and knee joint reaction forces consistent with in vivo data from instrumented implants.

  12. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth

  13. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
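
As a toy illustration of the discrete end of the modelling spectrum described above (not the Finite State Linear Model itself), here is a minimal synchronous Boolean network update in Python, with hypothetical three-gene logic:

```python
def step(state, rules):
    """One synchronous update of a Boolean network."""
    return {gene: rule(state) for gene, rule in rules.items()}

# Hypothetical logic: B represses A, B requires A and C, C is constant.
rules = {
    "A": lambda s: not s["B"],
    "B": lambda s: s["A"] and s["C"],
    "C": lambda s: s["C"],
}
state = {"A": True, "B": False, "C": True}
for _ in range(4):
    state = step(state, rules)
    print(state)
```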

  14. Model Oriented Approach for Industrial Software Development

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2015-01-01

Full Text Available The article considers the specifics of a model-oriented approach to software development based on the usage of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. Benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to industrial software systems development. These systems are characterized by the different levels of abstraction used in the modeling and code development phases. The approach makes it possible to detail the model down to the level of the system code while preserving the verified model semantics, and provides checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into the data structures used in the detailed system implementation are presented. The grammar of a language for specifying rules for transforming abstract model data structures into the real system's detailed data structures is also described. The results of applying the proposed method in industrial technology are shown. The article is published in the authors' wording.

  15. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  16. THE METHOD OF GRAPHIC SIMPLIFICATION OF AREA FEATURE BOUNDARY WITH RIGHT ANGLES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

Some rules for the simplification of area feature boundaries and the method of acquiring spatial knowledge, such as maintaining the area and shape of an area feature, are discussed. This paper focuses on the progressive method of graphic simplification of area feature boundaries with right angles, based on their characteristics.

  17. A Non-linear Eulerian Approach for Assessment of Health-cost Externalities of Air Pollution

    DEFF Research Database (Denmark)

    Andersen, Mikael Skou; Frohn, Lise Marie; Nielsen, Jytte Seested

Integrated assessment models, which are used in Europe to account for the external costs of air pollution as a support for policy-making and cost-benefit analysis, have, in order to cope with complexity, resorted to simplifications of the non-linear dynamics of atmospheric sciences. In this paper we explore the possible significance of such simplifications by reviewing the improvements that result from applying a state-of-the-art atmospheric model for regional transport and non-linear chemical transformations of air pollutants to the impact-pathway approach of the ExternE method. The more rigorous...

  18. Simplification of the CBS-QB3 method for predicting gas-phase deprotonation free energies

    Science.gov (United States)

    Casasnovas, Rodrigo; Frau, Juan; Ortega-Castro, Joaquín; Salvà, Antoni; Donoso, Josefa; Muñoz, Francisco

Simplified versions of the CBS-QB3 model chemistry were used to calculate the free energies of 36 deprotonation reactions in the gas phase. The best such version, S9, excluded the coupled cluster calculation [CCSD(T)] and the empirical (ΔEemp) and spin-orbit (ΔEint) correction terms. The mean absolute deviation and root mean square thus obtained (viz. 1.24 and 1.56 kcal/mol, respectively) were very close to those provided by the original CBS-QB3 method (1.19 and 1.52 kcal/mol, respectively). The high accuracy of the proposed simplification and its computational expeditiousness make it an excellent choice for energy calculations on gas-phase deprotonation reactions in complex systems.
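
The two error statistics quoted above are straightforward to reproduce; here is a minimal Python sketch, with hypothetical input arrays of computed and reference free energies in kcal/mol:

```python
import numpy as np

def error_stats(computed, reference):
    """Mean absolute deviation and root mean square of computed minus
    reference free energies (kcal/mol), the two figures quoted above."""
    d = np.asarray(computed, float) - np.asarray(reference, float)
    return np.mean(np.abs(d)), np.sqrt(np.mean(d ** 2))
```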

  19. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...

  20. Modeling diffuse pollution with a distributed approach.

    Science.gov (United States)

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
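
The group response unit idea (runoff and loads computed per land-cover class, weighted by area, then aggregated before routing) can be sketched in a few lines of Python; the dictionary fields below are hypothetical, not WATFLOOD's actual data structures:

```python
def unit_outflow(classes):
    """Area-weighted aggregation over land-cover classes within one group
    response unit, prior to downstream routing."""
    total = sum(c["area"] for c in classes)
    runoff = sum(c["area"] * c["runoff"] for c in classes) / total
    sediment = sum(c["area"] * c["sediment"] for c in classes) / total
    return runoff, sediment

print(unit_outflow([
    {"area": 4.0, "runoff": 1.2, "sediment": 0.3},
    {"area": 6.0, "runoff": 0.7, "sediment": 0.9},
]))
```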

  1. MODULAR APPROACH WITH ROUGH DECISION MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed T. Shawky

    2012-09-01

Full Text Available Decision models which adopt rough set theory have been used effectively in many real-world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets; then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way of extracting decision rules from large-size information tables using rough decision models.

  2. Modular Approach with Rough Decision Models

    Directory of Open Access Journals (Sweden)

    Ahmed T. Shawky

    2012-10-01

Full Text Available Decision models which adopt rough set theory have been used effectively in many real-world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets; then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way of extracting decision rules from large-size information tables using rough decision models.

  3. Modeling approach suitable for energy system

    Energy Technology Data Exchange (ETDEWEB)

    Goetschel, D. V.

    1979-01-01

Recently, increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion system modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.

  4. Effects on Text Simplification: Evaluation of Splitting Up Noun Phrases.

    Science.gov (United States)

    Leroy, Gondy; Kauchak, David; Hogue, Alan

    2016-01-01

    To help increase health literacy, we are developing a text simplification tool that creates more accessible patient education materials. Tool development is guided by a data-driven feature analysis comparing simple and difficult text. In the present study, we focus on the common advice to split long noun phrases. Our previous corpus analysis showed that easier texts contained shorter noun phrases. Subsequently, we conducted a user study to measure the difficulty of sentences containing noun phrases of different lengths (2-gram, 3-gram, and 4-gram); noun phrases of different conditions (split or not); and, to simulate unknown terms, pseudowords (present or not). We gathered 35 evaluations for 30 sentences in each condition (3 × 2 × 2 conditions) on Amazon's Mechanical Turk (N = 12,600). We conducted a 3-way analysis of variance for perceived and actual difficulty. Splitting noun phrases had a positive effect on perceived difficulty but a negative effect on actual difficulty. The presence of pseudowords increased perceived and actual difficulty. Without pseudowords, longer noun phrases led to increased perceived and actual difficulty. A follow-up study using the phrases (N = 1,350) showed that measuring awkwardness may indicate when to split noun phrases. We conclude that splitting noun phrases benefits perceived difficulty but hurts actual difficulty when the phrasing becomes less natural.
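
A 3-way analysis of variance of this kind is easy to set up with statsmodels; the sketch below uses synthetic stand-in data with hypothetical column names, since the original Mechanical Turk ratings are not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in for the rating data (hypothetical columns and values).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "difficulty": rng.normal(3.0, 1.0, 600),
    "np_len": rng.choice(["2gram", "3gram", "4gram"], 600),
    "split": rng.choice(["split", "unsplit"], 600),
    "pseudo": rng.choice(["yes", "no"], 600),
})

# 3-way ANOVA with all interactions, mirroring the 3 x 2 x 2 design.
model = ols("difficulty ~ C(np_len) * C(split) * C(pseudo)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```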

  5. A thermodynamic approach to model the caloric properties of semicrystalline polymers

    Science.gov (United States)

    Lion, Alexander; Johlitz, Michael

    2016-05-01

It is well known that the crystallisation and melting behaviour of semicrystalline polymers depends in a pronounced manner on the temperature history. If the polymer is in the liquid state above the melting point and the temperature is reduced to a level below the glass transition, the final degree of crystallinity, the amount of the rigid amorphous phase and the configurational state of the mobile amorphous phase strongly depend on the cooling rate. If the temperature is increased afterwards, the extents of cold crystallisation and melting are functions of the heating rate. Since crystalline and amorphous phases exhibit different densities, the specific volume also depends on the temperature history. In this article, a thermodynamically based phenomenological approach is developed which allows for the constitutive representation of these phenomena in the time domain. The degree of crystallinity and the configuration of the amorphous phase are represented by two internal state variables whose evolution equations are formulated under consideration of the second law of thermodynamics. The model for the specific Gibbs free energy takes the chemical potentials of the different phases and the mixture entropy into account. For simplification, it is assumed that the amount of the rigid amorphous phase is proportional to the degree of crystallinity. An essential outcome of the model is a closed-form equation for the equilibrium degree of crystallinity as a function of pressure and temperature. Numerical simulations demonstrate that the model represents the process dependence of crystallisation and melting, with the glass transition taken into account.

  6. Stormwater infiltration trenches: a conceptual modelling approach.

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2009-01-01

In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are being developed that introduce more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems, and they include practices such as infiltration and storage tanks intended to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, some gaps remain for infiltration facilities, mainly owing to the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model makes it possible to assess the performance of infiltration trenches. The main goal is to develop a model that can be employed for assessing the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures, considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better than the other approaches, considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation with six years of rain data, the performance and effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.
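
A conceptual infiltration-trench model of this kind can be reduced to a single storage element; the Python sketch below is a generic illustration with hypothetical parameters (it is not the authors' model), with clogging represented as a factor that scales down the infiltration capacity:

```python
def infiltration_trench(inflow, dt, area=20.0, k_sat=1e-5, v_max=50.0, clog=1.0):
    """Single-reservoir trench: storage [m3] fills with inflow [m3/s] and
    drains at a clogging-scaled infiltration capacity; excess above v_max
    spills. All parameter values are illustrative assumptions."""
    v, spilled, series = 0.0, 0.0, []
    for q_in in inflow:
        q_inf = clog * k_sat * area            # infiltration capacity [m3/s]
        v = max(0.0, v + (q_in - q_inf) * dt)  # mass balance over one step
        if v > v_max:                          # overflow to the drainage system
            spilled += v - v_max
            v = v_max
        series.append(v)
    return series, spilled
```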

  7. Challenges in structural approaches to cell modeling.

    Science.gov (United States)

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field.

  8. Building Water Models, A Different Approach

    CERN Document Server

    Izadi, Saeed; Onufriev, Alexey V

    2014-01-01

    Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...

  9. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on the dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
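
The classical forcing accumulation that the abstract traces back to Reaumur is simple to state in code; a minimal Python sketch, with the base temperature as an assumed parameter:

```python
def growing_degree_days(daily_mean_temps, t_base=5.0):
    """Degree-day forcing accumulation: sum of daily mean temperature
    excess above an assumed base temperature t_base (degrees C)."""
    return sum(max(0.0, t - t_base) for t in daily_mean_temps)

print(growing_degree_days([3.0, 6.5, 9.0, 4.0]))  # -> 5.5
```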

  10. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about...

  11. Modelling Coagulation Systems: A Stochastic Approach

    CERN Document Server

    Ryazanov, V V

    2011-01-01

A general stochastic approach to the description of coagulating aerosol systems is developed. As the object of description one can consider arbitrary mesoscopic values (the number of aerosol clusters, their size, etc.). The birth-and-death formalism for the number of clusters can be regarded as a special case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
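
A birth-and-death process for the cluster count can be simulated exactly with the Gillespie algorithm; the following Python sketch uses hypothetical rate functions and is only an illustration of the formalism mentioned above:

```python
import random

def birth_death(birth, death, n0, t_end):
    """Gillespie simulation of a birth-and-death process for a mesoscopic
    count n (e.g. the number of clusters); birth and death are
    state-dependent rate functions."""
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_end:
        total = birth(n) + death(n)
        if total == 0.0:                      # absorbing state reached
            break
        t += random.expovariate(total)        # waiting time to next event
        n += 1 if random.random() < birth(n) / total else -1
        path.append((t, n))
    return path

# Hypothetical rates: constant nucleation, removal proportional to n.
trace = birth_death(lambda n: 2.0, lambda n: 0.1 * n, n0=0, t_end=100.0)
```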

  12. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, some data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize exact modeling of the global system. Compared with other methods, the simulation results show good performance: the estimation is simple, effective and reliable.
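
The model-on-demand idea can be sketched compactly for the one-dimensional case; the following Python snippet (illustrative only, with k and the polynomial degree as assumed tuning choices) builds a fresh local polynomial model around each query point:

```python
import numpy as np

def local_poly_predict(X, y, x_query, k=50, degree=1):
    """Fit a polynomial by least squares to the k historical samples
    nearest the current working point and evaluate it there; a new
    local model is built per query (1-D inputs for simplicity)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    idx = np.argsort(np.abs(X - x_query))[:k]   # nearest historical points
    coeffs = np.polyfit(X[idx], y[idx], degree) # local least-squares fit
    return np.polyval(coeffs, x_query)
```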

  13. Towards a Multiscale Approach to Cybersecurity Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.

    2013-11-12

We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under active attack. We focus on the theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use-case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
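
For reference, the single-scale baseline of the all-pairs shortest-path problem is the classic Floyd-Warshall algorithm, sketched below in Python; the multiscale analog described in the abstract is not reproduced here:

```python
def floyd_warshall(w):
    """All-pairs shortest paths on a single-scale graph.
    w[i][j] is the edge weight, or float('inf') if no edge exists."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```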

  14. Post-16 Biology--Some Model Approaches?

    Science.gov (United States)

    Lock, Roger

    1997-01-01

    Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)

  15. Decomposition approach to model smart suspension struts

    Science.gov (United States)

    Song, Xubin

    2008-10-01

Model and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology for building models for the application of smart struts in vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data are dissected according to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, model optimization is easier than in a traditional approach, which seeks a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.

  16. Novel approach to analytical modelling of steady-state heat transfer from the exterior of TEFC induction motors

    Directory of Open Access Journals (Sweden)

    Klimenta Dardan O.

    2017-01-01

Full Text Available The purpose of this paper is to propose a novel approach to analytical modelling of steady-state heat transfer from the exterior of totally enclosed fan-cooled induction motors. The proposed approach is based on geometry simplification methods, the energy balance equation, modified correlations for forced convection, the Stefan-Boltzmann law, air-flow velocity profiles, and turbulence factor models. To apply the modified correlations for forced convection, the motor exterior is represented by surfaces of elementary 3-D shapes, and air-flow velocity profiles and turbulence factor models are introduced. The existing correlations for forced convection from a short horizontal cylinder and correlations for heat transfer from straight fins (as well as inter-fin surfaces) in axial air-flows are modified by introducing the Prandtl number raised to the appropriate power. The correlations for forced convection from straight fins and inter-fin surfaces are derived from the existing ones for combined heat transfer (due to forced convection and radiation) by using the forced-convection correlations for a single flat plate. Employing the proposed analytical approach, satisfactory agreement is obtained with experimental data from other studies.
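
The general shape of such forced-convection correlations is Nu = C·Re^m·Pr^n with h = Nu·k/L; the Python sketch below uses the textbook laminar flat-plate constants purely as an illustration, since the paper's modified correlations for fins and inter-fin surfaces use different constants and a turbulence factor:

```python
def forced_convection_h(re, pr, k_fluid, length, c=0.664, m=0.5, n=1/3):
    """Heat transfer coefficient from a correlation of the generic form
    Nu = C * Re^m * Pr^n, then h = Nu * k / L. Defaults are the laminar
    flat-plate values, used here only as an illustration."""
    nu = c * re ** m * pr ** n
    return nu * k_fluid / length

# Illustrative numbers: air (Pr ~ 0.71, k ~ 0.026 W/mK) over a 0.1 m surface.
h = forced_convection_h(re=2.0e4, pr=0.71, k_fluid=0.026, length=0.1)
print(h)
```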

  17. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation, because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of simplified model characteristics. This paper proposes a method that applies the Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system, and the Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.

  18. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...

  19. A Bayesian Shrinkage Approach for AMMI Models.

    Science.gov (United States)

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained on the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by cross-validation based on leave-one-out. The resulting model chosen by the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior...
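
The multiplicative part of a classical AMMI fit is just an SVD of the doubly centred genotype-by-environment matrix; the Python sketch below shows that baseline only, without the Bayesian shrinkage of the singular values that the study is about:

```python
import numpy as np

def ammi_terms(ge, n_terms=2):
    """Classical AMMI sketch: double-centre the genotype-by-environment
    mean matrix and keep the leading SVD terms as the multiplicative
    GEI components."""
    z = ge - ge.mean(1, keepdims=True) - ge.mean(0) + ge.mean()
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return u[:, :n_terms], s[:n_terms], vt[:n_terms]
```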

  20. A Bayesian Shrinkage Approach for AMMI Models.

    Directory of Open Access Journals (Sweden)

    Carlos Pereira da Silva

Full Text Available Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained on the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by cross-validation based on leave-one-out. The resulting model chosen by the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct...

  1. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz

    2015-08-01

Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  2. Scientific Theories, Models and the Semantic Approach

    Directory of Open Access Journals (Sweden)

    Décio Krause

    2007-12-01

Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.

  3. Topology simplification: Important biological phenomenon or evolutionary relic?. Comment on "Disentangling DNA molecules" by Alexander Vologodskii

    Science.gov (United States)

    Bates, Andrew D.; Maxwell, Anthony

    2016-09-01

The review, Disentangling DNA molecules [1], gives an excellent technical description of the phenomenon of topology simplification (TS) by type IIA DNA topoisomerases (topos). In the 20 years since its discovery [2], this effect has attracted a good deal of attention, probably because of its apparently magical nature, and because it seemed to offer a solution to the conundrum that all type II topos rely on ATP hydrolysis, but only bacterial DNA gyrases were known to transduce the free energy of hydrolysis into torsion (supercoiling) in the DNA. It made good sense to think that the other enzymes use the energy to reduce the levels of supercoiling, knotting, and particularly decatenation (unlinking), below equilibrium, since the key activity of the non-supercoiling topos is the removal of links between daughter chromosomes [3]. As Vologodskii discusses [1], a number of theoretical models have been developed to explain how the local action of a type II topo can influence the global level of knotting and catenation in large DNA molecules, and he explains how features of two of the most successful models (bent G segment and hooked juxtapositions) may be combined to explain the magnitude of the effect and overcome a kinetic problem with the hooked juxtaposition model.

  4. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample, with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within them, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...

  5. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.

  6. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly ... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro ... a generalization theoretical framework centered around measures of model generalization error. Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. Exemplification of the proposed generalization theoretical framework ...

  7. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...

  8. Systematic approach to MIS model creation

    Directory of Open Access Journals (Sweden)

    Macura Perica

    2004-01-01

Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we have formulated a model of a marketing information system. The basis for the research was the basic characteristics of the systematic approach and of the marketing system. The informational base for management of the marketing system, i.e. the marketing instruments, was presented in such a way that the most important information for decision making was listed per individual marketing mix instrument. In the projected model of the marketing information system, information listed in this way creates a base for establishing databases, i.e. bases of information (databases of product, price, distribution, promotion). This paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explication of the elements of its structure (environment, database operators, analysts of the information system, decision makers - managers), i.e. input, process, output, feedback, and the relations between these elements which are necessary for its optimal functioning. Besides that, basic elements for implementation of the model into a business system are given, as well as conditions for its efficient functioning and development.

  9. Regularization of turbulence - a comprehensive modeling approach

    Science.gov (United States)

    Geurts, B. J.

    2011-12-01

Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow, at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to physically consistently approximate momentum and scalar transport. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.

  10. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

In this research, different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and to improve some characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
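
As a simplified stand-in for the Bayesian merging described above, two height grids with known per-cell error variances can be fused by precision weighting; a minimal Python sketch (the variance inputs are an assumption, and the entropy-based prior is not reproduced):

```python
import numpy as np

def fuse_dsms(z1, v1, z2, v2):
    """Precision-weighted (Gaussian) fusion of two DSM height grids:
    z1, z2 are heights, v1, v2 the assumed per-cell error variances."""
    w1 = 1.0 / np.asarray(v1, float)
    w2 = 1.0 / np.asarray(v2, float)
    return (w1 * z1 + w2 * z2) / (w1 + w2)
```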

  11. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

Full Text Available In this research, different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and to improve some characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  12. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
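    A minimal sketch of the optimization idea in Python, assuming Gaussian member densities and one shared variance; the weights are kept nonnegative via bounds during L-BFGS-B optimization and renormalized afterwards, which only approximates the paper's modified likelihood:

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(theta, y, F):
            # theta packs K mixture weights followed by a common log-variance;
            # y: (n,) observations, F: (n, K) ensemble member forecasts.
            K = F.shape[1]
            w, var = theta[:K], np.exp(theta[K])
            dens = np.exp(-0.5 * (y[:, None] - F) ** 2 / var) / np.sqrt(2 * np.pi * var)
            return -np.sum(np.log(dens @ w + 1e-300))

        rng = np.random.default_rng(0)
        y = rng.normal(size=200)                                  # synthetic truth
        F = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))

        K = F.shape[1]
        theta0 = np.concatenate([np.full(K, 1.0 / K), [0.0]])
        res = minimize(neg_log_lik, theta0, args=(y, F), method="L-BFGS-B",
                       bounds=[(1e-8, None)] * K + [(None, None)])
        weights = res.x[:K] / res.x[:K].sum()                     # renormalize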

  13. AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING

    OpenAIRE

    MARCELO KRIEGER

    1983-01-01

    Despite wide recognition of the good forecasting ability of ARIMA models in predicting univariate time series, this approach is not widely used because of the lack of automatic, computerized procedures. In this paper this problem is discussed and an algorithm is proposed.

  14. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach is presented between analysis and synthesis, so that students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  15. Physics-Based Correction of Inhomogeneities in Temperature Series: Model Transferability Testing and Comparison to Statistical Approaches

    Science.gov (United States)

    Auchmann, Renate; Brönnimann, Stefan; Croci-Maspoli, Mischa

    2016-04-01

    For the correction of inhomogeneities in sub-daily temperature series, Auchmann and Brönnimann (2012) developed a physics-based model for one specific type of break, i.e. the transition from a Wild screen to a Stevenson screen at one specific station in Basel, Switzerland. The model is based solely on physical considerations; no relationships of the covariates to the differences between the parallel measurements have been investigated. The physics-based model requires detailed information on the screen geometry and the location, and includes a variety of covariates. It is mainly based on correcting the radiation error, including a modification by ambient wind. In this study we test the application of the model to another station, Zurich, which experienced the same type of transition. Furthermore, we compare the performance of the physics-based correction to purely statistical correction approaches (constant correction, and correcting for the annual cycle using a spline). In Zurich the Wild screen was replaced by the Stevenson screen in 1954; from 1954 to 1960 parallel temperature measurements were taken in both screens, which are used to assess the performance of the applied corrections. For Zurich the required model input is available (i.e. three times daily observations of wind, cloud cover, pressure and humidity measurements, and local times of sunset and sunrise). However, a large number of stations do not measure the additional input data required by the model, which hampers its transferability and applicability to other stations. Hence, we test possible simplifications and generalizations of the model to make it more easily applicable to stations with the same type of inhomogeneity. In a last step we test whether other types of transitions (e.g., from a Stevenson screen to an automated weather system) can be corrected using the principle of a physics-based approach.

  16. Modeling for fairness: A Rawlsian approach.

    Science.gov (United States)

    Diekmann, Sven; Zwart, Sjoerd D

    2014-06-01

    In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement on decisions, as in most other stakeholder approaches; it is also an agreement on their justification, and this justification is consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of, and values related to, a model.

  17. Optimization and simplification of the Allergic and Hypersensitivity conditions classification for the ICD-11.

    Science.gov (United States)

    Tanno, L K; Calderon, M A; Demoly, P

    2016-05-01

    Since 2013, an international collaboration of Allergy Academies, including first the World Allergy Organization (WAO), the American Academy of Allergy Asthma and Immunology (AAAAI), and the European Academy of Allergy and Clinical Immunology (EAACI), and then the American College of Allergy, Asthma and Immunology (ACAAI), the Latin American Society of Allergy, Asthma and Immunology (SLAAI), and the Asia Pacific Association of Allergy, Asthma and Clinical Immunology (APAAACI), has devoted tremendous effort to achieving a better and updated classification of allergic and hypersensitivity conditions in the forthcoming International Classification of Diseases (ICD)-11 version, by providing evidence and promoting action on the need for changes. The latest action was the implementation of a classification proposal for hypersensitivity/allergic diseases built by crowdsourcing the Allergy Academy leaderships. Following bilateral discussions with the representatives of the ICD-11 revision, a face-to-face meeting was held at the United Nations Office in Geneva and a simplification process of the hypersensitivity/allergic disorders classification was carried out to better fit the ICD structure. We present here the end result of what we consider to be a model of good collaboration between the World Health Organization and a specialty. We strongly believe that the outcomes of all past and future actions will positively impact the recognition of the allergy specialty as well as the quality improvement of healthcare systems for allergic and hypersensitivity conditions worldwide. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. THE STUDY OF SIMPLIFICATION AND EXPLICITATION TECHNIQUES IN KHALED HOSSEINI'S “A THOUSAND SPLENDID SUNS”

    Directory of Open Access Journals (Sweden)

    Reza Kafipour

    2016-12-01

    Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation strategies are those which help transfer the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in the Persian translation of Khaled Hosseini's novel “A Thousand Splendid Suns”. The study also attempted to find out the frequencies of the simplification and explicitation techniques used by the translators in translating the novel. To do so, 359 sentences out of the 6000 sentences in the original text were selected by a systematic random sampling procedure. Then the percentage and total sum of each of the strategies were calculated. The results showed that both translators used simplification and explicitation techniques significantly in their translation, whereas Saadvandian, the first translator, applied significantly more simplification techniques than Ghabrai, the second translator. However, no significant difference was found between the translators in the application of explicitation techniques. The study implies that these two translation strategies were fully familiar to the translators, as both used them significantly to make the translation more understandable to the readers.

  19. Effect of transport-pathway simplifications on projected releases of radionuclides from a nuclear waste repository (Sweden)

    Science.gov (United States)

    Selroos, Jan-Olof; Painter, Scott L.

    2012-12-01

    The Swedish Nuclear Fuel and Waste Management Company has recently submitted an application for a license to construct a final repository for spent nuclear fuel, at approximately 500 m depth in crystalline bedrock. Migration pathways through the geosphere barrier are geometrically complex, with segments in fractured rock, deformation zones, backfilled tunnels, and near-surface soils. Several simplifications of these complex migration pathways were used in the assessments of repository performance that supported the license application. Specifically, in the geosphere transport calculations, radionuclide transport in soils and tunnels was neglected, and deformation zones were assumed to have transport characteristics of fractured rock. The effects of these simplifications on the projected performance of the geosphere barrier system are addressed. Geosphere performance is shown to be sensitive to how transport characteristics of deformation zones are conceptualized and incorporated into the model. Incorporation of advective groundwater travel time within backfilled tunnels reduces radiological dose from non-sorbing radionuclides such as I-129, while sorption in near-surface soils reduces radiological doses from sorbing radionuclides such as Ra-226. These results help quantify the degree to which geosphere performance was pessimistically assessed, and provide some guidance on how future studies to reduce uncertainty in geosphere performance may be focused.

  20. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and an algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretic research, analyses of educational practices and for realistic predicting of pedagogical phenomena.

  1. Nuclear level density: Shell-model approach

    Science.gov (United States)

    Sen'kov, Roman; Zelevinsky, Vladimir

    2016-06-01

    Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.

  2. Modeling Social Annotation: a Bayesian Approach

    CERN Document Server

    Plangprasopchok, Anon

    2008-01-01

    Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make the best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In a previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...

  3. Simplification of 3D Graphics for Mobile Devices: Exploring the Trade-off Between Energy Savings and User Perceptions of Visual Quality

    Science.gov (United States)

    Vatjus-Anttila, Jarkko; Koskela, Timo; Lappalainen, Tuomas; Häkkilä, Jonna

    2017-03-01

    3D graphics have quickly become a popular form of media that can also be accessed with today's mobile devices. However, the use of 3D applications with mobile devices is typically a very energy-consuming task due to the processing complexity and the large file size of 3D graphics. As a result, their use may lead to rapid depletion of the limited battery life. In this paper, we investigate how much energy savings can be gained in the transmission and rendering of 3D graphics by simplifying geometry data. In this connection, we also examine users' perceptions on the visual quality of the simplified 3D models. The results of this paper provide new knowledge on the energy savings that can be gained through geometry simplification, as well as on how much the geometry can be simplified before the visual quality of 3D models becomes unacceptable for the mobile users. Based on the results, it can be concluded that geometry simplification can provide significant energy savings for mobile devices without disturbing the users. When geometry simplification is combined with distance based adjustment of detail, up to 52% energy savings were gained in our experiments compared to using only a single high quality 3D model.

  4. Social Modification With The Changing Technology in The Case of Simplification Theory

    Directory of Open Access Journals (Sweden)

    Rana Nur Ülker

    2014-10-01

    McLuhan's prediction that "the world would become a global village" has come true. The global village, whose main tools are mass media technologies, especially the internet, has made societies global. With the rise of global communication, every new invention becomes known easily and technology can be observed everywhere. As Marcuse argued, global communication not only makes people the same but also makes them simple: tools are becoming simple enough to be understood by everybody, from every culture and indeed every age group. In this study we debate why and how "simplification theory" can make people and technology simple. Simplification theory can be defined as follows: when media technologies change, people's values and outlook on the world are modified, because societies' values and views of the world are translated into technological devices. To sum up, all the instruments surrounding human beings have changed. Key words: Simplification, Social Modification, Mass Media, Technology, Globalization

  5. Multicomponent Equilibrium Models for Testing Geothermometry Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Carl D. Palmer; Robert W. Smith; Travis L. McLing

    2013-02-01

    Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations using a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several "forward" geochemical models using The Geochemist's Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
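    The multicomponent idea can be sketched in a few lines: compute the saturation index SI(T) = log Q - log K(T) for each candidate mineral and pick the temperature at which the indices cluster around zero. The log K curves and log Q values below are placeholders, not real thermodynamic data:

        import numpy as np

        T_grid = np.linspace(25.0, 300.0, 551)                # deg C
        logK = {"min_a": lambda T: -2.0 + 0.010 * T,          # toy curves
                "min_b": lambda T:  1.5 - 0.004 * T,
                "min_c": lambda T: -0.5 + 0.006 * T}
        logQ = {"min_a": -1.0, "min_b": 1.1, "min_c": 0.1}    # measured ion-activity products

        # SI(T) = log Q - log K(T); at the reservoir temperature all minerals
        # in equilibrium should cluster around SI = 0.
        si = np.array([[logQ[m] - logK[m](T) for T in T_grid] for m in logK])
        t_est = T_grid[np.argmin(np.abs(si).sum(axis=0))]     # 100 deg C for this toy data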

  6. A semiparametric approach to physiological flow models.

    Science.gov (United States)

    Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R

    1989-08-01

    By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (C_A, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (C_T, the total amount of drug in tissue T divided by the volume of T) of such a tissue is also observed. From linear systems theory, C_T can be expressed as the convolution of C_A with a disposition function F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫₀^∞ F(t) dt is the steady-state ratio of C_T to C_A, and the point F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to the C_A data, yielding predicted values Ĉ_A, and, second, the convolution integral of Ĉ_A with F(t) be fitted to the C_T data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
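    A runnable sketch of the suggested two-step procedure, assuming for illustration a monoexponential disposition function F(t) = F0*exp(-k*t); the method as described estimates F(t) by deconvolution rather than assuming a form, so this is only a parametric stand-in:

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 10.0, 101)
        dt = t[1] - t[0]
        CA_hat = 5.0 * np.exp(-0.8 * t)          # step 1: fitted arterial curve

        def predict_CT(t, F0, k):
            # step 2: C_T(t) as the convolution of the fitted C_A with F(t)
            F = F0 * np.exp(-k * t)
            return np.convolve(CA_hat, F)[: t.size] * dt

        rng = np.random.default_rng(1)
        CT_obs = predict_CT(t, 0.6, 0.3) + rng.normal(0, 0.01, t.size)
        (F0, k), _ = curve_fit(predict_CT, t, CT_obs, p0=[0.3, 0.1])
        # F0 approximates F(0) (clearance into T over tissue volume);
        # F0 / k approximates the integral of F, the steady-state C_T/C_A ratio.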

  7. Evaluating face trustworthiness: a model based approach.

    Science.gov (United States)

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.

  8. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Being aware of the need to build an intercultural society, awareness must be assumed in all social spheres, among which education plays a transcendental role. Education must promote spaces that form people with the virtues and capacities to live together in multicultural and socially diverse (and sometimes unequal) contexts in an increasingly globalized and interconnected world, and it must foster the development of feelings of shared civic belonging to neighborhood, city, region and country, enabling concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while at the same time instilling the will to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  9. Methodology in Bi- and Multilingual Studies: From Simplification to Complexity

    Science.gov (United States)

    Aronin, Larissa; Jessner, Ulrike

    2014-01-01

    Research methodology is determined by theoretical approaches. This article discusses methods of multilingualism research in connection with theoretical developments in linguistics, psycholinguistics, sociolinguistics, and education. Taking a brief glance at the past, the article starts with a discussion of an issue underlying the choice of…

  11. Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.

    Science.gov (United States)

    Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J

    2016-08-01

    To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.

  12. A Bayesian modeling approach for generalized semiparametric structural equation models.

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing

    2013-10-01

    In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.

  13. A Simplification of a Real-Time Verification Problem

    CERN Document Server

    Saha, Indranil; Roy, Suman; 10.1007/978-3-540-75596-8_21

    2010-01-01

    We revisit the problem of real-time verification with dense dynamics using timeout and calendar based models, and simplify this to a finite state verification problem. To overcome the complexity of verification of real-time systems with dense dynamics, Dutertre and Sorea proposed timeout and calendar based transition systems to model the behavior of real-time systems, and verified safety properties using k-induction in association with bounded model checking. In this work, we introduce a specification formalism for these models in terms of Timeout Transition Diagrams and capture their behavior in terms of the semantics of Timed Transition Systems. Further, we discuss a technique that reduces the problem of verification of qualitative temporal properties on the infinite state space of (a large fragment of) these timeout and calendar based transition systems into one on clockless finite state models, through a two-step process comprising digitization and canonical finitary reduction. This technique enables us to ve...

  14. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  15. New helical-shape magnetic pole design for Magnetic Lead Screw enabling structure simplification

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Xia, Yongming; Wu, Weimin

    2015-01-01

    Magnetic lead screw (MLS) is a new type of high performance linear actuator that is attractive for many potential applications. The main difficulty of the MLS technology lies in the manufacturing of its complicated helical-shape magnetic poles. Structure simplification is, therefore, quite essent...

  16. Perceptual Recovery from Consonant-Cluster Simplification in Korean Using Language-Specific Phonological Knowledge

    NARCIS (Netherlands)

    Cho, T.; McQueen, J.M.

    2011-01-01

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for

  17. Application of Stochastic Approaches to Modelling Suspension Flow in Porous Media

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Yuan, Hao

    2012-01-01

    briefly discussed. The population balance models growing out of the Boltzmann-Smoluchowski formalism take into account the particle and pore size distributions. A system of integro-differential kinetic equations for the particle transport is derived and averaged. The continuous-time random walk (CTRW) theory considers the distribution of the residence times of particles in pores. The transport equation derived in the framework of CTRW contains a convolution integral with a memory kernel accounting for the particle flight distribution. An important simplification of the CTRW formalism, its reduction
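    The CTRW transport equation alluded to here is commonly written as a generalized advection-dispersion equation whose right-hand side carries a memory kernel; a sketch of that standard one-dimensional form (the symbols M, v and D are generic, not taken from this record):

        \frac{\partial c(x,t)}{\partial t}
          = \int_0^t M(t-t') \left[ -v\,\frac{\partial c}{\partial x}
            + D\,\frac{\partial^2 c}{\partial x^2} \right](x,t')\,dt'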

  18. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. Model predictions were in good agreement with the experimental data. 8 refs., 17 figs.

  19. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...

  20. PET imaging for receptor occupancy: meditations on calculation and simplification.

    Science.gov (United States)

    Zhang, Yumin; Fox, Gerard B

    2012-03-01

    This invited mini-review briefly summarizes procedures and challenges of measuring receptor occupancy with positron emission tomography. Instead of describing the detailed analytic procedures of in vivo ligand-receptor imaging, the authors provide a pragmatic approach, along with personal perspectives, for conducting positron emission tomography imaging for receptor occupancy, and systematically elucidate the mathematics of receptor occupancy calculations in practical ways that can be understood with elementary algebra. The authors also share insights regarding positron emission tomography imaging for receptor occupancy to facilitate applications for the development of drugs targeting receptors in the central nervous system.
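    In the simplest tracer-kinetic case the elementary algebra reduces to comparing non-displaceable binding potentials (BP_ND) before and after dosing; a tiny Python sketch with illustrative numbers:

        # Receptor occupancy from binding potentials at baseline and post-dose.
        # The values are illustrative, not taken from the article.
        bp_baseline, bp_post = 1.8, 0.45
        occupancy = 1.0 - bp_post / bp_baseline
        print(f"receptor occupancy = {occupancy:.0%}")   # prints 75%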

  1. Centella asiatica attenuates Aβ-induced neurodegenerative spine loss and dendritic simplification.

    Science.gov (United States)

    Gray, Nora E; Zweig, Jonathan A; Murchison, Charles; Caruso, Maya; Matthews, Donald G; Kawamoto, Colleen; Harris, Christopher J; Quinn, Joseph F; Soumyanath, Amala

    2017-04-12

    The medicinal plant Centella asiatica has long been used to improve memory and cognitive function. We have previously shown that a water extract from the plant (CAW) is neuroprotective against the deleterious cognitive effects of amyloid-β (Aβ) exposure in a mouse model of Alzheimer's disease, and improves learning and memory in healthy aged mice as well. This study explores the physiological underpinnings of those effects by examining how CAW, as well as chemical compounds found within the extract, modulate synaptic health in Aβ-exposed neurons. Hippocampal neurons from amyloid precursor protein over-expressing Tg2576 mice and their wild-type (WT) littermates were used to investigate the effect of CAW and various compounds found within the extract on Aβ-induced dendritic simplification and synaptic loss. CAW enhanced arborization and spine densities in WT neurons and prevented the diminished outgrowth of dendrites and loss of spines caused by Aβ exposure in Tg2576 neurons. Triterpene compounds present in CAW were found to similarly improve arborization although they did not affect spine density. In contrast caffeoylquinic acid (CQA) compounds from CAW were able to modulate both of these endpoints, although there was specificity as to which CQAs mediated which effect. These data suggest that CAW, and several of the compounds found therein, can improve dendritic arborization and synaptic differentiation in the context of Aβ exposure which may underlie the cognitive improvement observed in response to the extract in vivo. Additionally, since CAW, and its constituent compounds, also improved these endpoints in WT neurons, these results may point to a broader therapeutic utility of the extract beyond Alzheimer's disease.

  2. Karst Aquifer Recharge: A Case History of over Simplification from the Uley South Basin, South Australia

    Directory of Open Access Journals (Sweden)

    Nara Somaratne

    2015-02-01

    The article “Karst aquifer recharge: Comments on ‘Characteristics of Point Recharge in Karst Aquifers’, by Adrian D. Werner, 2014, Water 6, doi:10.3390/w6123727” misrepresents some parts of Somaratne [1]. The description of the Uley South Quaternary Limestone (QL) as unconsolidated or poorly consolidated aeolianite sediments with the presence of well-mixed groundwater in Uley South [2] appears unsubstantiated. Examination of 98 lithological descriptions with corresponding drillers' logs shows only two wells containing bands of unconsolidated sediments. In the Uley South basin, about 70% of the salinity profiles obtained by electrical conductivity (EC) logging from monitoring wells show stratification. The central and north-central areas of the basin receive leakage from the Tertiary Sand (TS) aquifer, thereby influencing QL groundwater characteristics such as chemistry, age and isotope composition. The presence of conduit pathways is evident in salinity profiles taken away from TS-water-affected areas. Aquifer parameters derived from pumping tests show strong heterogeneity, a typical characteristic of karst aquifers. Uley South QL aquifer recharge is derived from three sources: diffuse recharge, point recharge from sinkholes, and continuous leakage of TS water. This limits the application of recharge estimation methods such as the conventional chloride mass balance (CMB), as the basic premise of the CMB is violated. The conventional CMB is not suitable for accounting for the chloride mass balance in groundwater systems displaying an extreme range of chloride concentrations and complex mixing [3]. Oversimplification of karst aquifer systems to suit the application of the conventional CMB or 1-D unsaturated modelling, as described in Werner [2], is not a suitable use of these recharge estimation methods.
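    For reference, the conventional CMB estimate whose premise the author argues is violated here is a one-line balance; a sketch with illustrative numbers:

        # Conventional chloride mass balance: recharge R = P * Cl_P / Cl_GW,
        # valid only when all chloride arrives via precipitation and the system
        # is well mixed (the premise violated by point recharge and TS leakage).
        P, cl_precip, cl_gw = 550.0, 8.0, 160.0   # mm/yr, mg/L, mg/L (illustrative)
        recharge = P * cl_precip / cl_gw          # 27.5 mm/yr for these values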

  3. Treatment simplification in HIV-infected adults as a strategy to prevent toxicity, improve adherence, quality of life and decrease healthcare costs

    Directory of Open Access Journals (Sweden)

    Vitória M

    2011-07-01

    Jean B Nachega, Michael J Mugavero, Michele Zeier, Marco Vitória, Joel E Gallant (Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; Stellenbosch University, Cape Town, South Africa; University of Alabama at Birmingham, Birmingham, AL, USA; World Health Organization, Geneva, Switzerland; Johns Hopkins University School of Medicine, Baltimore, MD, USA). Abstract: Since the advent of highly active antiretroviral therapy (HAART), the treatment of human immunodeficiency virus (HIV) infection has become more potent and better tolerated. While the current treatment regimens still have limitations, they are more effective, more convenient, and less toxic than regimens used in the early HAART era, and new agents, formulations and strategies continue to be developed. Simplification of therapy is an option for many patients currently being treated with antiretroviral therapy (ART). The main goals are to reduce pill burden, improve quality of life and enhance medication adherence, while minimizing short- and long-term toxicities, reducing the risk of virologic failure and maximizing cost-effectiveness. ART simplification strategies that are currently used or are under study include the use of once-daily regimens, less toxic drugs, fixed-dose coformulations and induction-maintenance approaches. Improved adherence and persistence have been observed with the adoption of some of these strategies. The role of regimen simplification has implications not only for individual patients, but also for health care policy. With increased interest in ART regimen simplification, it is critical to

  4. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes a building façade piece extraction and simplification algorithm based on morphological filtering with point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm using high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering with point cloud projection that can obtain building façade features in image space. Third, this study designs an inverse transformation of the point cloud projection to convert building façade features from image space to 3D space. A building façade feature with a restricted façade plane detection algorithm is implemented to reconstruct façade pieces for street view services. The results of building façade extraction experiments with large volumes of point cloud data from MLS show that the proposed approach is suitable for various types of building façade extraction. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is the same level as the spatial resolution (0.5 m) of the point cloud.
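    The projection step can be illustrated generically: bin the points onto a horizontal grid and keep one statistic per cell. The real algorithm uses the high-accuracy POS orientation parameters, which this hypothetical rasterize_max_z sketch omits:

        import numpy as np

        def rasterize_max_z(points, cell=0.5):
            # Project an (N, 3) point cloud to a raster, keeping max z per cell.
            xy = points[:, :2]
            ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
            img = np.full(ij.max(axis=0) + 1, np.nan)
            for (i, j), z in zip(ij, points[:, 2]):
                if np.isnan(img[i, j]) or z > img[i, j]:
                    img[i, j] = z
            return img

        pts = np.random.default_rng(2).uniform(0, 10, size=(1000, 3))
        raster = rasterize_max_z(pts, cell=0.5)   # 20x20 grid for this extent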

  5. Connectivity of channelized reservoirs: a modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)

    2006-07-01

    Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as

  6. Comparing Simplification Strategies for the Skeletal Muscle Proteome

    Directory of Open Access Journals (Sweden)

    Bethany Geary

    2016-03-01

    Skeletal muscle is a complex tissue that is dominated by the presence of a few abundant proteins. This wide dynamic range can mask the presence of lower abundance proteins, which can be a confounding factor in large-scale proteomic experiments. In this study, we have investigated a number of pre-fractionation methods, at both the protein and peptide level, for the characterization of the skeletal muscle proteome. The analyses revealed that the use of OFFGEL isoelectric focusing yielded the largest number of protein identifications (>750) compared to alternative gel-based and protein equalization strategies. Further, OFFGEL led to a substantial enrichment of a different sub-population of the proteome. Filter-aided sample preparation (FASP), coupled to peptide-level OFFGEL, provided more confidence in the results due to a substantial increase in the number of peptides assigned to each protein. The findings presented here support the use of a multiplexed approach to proteome characterization of skeletal muscle, which has a recognized imbalance in the dynamic range of its protein complement.

  7. Fierz relations for Volkov spinors and the simplification of Furry picture traces

    CERN Document Server

    Hartin, A

    2016-01-01

    Transition probability calculations of strong field particle processes in the Furry picture, typically use fermion Volkov solutions. These solutions have a relatively complicated spinor due to the interaction of the electron spin with a strong external field, which in turn leads to unwieldy trace calculations. The simplification of these calculations would aid theoretical studies of strong field phenomena such as the predicted resonance behaviour of higher order Furry picture processes. Here, Fierz transformations of Volkov spinors are developed and applied to a 1st order and a 2nd order Furry picture process. Combined with symmetry properties, the techniques presented here are generally applicable and lead to considerable simplification of Furry picture analytic calculations.

  8. Simplification of vacuole structure during plant cell death triggered by culture filtrates of Erwinia carotovora

    Institute of Scientific and Technical Information of China (English)

    Yumi Hirakawa; Toshihisa Nomura; Seiichiro Hasezawa; Takumi Higaki

    2015-01-01

    Vacuoles are suggested to play crucial roles in plant defense-related cell death. During programmed cell death, previous live cell imaging studies have observed vacuoles to become simpler in structure and have implicated this simplification as a prelude to the vacuole's rupture and consequent lysis of the plasma membrane. Here, we examined dynamics of the vacuole in cell cycle-synchronized tobacco BY-2 (Nicotiana tabacum L. cv. Bright Yellow 2) cells during cell death induced by application of culture filtrates of Erwinia carotovora. The filtrate induced death in about 90% of the cells by 24 h. Prior to cell death, vacuole shape simplified and endoplasmic actin filaments disassembled; however, the vacuoles did not rupture until after plasma membrane integrity was lost. Instead of facilitating rupture, the simplification of vacuole structure might play a role in the retrieval of membrane components needed for defense-related cell death.

  9. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  10. ALREST High Fidelity Modeling Program Approach

    Science.gov (United States)

    2011-05-18

    Slide fragments (only partially recoverable): gases and mixtures of Redlich-Kwong and Peng-Robinson fluids; an assumed-PDF model based on the k-ε-g model in the NASA/LaRC Vulcan code; a level-set model; the potential attractiveness of liquid hydrocarbon engines for boost applications; the propensity of hydrocarbon engines for combustion instability.

  11. Automation and work-simplification in the urinalysis laboratory. A pilgrim's progress.

    Science.gov (United States)

    Patterson, P P

    1988-09-01

    The evolution of the modern clinical laboratory has produced a gap between medical/scientific competence on the one hand and management skills on the other. Physician-managers need both sets of competencies. Concepts of operations management and cost accounting shape criteria for strategic decisions in technology improvement programs. Automation and work-simplification are key strategies for improving cost performance in clinical laboratories.

  12. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    National Research Council Canada - National Science Library

    Eser ÖRDEM

    2013-01-01

    Abstract: This study intends to propose the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...

  13. A model-based multisensor data fusion knowledge management approach

    Science.gov (United States)

    Straub, Jeremy

    2014-06-01

    A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
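    A toy sketch of the evaluation idea, assuming each sensor reading has already been mapped to a signed, weighted vote for or against elements of the thesis model; all names and the acceptance threshold are hypothetical:

        # (source, sign: +1 supports / -1 refutes the thesis, weight in [0, 1])
        readings = [("thermal", +1, 0.8), ("visual", +1, 0.6), ("radar", -1, 0.4)]

        score = sum(sign * w for _, sign, w in readings)
        total = sum(w for _, _, w in readings)
        thesis_supported = score / total > 0.5   # hypothetical threshold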

  14. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f

  15. Modelling the World Wool Market: A Hybrid Approach

    OpenAIRE

    2007-01-01

    We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...

  16. An algebraic approach to the Hubbard model

    CERN Document Server

    de Leeuw, Marius

    2015-01-01

    We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.

  17. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    MUHAMMAD ZAKA EMAD

    2017-09-01

    Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.

  18. Regularization of turbulence - a comprehensive modeling approach

    NARCIS (Netherlands)

    Geurts, Bernard J.

    2011-01-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl

  19. Measuring equilibrium models: a multivariate approach

    Directory of Open Access Journals (Sweden)

    Nadji RAHMANIA

    2011-04-01

    This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
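    For the univariate building block, a short sketch using the statsmodels implementation; the multivariate extension discussed here estimates the smoothing parameter consistently from the data rather than fixing it at the conventional value used below:

        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        rng = np.random.default_rng(3)
        y = np.cumsum(rng.normal(0.1, 1.0, 200))   # synthetic macro series
        cycle, trend = hpfilter(y, lamb=1600)      # lamb is the crucial choice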

  20. A graphical approach to analogue behavioural modelling

    OpenAIRE

    Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto

    2007-01-01

    In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...

  1. A geometrical approach to structural change modeling

    OpenAIRE

    Stijepic, Denis

    2013-01-01

    We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...

  2. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).

  3. A New Approach for Magneto-Static Hysteresis Behavioral Modeling

    DEFF Research Database (Denmark)

    Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio

    2016-01-01

    In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed ... achieved when comparing the measured and simulated results.

  4. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform the management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical...
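
    The multinomial treatment of strain pathogenicity is easy to illustrate. The Python sketch below uses invented strain probabilities, pathogenicity flags and contamination levels rather than the paper's estimates; it only shows the mechanics of a multinomial draw inside a Monte Carlo exposure loop.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical strain profile: probability that a contaminating cell belongs
    # to each genotype, and whether that genotype produces enterotoxin A.
    strain_probs = np.array([0.50, 0.30, 0.15, 0.05])    # invented values
    produces_sea = np.array([True, False, True, False])  # invented flags

    def toxigenic_cells(n_cells):
        """One multinomial draw of the strain composition of a serving."""
        counts = rng.multinomial(n_cells, strain_probs)
        return counts[produces_sea].sum()

    # Monte Carlo over servings with variable contamination levels
    servings = rng.poisson(lam=5000, size=10_000)
    exposure = np.array([toxigenic_cells(n) for n in servings])
    print("mean toxigenic cells per serving:", exposure.mean())
    ```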

  5. Nucleon Spin Content in a Relativistic Quark Potential Model Approach

    Institute of Scientific and Technical Information of China (English)

    DONG YuBing; FENG QingGuo

    2002-01-01

    Based on a relativistic quark model approach with an effective potential U(r) = (ac/2)(1 + γ⁰)r², the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to deal with the center-of-mass correction in the relativistic quark potential model approach are discussed.

  6. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  7. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    ...activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  8. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the...

  9. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  10. Random matrix model approach to chiral symmetry

    CERN Document Server

    Verbaarschot, J J M

    1996-01-01

    We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/\pi^2 n$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
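
    Restating the quoted suppression as a formula: for uncorrelated (Poisson) levels the number variance is $\Sigma^2(n) = n$, whereas the factor cited in the abstract implies only logarithmic growth for the RMT spectrum:

    ```latex
    \Sigma^2_{\mathrm{Poisson}}(n) = n,
    \qquad
    \frac{\Sigma^2_{\mathrm{RMT}}(n)}{n} \sim \frac{\log n}{\pi^2 n}
    \;\Longrightarrow\;
    \Sigma^2_{\mathrm{RMT}}(n) \sim \frac{\log n}{\pi^2}.
    ```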

  11. Machine Learning Approaches for Modeling Spammer Behavior

    CERN Document Server

    Islam, Md Saiful; Islam, Md Rafiqul

    2010-01-01

    Spam is commonly known as unsolicited or unwanted email messages on the Internet, posing a potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever-increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually become less successful at modeling spammer behavior, as spammers constantly change their tricks to circumvent these filters. The evasive tactics that spammers use are patterns, and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns with well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, a considerable performance improvement compared to similar spammer behavior modeling research.
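
    As a stand-in for the classification step, the sketch below trains a Naïve Bayes classifier with scikit-learn on an invented toy corpus. It uses plain bag-of-words features, whereas the paper models behavioral patterns, so this shows only the skeleton of such a pipeline.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy corpus; the paper extracts behavioral features instead of raw words.
    emails = ["win money now", "meeting at noon", "cheap pills online",
              "project update attached", "claim your free prize", "lunch tomorrow?"]
    labels = [1, 0, 1, 0, 1, 0]  # 1 = spam

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(emails, labels)
    # Likely [1 0] on this toy data: spam-like words vs. ham-like words.
    print(model.predict(["free money prize", "meeting tomorrow"]))
    ```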

  12. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.

  13. Second Quantization Approach to Stochastic Epidemic Models

    CERN Document Server

    Mondaini, Leonardo

    2015-01-01

    We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.

  14. "Dispersion modeling approaches for near road | Science ...

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA's R-LINE (a research dispersion modelling tool under development by the US EPA's Office of Research and Development) and CERC's ADMS model (ADMS-Urban). In R-LINE the physical features reveal...

  15. Flipped models in Trinification: A Comprehensive Approach

    CERN Document Server

    Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo

    2016-01-01

    By considering the 3-3-1 and the left-right symmetric models as low-energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered; the left-right symmetric model and its two flipped variants are also studied. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, of the order of $10^{-3}$ radians, and $M_{Z'} > 2.5$ TeV.

  16. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  17. Approaching models of nursing from a postmodernist perspective.

    Science.gov (United States)

    Lister, P

    1991-02-01

    This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.

  18. Manufacturing Excellence Approach to Business Performance Model

    Directory of Open Access Journals (Sweden)

    Jesus Cruz Alvarez

    2015-03-01

    Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and to propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.

  19. A Bayesian Model Committee Approach to Forecasting Global Solar Radiation

    CERN Document Server

    Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril

    2012-01-01

    This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, an Autoregressive Moving Average (ARMA) model and a Neural Network (NN) model, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee; each model's predictions are then weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one-hour-ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. First results show an improvement brought by this approach.
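
    Stripped to its core, the committee is a posterior-weighted average of the member forecasts. A minimal numpy sketch, with invented forecast values and invented log model evidences standing in for the Bayesian weights:

    ```python
    import numpy as np

    def committee_forecast(preds, log_evidence):
        """Posterior-weighted combination: each member's h+1 forecast is
        weighted by its normalized (log-)model evidence."""
        w = np.exp(log_evidence - np.max(log_evidence))  # stabilized softmax
        w = w / w.sum()
        return float(w @ preds), w

    # Invented ARMA and NN forecasts of GHI (W/m^2) and invented log evidences
    forecast, weights = committee_forecast(np.array([412.0, 398.0]),
                                           np.array([-1205.3, -1198.7]))
    print(forecast, weights)  # the committee leans toward the higher-evidence model
    ```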

  20. MDA based-approach for UML Models Complete Comparison

    CERN Document Server

    Chaouni, Samia Benabdellah; Mouline, Salma

    2011-01-01

    If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step, and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use domain ontologies as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) correspondences extracted in the comparison phase. For the implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models into the correspondence model.

  1. A consortium approach to glass furnace modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Golchert, B.; Petrick, M.

    1999-04-20

    Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.

  2. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
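
    One common way to realize such non-gating analysis is to fit a finite mixture to the event cloud, so each event receives soft membership probabilities instead of a hard in/out gate decision. The sketch below uses a Gaussian mixture from scikit-learn on synthetic two-channel data; it illustrates the idea, not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic two-channel "events": two cell populations with different means
    pop_a = rng.normal(loc=[2.0, 5.0], scale=0.4, size=(800, 2))
    pop_b = rng.normal(loc=[5.5, 1.5], scale=0.6, size=(1200, 2))
    events = np.vstack([pop_a, pop_b])

    # Fit a mixture instead of drawing manual gates; cluster assignments replace
    # the analyst-drawn gate, removing the analyst-to-analyst variability.
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(events)
    labels = gmm.predict(events)
    print("estimated subset frequencies:", np.bincount(labels) / len(labels))
    ```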

  3. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    National Research Council Canada - National Science Library

    Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono

    2017-01-01

    ... This study aims to identify the company's business model using the Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...

  4. Global land model development: time to shift from a plant functional type to a plant functional trait approach

    Science.gov (United States)

    Reich, P. B.; Butler, E. E.

    2015-12-01

    This project will advance global land models by shifting from the current plant functional type approach to one that better utilizes what is known about the importance and variability of plant traits, within a framework of simultaneously improving fundamental physiological relations that are at the core of model carbon cycling algorithms. Existing models represent the global distribution of vegetation types using the Plant Functional Type concept. Plant Functional Types are classes of plant species with similar evolutionary and life history, with presumably similar responses to environmental conditions like CO2, water and nutrient availability. Fixed properties for each Plant Functional Type are specified through a collection of physiological parameters, or traits. These traits, mostly physiological in nature (e.g., leaf nitrogen and longevity), are used in model algorithms to estimate ecosystem properties and/or drive calculated process rates. In most models, 5 to 15 functional types represent terrestrial vegetation; in essence, they assume there are a total of only 5 to 15 different kinds of plants on the entire globe. This assumption of constant plant traits captured within the functional type concept has serious limitations, as a single set of traits does not reflect the trait variation observed within and between species and communities. While this simplification was necessary in decades past, substantial improvement is now possible. Rather than assigning a small number of constant parameter values to all grid cells in a model, procedures will be developed that predict a frequency distribution of values for any given grid cell. Thus, the mean and variance, and how these change with time, will inform and improve model performance. The trait-based approach will improve land modeling by (1) incorporating patterns and heterogeneity of traits into model parameterization, thus evolving away from a framework that considers large areas of vegetation to have near-identical trait...

  5. "Dispersion modeling approaches for near road

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...

  6. ... and Models: A Self-Similar Approach

    Directory of Open Access Journals (Sweden)

    José Antonio Belinchón

    2013-01-01

    ...equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).

  7. Nonperturbative approach to the modified statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)

    1993-12-01

    The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the $(Q\bar{Q})$ and the non-self-conjugate $(Q\bar{q})$ mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.

  8. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

  9. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is here described, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  10. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.

  11. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependency of the risks, which shall help managers identify and classify important criteria and reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.

  12. Quantum Machine and SR Approach: a Unified Model

    CERN Document Server

    Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro

    2005-01-01

    The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasoning and as proofs of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.

  13. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  14. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body mod...

  15. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re...

  17. A market model for stochastic smile: a conditional density approach

    NARCIS (Netherlands)

    Zilber, A.

    2005-01-01

    The purpose of this paper is to introduce a new approach that allows us to construct no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate...

  18. Thermoplasmonics modeling: A Green's function approach

    Science.gov (United States)

    Baffou, Guillaume; Quidant, Romain; Girard, Christian

    2010-10-01

    We extend the discrete dipole approximation (DDA) and the Green's dyadic tensor (GDT) methods, previously dedicated to all-optical simulations, to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green's function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigating arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
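
    For orientation, in an unbounded homogeneous medium of thermal conductivity κ, the steady-state temperature produced by a heat source density q (here, the plasmonic heat dissipation) takes the familiar Green's-function form below; the paper's Laplace matrix inversion can be read as a discretized version of this relation, with corrections for the substrate.

    ```latex
    T(\mathbf{r}) = T_0 + \int G(\mathbf{r},\mathbf{r}')\, q(\mathbf{r}')\, \mathrm{d}^3 r',
    \qquad
    G(\mathbf{r},\mathbf{r}') = \frac{1}{4\pi\kappa\,\lvert\mathbf{r}-\mathbf{r}'\rvert}.
    ```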

  19. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Several research efforts have been coordinated by the academic program of Agricultural Business Management of the Universidad De La Salle (Bogotá D.C.) toward the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and the productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".

  20. Coupling approaches used in atmospheric entry models

    Science.gov (United States)

    Gritsevich, M. I.

    2012-09-01

    While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with found meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How to couple both fragmentation and ablation effects in the model, taking the real size distribution of fragments into account? How to specify and speed up the recovery of recently fallen meteorites, without letting weathering affect the samples too much? How big is the pre-atmospheric projectile to terminal body ratio in terms of their mass/volume? Which exact parameters besides initial mass define this ratio? More generally, how does an entering object affect Earth's atmosphere and (if applicable) Earth's surface? How to predict these impact consequences based on atmospheric trajectory data? How to describe atmospheric entry...

  1. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a...

  2. Bayesian Approach to Neuro-Rough Models for Modelling HIV

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov chain Monte Carlo method and the Metropolis criterion. When the model was tested on estimating the risk of HIV infection from the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian-formulated rough set model trained using the Markov chain Monte Carlo method and 62% obtained from a Bayesian-formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.

  3. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high-resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of $10^6$ times shorter than that of the original highly...

  4. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations, i.e., immediate, unintentional assessments of the wrongness of actions or persons, play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay-marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed.

  5. Continuous Molecular Fields Approach Applied to Structure-Activity Modeling

    CERN Document Server

    Baskin, Igor I

    2013-01-01

    The Method of Continuous Molecular Fields is a universal approach to predicting various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, and electron density functions). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of the joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions for further development of this approach are outlined at the end of the chapter.

  6. Reduction and technical simplification of testing protocol for walking based on repeatability analyses: An Interreg IVa pilot study

    Directory of Open Access Journals (Sweden)

    Nejc Sarabon

    2010-12-01

    The aim of this study was to define the most appropriate gait measurement protocols to be used in our future studies in the Mobility in Ageing project. A group of young healthy volunteers took part in the study. Each subject carried out a 10-metre walking test at five different speeds (preferred, very slow, very fast, slow, and fast). Each walking speed was repeated three times, making a total of 15 trials, which were carried out in a random order. Each trial was simultaneously analysed by three observers using three different technical approaches: a stopwatch, photocells, and an electronic kinematic dress. In analysing the repeatability of the trials, the results showed that of the five self-selected walking speeds, three (preferred, very fast, and very slow) had a significantly higher repeatability of the average walking velocity, step length, and cadence than the other two speeds. Additionally, the data showed that one of the three technical methods for gait assessment has better metric characteristics than the other two. In conclusion, based on repeatability and technical and organizational simplification, this study helped us to successfully define a simple and reliable walking test to be used in the main study of the project.

  7. A forward modeling approach for interpreting impeller flow logs.

    Science.gov (United States)

    Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T

    2010-01-01

    A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
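
    The selection step can be sketched in a few lines of Python. Here `candidate_fits` stands for the physically admissible conductivity-profile models that survived the screening, and the least-squares form of AIC is one common variant of the maximum-likelihood criterion the abstract describes; the names and structure are our own illustration.

    ```python
    import numpy as np

    def aic(rss, n, k):
        """Akaike's Information Criterion for a least-squares fit with n data
        points, k free parameters and residual sum of squares rss."""
        return n * np.log(rss / n) + 2 * k

    def select_model(flow, candidate_fits):
        """candidate_fits: iterable of (name, k, predicted_flow) tuples, one per
        physically admissible model; returns the AIC-best candidate name."""
        n = len(flow)
        scores = {name: aic(np.sum((flow - pred) ** 2), n, k)
                  for name, k, pred in candidate_fits}
        return min(scores, key=scores.get), scores
    ```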

  8. An Adaptive Approach to Schema Classification for Data Warehouse Modeling

    Institute of Scientific and Technical Information of China (English)

    Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun

    2007-01-01

    Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks -data-driven approach requires high levels of expertise and neglects the requirements of end users, while demand-driven approach lacks enterprise-wide vision and is regardless of existing models of underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into different subjects of the DW automatically. Benefited from the result of the schema elements classification, designers can model and construct a DW more easily.

  9. Using shape complexity to guide simplification of geospatial data for use in Location-based Services

    OpenAIRE

    Ying, Fangli; Mooney, Peter; Corcoran, Padraig; Winstanley, Adam C.

    2010-01-01

    A Java-based tool for determining if polygons require simplification before delivery to a mobile device using a Location-based Service (LBS) is described. Visualisation of vector-based spatial data on mobile devices is constrained by: small screen size; small data storage on the device; and potentially poor bandwidth connectivity. Our Java-based tool can download OpenStreetMap (OSM) XML data in real-time and calculate a number of shape complexity measures for each object in the...
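
    A minimal sketch of the idea in Python with shapely (the original tool is Java-based); the vertex-count and compactness thresholds are invented, and the real tool computes richer complexity measures on OSM geometries.

    ```python
    import math
    from shapely.geometry import Polygon

    def needs_simplification(poly, max_vertices=100, min_compactness=0.2):
        """Cheap complexity screen before shipping geometry to a mobile client.
        Compactness = 4*pi*A/P^2 is 1.0 for a circle; thresholds are invented."""
        n_vertices = len(poly.exterior.coords)
        compactness = 4.0 * math.pi * poly.area / (poly.length ** 2)
        return n_vertices > max_vertices or compactness < min_compactness

    def simplify_for_lbs(poly, tolerance=0.0005):
        """Douglas-Peucker simplification (tolerance in map units)."""
        if needs_simplification(poly):
            return poly.simplify(tolerance, preserve_topology=True)
        return poly
    ```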

  10. NET-SYNTHESIS: a software for synthesis, inference and simplification of signal transduction networks.

    Science.gov (United States)

    Kachalo, Sema; Zhang, Ranran; Sontag, Eduardo; Albert, Réka; DasGupta, Bhaskar

    2008-01-15

    We present software for the combined synthesis, inference and simplification of signal transduction networks. The main idea of our method lies in representing observed indirect causal relationships as network paths and using techniques from combinatorial optimization to find the sparsest graph consistent with all experimental observations. We illustrate the biological usability of our software by applying it to a previously published signal transduction network and by using it to synthesize and simplify a novel network corresponding to activation-induced cell death in large granular lymphocyte leukemia. NET-SYNTHESIS is freely downloadable from http://www.cs.uic.edu/~dasgupta/network-synthesis/
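
    For the special case of an acyclic network of purely activating observations, "the sparsest graph consistent with all observed reachability" reduces to a transitive reduction, which networkx provides; NET-SYNTHESIS itself handles the harder general case with cycles and inhibitory edges. A toy sketch with invented node names:

    ```python
    import networkx as nx

    # Observed (possibly indirect) activating relations as a directed graph.
    G = nx.DiGraph([("ligand", "kinase"), ("kinase", "TF"), ("ligand", "TF")])

    # For a DAG, the sparsest graph preserving all observed reachability is the
    # transitive reduction; the redundant indirect edge ligand -> TF drops out.
    Gr = nx.transitive_reduction(G)
    print(sorted(Gr.edges()))  # [('kinase', 'TF'), ('ligand', 'kinase')]
    ```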

  11. Simplification Resilient LDPC-Coded Sparse-QIM Watermarking for 3D-Meshes

    CERN Document Server

    Vasic, Bata

    2012-01-01

    We propose a blind watermarking scheme for 3-D meshes which combines sparse quantization index modulation (QIM) with deletion correction codes. The QIM operates on the vertices in rough concave regions of the surface, thus ensuring imperceptibility, while the deletion correction code recovers the data hidden in vertices that are removed by mesh optimization and/or simplification. The proposed scheme offers two orders of magnitude better performance in terms of recovered watermark bit error rate compared to existing schemes of similar payload and fidelity constraints.
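
    Scalar QIM itself is compact. The Python sketch below, with an arbitrary step size, shows embedding and decoding for a single coordinate; it deliberately leaves out the deletion-correcting code that is the paper's key contribution.

    ```python
    import numpy as np

    DELTA = 0.05  # quantization step; an assumption, tuned to fidelity constraints

    def qim_embed(x, bit, delta=DELTA):
        """Quantize x onto one of two interleaved lattices selected by the bit."""
        offset = 0.5 * delta * bit
        return delta * np.round((x - offset) / delta) + offset

    def qim_extract(y, delta=DELTA):
        """Decode by picking the lattice whose reconstruction is closer to y."""
        d0 = abs(y - qim_embed(y, 0, delta))
        d1 = abs(y - qim_embed(y, 1, delta))
        return int(d1 < d0)

    z = 1.2345  # e.g. one vertex coordinate in a rough concave region
    print(qim_extract(qim_embed(z, 0)), qim_extract(qim_embed(z, 1)))  # 0 1
    ```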

  12. An Analysis of Simplification Strategies in a Reading Textbook of Japanese as a Foreign Language

    Directory of Open Access Journals (Sweden)

    Kristina HMELJAK SANGAWA

    2016-06-01

    Reading is one of the bases of second language learning, and it can be most effective when the linguistic difficulty of the text matches the reader's level of language proficiency. The present paper reviews previous research on the readability and simplification of Japanese texts, and presents an analysis of a collection of simplified texts for learners of Japanese as a foreign language. The simplified texts are compared to their original versions to uncover the different strategies used to make the texts more accessible to learners. The list of strategies thus obtained can serve as useful guidelines for assessing, selecting, and devising texts for learners of Japanese as a foreign language.

  13. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as of a number of alternative routes.
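
    A small networkx sketch of the graph step, with invented states and barrier heights; choosing the path criterion (summed barriers here, versus the single highest barrier along a path) is a modeling decision, not something prescribed by the abstract.

    ```python
    import networkx as nx

    # Toy transition network: nodes are intermediate states, edge weights are
    # (invented) barrier heights between states connected by a chemical or
    # conformational step.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("reactant", "I1", 12.0), ("I1", "I2", 6.5), ("I2", "product", 9.0),
        ("reactant", "I3", 15.0), ("I3", "product", 4.0),
    ])

    # Route minimizing the summed barriers, found via shortest-path search.
    path = nx.shortest_path(G, "reactant", "product", weight="weight")
    print(path)  # ['reactant', 'I3', 'product']: total 19.0 beats 27.5
    ```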

  14. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  15. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. ... Population balance models (PBM) allow for a mathematical description of distributed cell properties within microbial populations, and are used to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Cell total protein content distributions (a measure of cell mass) have been ... This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight can be gained on the interplay between flow and the dynamics of ... ethanol and biomass throughout the reactor.

  16. Hamiltonian approach to hybrid plasma models

    CERN Document Server

    Tronci, Cesare

    2010-01-01

    The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.

  17. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  18. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  20. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  1. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

    Full Text Available The use of design patterns usually changes the approach of software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...

  2. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w

  3. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
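
    The contrast between the two estimators can be made concrete on a toy model. In the sketch below, the risk law P(DCS | dose) = 1 - exp(-theta * dose), the five observations, and the flat prior are all invented for illustration; actual decompression models are far richer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
hits  = np.array([0,   0,   1,   1,   1  ])      # observed DCS outcomes

def neg_log_lik(theta):
    p = np.clip(1.0 - np.exp(-theta * doses), 1e-12, 1 - 1e-12)
    return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))

# Maximum likelihood: one best-fit value.
mle = minimize_scalar(neg_log_lik, bounds=(1e-6, 5.0), method="bounded").x

# Bayesian: grid posterior under a flat prior; a credible interval follows.
grid = np.linspace(1e-6, 5.0, 2000)
post = np.exp(-np.array([neg_log_lik(t) for t in grid]))
post /= post.sum()
cdf = np.cumsum(post)
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"MLE theta = {mle:.3f}; 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```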

  4. Research on point cloud simplification with boundary feature preservation

    Institute of Scientific and Technical Information of China (English)

    赵伟玲; 谢雪冬; 程俊廷

    2013-01-01

    This paper proposes a point cloud simplification algorithm that preserves boundary features, for effective simplification of scattered point clouds. The algorithm first builds the spatial topology of the scattered point cloud with a 3D grid subdivision and computes the k-nearest neighbors of each data point. The curvature and a directional normal vector of each point are then obtained by ball fitting, boundary points are identified and reserved using the ratio of the number of projected points, and non-boundary points are classified against thresholds set for the specific situation. Simplification is completed by comparing each point's curvature with the mean curvature and by the proportion of reserved points among its k-nearest neighbors. Experimental results on typical point clouds with various surface features show that the algorithm simplifies point clouds directly and effectively while preserving the detail features of the point cloud model, with a simplification proportion of 25%-40%. The method meets the simplification requirements of different kinds of point clouds and improves computational efficiency.
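
    A loose sketch of this pipeline is given below. It keeps the structure (k-nearest neighbors, a curvature test, boundary reservation) but substitutes PCA "surface variation" for the ball-fitting curvature and an angular-gap test for the projected-point-count boundary test; all thresholds are invented.

```python
# Boundary- and curvature-aware reduction sketch. PCA "surface variation"
# substitutes for the paper's ball-fitting curvature, and an angular-gap
# test for its projected-point-count boundary test; thresholds are invented.
import numpy as np
from scipy.spatial import cKDTree

def simplify(points, k=12, curv_keep=1.2, gap_frac=0.6):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)               # self + k neighbors
    keep = np.zeros(len(points), dtype=bool)
    curv = np.zeros(len(points))
    for i, nb in enumerate(idx):
        nbrs = points[nb[1:]] - points[i]
        w, v = np.linalg.eigh(np.cov(nbrs.T))          # ascending eigenvalues
        curv[i] = w[0] / max(w.sum(), 1e-12)           # surface variation
        uv = nbrs @ v[:, 1:]                           # local tangent plane
        ang = np.sort(np.arctan2(uv[:, 1], uv[:, 0]))
        gaps = np.diff(np.concatenate([ang, ang[:1] + 2 * np.pi]))
        if gaps.max() > gap_frac * np.pi:              # one-sided neighbors:
            keep[i] = True                             # reserve boundary point
    keep |= curv > curv_keep * curv.mean()             # reserve sharp features
    return points[keep]

cloud = np.random.rand(2000, 3)
print(f"{len(cloud)} -> {len(simplify(cloud))} points kept")
```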

  5. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia from 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals of each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
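
    A minimal sketch of such a stepwise fit, using statsmodels' UnobservedComponents and synthetic monthly counts in place of the Malaysian data, might look like this:

```python
# Sketch of a stepwise structural-time-series fit with statsmodels'
# UnobservedComponents. The monthly counts are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(144)                                     # 12 years of months
y = 400 + 0.5 * t + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, 144)

candidates = {
    "local level":            dict(level="local level"),
    "local level + seasonal": dict(level="local level", seasonal=12),
    "local linear trend":     dict(level="local linear trend"),
}
fits = {name: sm.tsa.UnobservedComponents(y[:-12], **spec).fit(disp=False)
        for name, spec in candidates.items()}
best = min(fits, key=lambda name: fits[name].aic)      # smallest AIC wins
print("best specification:", best)
print("validation forecast for the held-out year:",
      np.round(fits[best].forecast(12), 1))
```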

  6. Functional state modelling approach validation for yeast and bacteria cultivations

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778

  7. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

    Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...

  9. Molecular Modeling Approach to Cardiovascular Disease Targetting

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Akula,

    2010-05-01

    Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart diseases, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction or heart attack. Studies have associated obesity and other components of metabolic syndrome, cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining a high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ArgusLab; the drug molecule was validated on the basis of QSAR properties and CAChe for this protein through CADD.

  10. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers; virtues were traditionally considered culture-specific and relativistic, and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been considered seriously among organizational researchers. The proposed study examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees at Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources have the largest total impact (0.844) on virtuous organization.

  11. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics, including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.

  12. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach, and chemical master equation based techniques with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
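
    The "naive approach" in this comparison is easy to reproduce on the immigration-death model: simulate one stochastic realization, then least-squares-fit the deterministic mean. The sketch below does exactly that with invented rates; the specialized MSS and Bayesian estimators are beyond its scope.

```python
# The naive route: one Gillespie realization of an immigration-death
# process, then ordinary least squares against the deterministic mean.
import numpy as np
from scipy.optimize import curve_fit

def gillespie(imm, death, t_end=10.0, seed=1):
    rng, t, n = np.random.default_rng(seed), 0.0, 0
    ts, ns = [0.0], [0]
    while t < t_end:
        rates = np.array([imm, death * n])             # birth, death
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < rates[0] / total else -1
        ts.append(t); ns.append(n)
    return np.array(ts), np.array(ns)

ts, ns = gillespie(imm=10.0, death=0.5)

def ode_mean(t, imm, death):                           # n(0) = 0 solution
    return (imm / death) * (1.0 - np.exp(-death * t))

(imm_hat, death_hat), _ = curve_fit(ode_mean, ts, ns, p0=(1.0, 1.0))
print(f"true (10.00, 0.50), estimated ({imm_hat:.2f}, {death_hat:.2f})")
```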

  13. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, the user interface is built from components and the event-based interaction between them.

  15. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  16. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches, based on different mathematical models, to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also seeks viewpoints on the same issue from different angles, from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
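
    A minimal example of the deterministic side of such models is the basic within-host system for target cells, infected cells and virus, with a drug-efficacy parameter standing in for a therapeutic control; the sketch below uses textbook-style placeholder parameters, not values from the book.

```python
# Basic deterministic within-host HIV model (target cells T, infected
# cells I, virus V) with drug efficacy eps as a stand-in control.
from scipy.integrate import solve_ivp

lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.7, 100.0, 3.0

def hiv(t, y, eps):
    T, I, V = y
    dT = lam - d * T - (1 - eps) * beta * T * V
    dI = (1 - eps) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

for eps in (0.0, 0.6, 0.9):                 # increasing drug efficacy
    sol = solve_ivp(hiv, (0, 200), [1e6, 0.0, 1e-3], args=(eps,), rtol=1e-8)
    print(f"eps = {eps}: viral load after 200 days ~ {sol.y[2, -1]:.3g}")
```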

  17. Asteroid modeling for testing spacecraft approach and landing.

    Science.gov (United States)

    Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick

    2014-01-01

    Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.

  18. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  19. Heuristic approaches to models and modeling in systems biology

    NARCIS (Netherlands)

    MacLeod, Miles

    2016-01-01

    Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standards that models must meet.

  20. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means of validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, and software models.

  1. Phonological simplifications, apraxia of speech and the interaction between phonological and phonetic processing.

    Science.gov (United States)

    Galluzzi, Claudia; Bureca, Ivana; Guariglia, Cecilia; Romani, Cristina

    2015-05-01

    Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties - such as a predominance of errors on consonants rather than vowels - but not with other measures - such as the rate of words reproduced correctly or the rate of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they differ across patients. Instead, the different patterns: (1) provide evidence for separate impairments and for the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they only apply in patients with associated articulatory difficulties.

  2. A New Detection Approach Based on the Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua

    2006-01-01

    The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
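
    A compressed sketch of this kind of detector: logistic regression is the maximum-entropy classifier for discrete labels, applied here to discretized synthetic features. Quantile binning stands in for the paper's minimal-entropy partitioning, and the toy data for KDD CUP 1999.

```python
# Maximum-entropy detector sketch: logistic regression (the maximum-entropy
# classifier) over discretized features. Data and the toy intrusion rule
# are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # e.g. duration, bytes, ...
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)    # 1 = intrusion (toy rule)

disc = KBinsDiscretizer(n_bins=8, encode="onehot-dense", strategy="quantile")
Xd = disc.fit_transform(X)

maxent = LogisticRegression(max_iter=1000).fit(Xd[:800], y[:800])
print("held-out accuracy:", maxent.score(Xd[800:], y[800:]))
```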

  3. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    Directory of Open Access Journals (Sweden)

    Eser ÖRDEM

    2013-06-01

    Full Text Available Abstract This study proposes the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach offers an alternative way to solve this problem. Unlike the grammar translation method, this approach supports the idea that language is composed not of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model, because its main purpose is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue

  4. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.

  5. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach for a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of the fundamental child biomechanical data needed to fully develop such models, a hybrid validation approach was adopted.

  6. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  7. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Science.gov (United States)

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  8. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...

  10. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.

  12. Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics

    Energy Technology Data Exchange (ETDEWEB)

    Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir

    2016-06-07

    In this study, for the first time, a hybrid continuum-atomistic based model is proposed for electrokinetics, electroosmosis and electrophoresis, through nanochannels. Although continuum based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), ionic concentration is too low in nanochannels for the continuum assumption to be valid. On the other hand, non-continuum based approaches are too time-consuming and are therefore limited to simple geometries in practice. Here, to provide an efficient hybrid continuum-atomistic method of modelling the electrokinetics in nanochannels, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated by the results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are investigated carefully. The hybrid continuum-atomistic method is a promising approach to model more complicated geometries and to investigate more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.

  13. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nature of the data.

  14. Asteroid fragmentation approaches for modeling atmospheric energy deposition

    Science.gov (United States)

    Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.

    2017-03-01

    During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.
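
    All of these models share a single-body core: integrate drag and ablation along the trajectory and differentiate the kinetic energy with respect to altitude. A bare-bones sketch with rough, Chelyabinsk-like placeholder values (not fitted parameters) follows.

```python
# Single-body entry sketch: integrate drag and ablation down the trajectory
# and report kinetic energy deposited per metre of altitude.
import numpy as np

def deposition_profile(m=1.2e7, v=19e3, angle_deg=18.0, diameter=19.0,
                       Cd=1.0, sigma=1e-8, rho0=1.225, H=7160.0):
    area = np.pi * (diameter / 2) ** 2
    h, dt, theta = 95e3, 1e-3, np.radians(angle_deg)
    hs, dE = [], []
    while h > 20e3 and m > 0 and v > 1e3:
        rho = rho0 * np.exp(-h / H)                    # exponential atmosphere
        drag = 0.5 * Cd * rho * area * v ** 2
        e_before = 0.5 * m * v ** 2
        m -= sigma * drag * v * dt                     # ablation ~ sigma*F*v
        v -= (drag / m) * dt                           # aerodynamic deceleration
        dh = v * np.sin(theta) * dt
        h -= dh
        hs.append(h)
        dE.append((e_before - 0.5 * m * v ** 2) / dh)  # J per metre of altitude
    return np.array(hs), np.array(dE)

hs, dE = deposition_profile()
print(f"peak deposition ~{dE.max():.2e} J/m at {hs[dE.argmax()] / 1e3:.1f} km")
```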

  15. A Bayesian Approach for Analyzing Longitudinal Structural Equation Models

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum

    2011-01-01

    This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…

  16. An Empirical-Mathematical Modelling Approach to Upper Secondary Physics

    Science.gov (United States)

    Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein

    2008-01-01

    In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…

  17. An Alternative Approach for Nonlinear Latent Variable Models

    Science.gov (United States)

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  20. A multilevel approach to modeling of porous bioceramics

    Science.gov (United States)

    Mikushina, Valentina A.; Sidorenko, Yury N.

    2015-10-01

    The paper is devoted to the discussion of multiscale models of heterogeneous materials built on multilevel principles. The specific feature of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the reinforcement structure of the material into account. In the framework of such a model, different physical processes influencing the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the macroscopic ultimate strength of the composite. As an example, the particular problem of studying the mechanical properties of a biocomposite representing a porous ceramic matrix filled with cortical bone tissue is discussed.

  1. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach; the linearity of the formulas is explained by the inertia of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control.

  2. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Various approaches have therefore been proposed to overcome the difficulty. This study surveys and compares multidimensional modeling approaches and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.

  3. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences, yet they are often treated as two unrelated topics in the literature. This book presents a unified framework for analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the importance of these two methods.

  4. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  5. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming more and more popular for the purpose of software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. The result of this analysis shows the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects allowing one to build the spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To prove the proposed metamodelling approach, we consider the developed prototypes of software tools.

  6. Social learning in Models and Cases - an Interdisciplinary Approach

    Science.gov (United States)

    Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger

    2016-04-01

    Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research, and present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, identifying and synthesizing its key dimensions. Next, we bridge formal, generalising modelling approaches to social learning processes and descriptive, individualising case-study approaches by condensing the case-study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then exemplarily vary these functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, methodologically and empirically.

  7. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  8. Building enterprise reuse program--A model-based approach

    Institute of Scientific and Technical Information of China (English)

    梅宏; 杨芙清

    2002-01-01

    Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse, and in this approach building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. There already exist some component models for reuse from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model -- the FLP model for reusable components -- is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the time points of reusing components in the development process, and the means needed to present components in terms of abstraction level, logic granularity and presentation media. Being the basis on which management and technical decisions are made, our model is used as the kernel model to initialize and normalize a systematic enterprise reuse program.

  9. Current approaches to model extracellular electrical neural microstimulation

    Directory of Open Access Journals (Sweden)

    Sébastien eJoucla

    2014-02-01

    Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient detail to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described. First, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron's response in a cable equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the frame of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
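
    The end point of the classical hybrid pipeline can be sketched very simply: given the extracellular potential along a fiber, the discrete second difference (Rattay's activating function) indicates where stimulation tends to depolarize the membrane. Electrode geometry, current and resistivity below are assumed values, and the ideal point-source potential replaces a full finite element solution.

```python
# Activating-function sketch: the discrete second difference of the
# extracellular potential Ve along a fiber predicts de/hyperpolarization.
import numpy as np

rho_e = 3.0                               # extracellular resistivity, ohm*m
I = -100e-6                               # cathodic stimulus current, A
elec_x, elec_z = 0.0, 1e-3                # electrode 1 mm above the axon

x = np.linspace(-5e-3, 5e-3, 201)         # axon nodes along the x-axis
r = np.sqrt((x - elec_x) ** 2 + elec_z ** 2)
Ve = rho_e * I / (4 * np.pi * r)          # ideal point-source potential

f = np.zeros_like(Ve)
f[1:-1] = Ve[:-2] - 2 * Ve[1:-1] + Ve[2:]  # activating function
print(f"strongest depolarization at x = {x[f.argmax()] * 1e3:.2f} mm")
```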

  10. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  11. Application of the Interface Approach in Quantum Ising Models

    OpenAIRE

    Sen, Parongama

    1997-01-01

    We investigate phase transitions in the Ising model and the ANNNI model in transverse field using the interface approach. The exact result of the Ising chain in a transverse field is reproduced. We find that apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...

  12. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    This thesis (Benjamin K. Grossi, December 2016; thesis advisor: Kenneth Doerr) proposes a variable flow modelling approach to military end strength planning. A System Dynamics (SD) model is considered ideal for strategic analysis as it encompasses all the behaviours of a system and how those behaviours are influenced; Markov chain theory is described, following Wang, as a mathematical tool used to investigate the dynamic behaviours of a system in discrete time.

  13. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    2012-01-01

    This work on reusable booster system life cycle cost (LCC) modeling, by Edgar Zapata (National Aeronautics and Space Administration, Kennedy Space Center), was performed with Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). It included the creation of a new cost estimating model and an LCC analysis, drawing on lean new product development (NPD) practices, lean production and operations practices, and the Supply Chain Operations Reference (SCOR) model and its best practices.

  14. Budget impact analysis of the simplification to atazanavir + ritonavir + lamivudine dual therapy of HIV-positive patients receiving atazanavir-based triple therapies in Italy starting from data of the Atlas-M trial

    Science.gov (United States)

    Restelli, Umberto; Fabbiani, Massimiliano; Di Giambenedetto, Simona; Nappi, Carmela; Croce, Davide

    2017-01-01

    Background This analysis aimed at evaluating the impact of a therapeutic strategy of treatment simplification to atazanavir (ATV) + ritonavir (r) + lamivudine (3TC) in virologically suppressed patients receiving ATV + r + 2 nucleoside reverse transcriptase inhibitors (NRTIs) on the budget of the Italian National Health Service (NHS). Methods A budget impact model with a 5-year time horizon was developed from the Italian NHS perspective, based on the clinical data of the Atlas-M trial at 48 weeks (in terms of percentage of patients experiencing virologic failure and adverse events). A scenario in which the simplification strategy was not considered was compared with three scenarios in which, among a target population of 1,892 patients, different simplification strategies were taken into consideration in terms of the percentage of patients simplified on a yearly basis among those eligible for simplification. The costs considered were direct medical costs related to antiretroviral drugs, adverse events management, and monitoring activities. Results The percentage of patients of the target population receiving ATV+r+3TC varies among the scenarios, between 18.7% and 46.9% in year 1, increasing up to 56.3% and 84.4% in year 5. The antiretroviral treatment simplification strategy considered would lead to lower costs for the Italian NHS over a 5-year time horizon of between −28.7 million € and −16.0 million €, with a reduction of costs between −22.1% (−3.6 million €) and −8.8% (−1.4 million €) in year 1 and up to −39.9% (−6.9 million €) and −26.6% (−4.6 million €) in year 5. Conclusion The therapy simplification for patients receiving ATV+r+2 NRTIs to ATV+r+3TC at a national level would lead to a reduction of direct medical costs over a 5-year period for the Italian NHS. PMID:28280375

  15. THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?

    OpenAIRE

    Ridley-Duff, R.

    2015-01-01

    This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...

  16. A computational language approach to modeling prose recall in schizophrenia.

    Science.gov (United States)

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
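
    For readers unfamiliar with the two classes of language features, the Python sketch below illustrates them on toy sentences: an n-gram overlap score, and a latent-semantic similarity built from TF-IDF followed by truncated SVD (a standard LSA construction). The sentences, corpus and dimensionality are invented; the paper's actual training corpus and scoring details are not reproduced here.

        # Toy automated recall scoring: n-gram overlap plus LSA-style similarity.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        story = "the fisherman sailed out at dawn and lost his nets in the storm"
        recall = "a fisherman went sailing at dawn and the storm took his nets"
        background = "boats and nets are the tools of the fishing trade"

        def ngram_overlap(a, b, n=2):
            """Fraction of the story's n-grams reproduced in the recall."""
            grams = lambda s: {tuple(s.split()[i:i + n])
                               for i in range(len(s.split()) - n + 1)}
            ga = grams(a)
            return len(ga & grams(b)) / len(ga)

        # LSA: TF-IDF vectors projected onto a low-rank semantic space.
        tfidf = TfidfVectorizer().fit_transform([story, recall, background])
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
        semantic = cosine_similarity(lsa[0:1], lsa[1:2])[0, 0]

        print(f"bigram overlap: {ngram_overlap(story, recall):.2f}")
        print(f"semantic similarity: {semantic:.2f}")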

  17. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  18. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partition are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
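
    The partition-as-model idea is concrete enough to sketch: enumerate every partition of the treatment labels, fit a common mean per cluster, and score each partition as a separate model. The sketch below scores with BIC as a simple frequentist stand-in for the paper's Bayesian priors over the model space; the data are toy numbers.

        # Every set partition of the treatments is one candidate model.
        import math

        data = {"A": [5.1, 4.9, 5.3], "B": [5.0, 5.2, 5.1], "C": [7.8, 8.1, 7.9]}

        def partitions(items):
            """Recursively enumerate all set partitions of a list."""
            if not items:
                yield []
                return
            first, rest = items[0], items[1:]
            for part in partitions(rest):
                for i in range(len(part)):
                    yield part[:i] + [[first] + part[i]] + part[i + 1:]
                yield [[first]] + part

        def bic(partition):
            """BIC under a Gaussian model with one shared mean per cluster."""
            n = sum(len(v) for v in data.values())
            rss = 0.0
            for cluster in partition:
                ys = [y for t in cluster for y in data[t]]
                mu = sum(ys) / len(ys)
                rss += sum((y - mu) ** 2 for y in ys)
            k = len(partition) + 1              # cluster means + variance
            loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
            return k * math.log(n) - 2 * loglik

        best = min(partitions(list(data)), key=bic)
        print("best partition:", best)   # toy data favour [['A', 'B'], ['C']]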

  19. Towards a whole-cell modeling approach for synthetic biology

    Science.gov (United States)

    Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.

    2013-06-01

    Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and found agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.

  20. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  1. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  2. An algebraic approach to modeling in software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  3. DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL

    Directory of Open Access Journals (Sweden)

    P.Malarvizhi

    2011-12-01

    Full Text Available The web is a large repository of information and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is through automatic classification of web pages. Although lots of automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. In order to overcome this challenge, we have proposed a parallel algorithm based on the MapReduce programming model to automatically categorize the web pages. This approach incorporates three concepts: a web crawler, the MapReduce programming model and the proposed web page categorization approach. Initially, we have utilized a web crawler to mine the World Wide Web and the crawled web pages are then directly given as input to the MapReduce programming model. Here the MapReduce programming model adapted to our proposed web page categorization approach finds the appropriate category of the web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
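
    The two phases of such a categorizer are easy to sketch in miniature. The keyword lists and pages below are invented, and the pipeline runs in-process; a real deployment would distribute the same two functions over crawled pages on a Hadoop-style runtime.

        # Toy map/reduce pipeline for web page categorization.
        from collections import defaultdict

        KEYWORDS = {
            "sports": {"match", "team", "league"},
            "finance": {"stock", "market", "shares"},
        }

        pages = {
            "http://example.com/a": "the team won the league match",
            "http://example.com/b": "stock market shares rallied today",
        }

        def map_phase(url, text):
            """Emit (category, url) for the category with most keyword hits."""
            tokens = set(text.split())
            best = max(KEYWORDS, key=lambda c: len(KEYWORDS[c] & tokens))
            yield best, url

        def reduce_phase(pairs):
            """Group urls under their assigned category."""
            out = defaultdict(list)
            for category, url in pairs:
                out[category].append(url)
            return dict(out)

        intermediate = [kv for url, text in pages.items()
                        for kv in map_phase(url, text)]
        print(reduce_phase(intermediate))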

  4. Reduced models accounting for parallel magnetic perturbations: gyrofluid and finite Larmor radius-Landau fluid approaches

    Science.gov (United States)

    Tassi, E.; Sulem, P. L.; Passot, T.

    2016-12-01

    Reduced models are derived for a strongly magnetized collisionless plasma at scales which are large relative to the electron thermal gyroradius and in two asymptotic regimes. One corresponds to cold ions and the other to far sub-ion scales. By including the electron pressure dynamics, these models improve the Hall reduced magnetohydrodynamics (MHD) and the kinetic Alfvén wave model of Boldyrev et al. (Astrophys. J., vol. 777, 2013, p. 41), respectively. We show that the two models can be obtained either within the gyrofluid formalism of Brizard (Phys. Fluids, vol. 4, 1992, pp. 1213-1228) or as suitable weakly nonlinear limits of the finite Larmor radius (FLR)-Landau fluid model of Sulem and Passot (J. Plasma Phys., vol. 81, 2015, 325810103) which extends anisotropic Hall MHD by retaining low-frequency kinetic effects. It is noticeable that, at the far sub-ion scales, the simplifications originating from the gyroaveraging operators in the gyrofluid formalism and leading to subdominant ion velocity and temperature fluctuations, correspond, at the level of the FLR-Landau fluid, to cancellation between hydrodynamic contributions and ion finite Larmor radius corrections. Energy conservation properties of the models are discussed and an explicit example of a closure relation leading to a model with a Hamiltonian structure is provided.

  5. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Jeremiah D. DENG

    2015-04-01

    Full Text Available Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of students from both telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations has been developed. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in bringing out comparable and satisfactory performance from students of different educational backgrounds.

  6. A Spatial Clustering Approach for Stochastic Fracture Network Modelling

    Science.gov (United States)

    Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.

    2014-07-01

    Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach.
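
    A stripped-down 2-D version of the fitting step conveys the idea: a single fracture, modelled as a line segment, is annealed against a synthetic point cloud by minimizing the sum of point-to-segment distances. The cooling schedule and data below are illustrative, and the full method's modification scheme and clustering are omitted.

        # Simulated annealing fit of one line-segment "fracture" to a point cloud.
        import math, random

        random.seed(1)
        # synthetic "seismic" cloud scattered around the segment (0,0)-(10,10)
        cloud = [(t + random.gauss(0, 0.3), t + random.gauss(0, 0.3))
                 for t in [random.uniform(0, 10) for _ in range(200)]]

        def dist_to_segment(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy)
                                  / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        def cost(seg):
            a, b = seg
            return sum(dist_to_segment(p, a, b) for p in cloud)

        seg = ((0.0, 5.0), (10.0, 5.0))       # deliberately poor initial guess
        T = 5.0
        while T > 1e-3:
            (ax, ay), (bx, by) = seg
            cand = ((ax + random.gauss(0, 0.5), ay + random.gauss(0, 0.5)),
                    (bx + random.gauss(0, 0.5), by + random.gauss(0, 0.5)))
            delta = cost(cand) - cost(seg)
            if delta < 0 or random.random() < math.exp(-delta / T):
                seg = cand                    # accept improving/uphill move
            T *= 0.995                        # geometric cooling
        print("fitted segment:", seg, "cost:", round(cost(seg), 1))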

  7. Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2017-01-01

    Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5% percentile of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.

  8. The Research of Simplification Of 1.9 TDI Diesel Engine Heat Release Parameters Determination

    Directory of Open Access Journals (Sweden)

    Justas Žaglinskis

    2014-12-01

    Full Text Available This article presents a modified methodology for determining the heat release parameters of an Audi 1.9 TDI 1Z diesel engine. The AVL BOOST BURN and IMPULS software was used to process the data and to simulate the engine work process, solving the reverse task of determining indicated pressure from heat release data. The methodology of T. Bulaty and W. Glanzman was modified in order to simplify the determination of heat release parameters: the maximal cylinder pressure, which requires additional expensive equipment to measure, was replaced by an objective indicator, the exhaust gas temperature. This modification simplified the experimental engine tests while keeping the simulated main engine operating parameters within an error range of up to 2%. The results are assessed as an important step towards simplified engine testing under field conditions.
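
    As a rough illustration of this kind of calibration (not the article's actual procedure), the sketch below describes heat release with a standard Wiebe function and tunes the combustion duration until an assumed exhaust-gas-temperature proxy matches a measured value.

        # Calibrate Wiebe combustion duration against exhaust gas temperature.
        import math

        def wiebe_mfb(theta, start, duration, a=6.9, m=2.0):
            """Mass fraction burned at crank angle theta (standard Wiebe form)."""
            if theta < start:
                return 0.0
            x = (theta - start) / duration
            return 1.0 - math.exp(-a * min(x, 1.0) ** (m + 1))

        def exhaust_temp(duration):
            """Toy proxy: fuel burning late (long duration) raises exhaust temp."""
            late_burn = 1.0 - wiebe_mfb(40.0, start=-5.0, duration=duration)
            return 650.0 + 400.0 * late_burn      # kelvin, illustrative

        measured_T = 720.0                        # assumed measured value [K]

        lo, hi = 20.0, 120.0                      # bisection on duration [deg CA]
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if exhaust_temp(mid) < measured_T:
                lo = mid                          # too cool -> burn lasts longer
            else:
                hi = mid
        print(f"calibrated combustion duration: {0.5 * (lo + hi):.1f} deg CA")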

  9. Arthroscopic anatomical reconstruction of the lateral ankle ligaments: A technical simplification.

    Science.gov (United States)

    Lopes, R; Decante, C; Geffroy, L; Brulefert, K; Noailles, T

    2016-12-01

    Anatomical reconstruction of the lateral ankle ligaments has become a pivotal component of the treatment strategy for chronic ankle instability. The recently described arthroscopic version of this procedure is indispensable to ensure that concomitant lesions are appropriately managed, yet remains technically demanding. Here, we describe a simplified variant involving percutaneous creation of the calcaneal tunnel for the distal attachment of the calcaneo-fibular ligament. The rationale for this technical stratagem was provided by a preliminary cadaver study that demonstrated a correlation between the lateral malleolus and the distal footprint of the calcaneo-fibular ligament. The main objectives are simplification of the operative technique and decreased injury to tissues whose function is crucial to the recovery of proprioception. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Protein chain pair simplification under the discrete Fréchet distance.

    Science.gov (United States)

    Wylie, Tim; Zhu, Binhai

    2013-01-01

    For protein structure alignment and comparison, a lot of work has been done using RMSD as the distance measure, which has drawbacks under certain circumstances. Thus, the discrete Fréchet distance was recently applied to the problem of protein (backbone) structure alignment and comparison with promising results. For this problem, visualization is also important because protein chain backbones can have as many as 500-600 α-carbon atoms, which constitute the vertices in the comparison. Even with an excellent alignment, the similarity of two polygonal chains can be difficult to visualize unless the chains are nearly identical. Thus, the chain pair simplification problem (CPS-3F) was proposed in 2008 to simultaneously simplify both chains with respect to each other under the discrete Fréchet distance. The complexity of CPS-3F is unknown, so heuristic methods have been developed. Here, we define a variation of CPS-3F, called the constrained CPS-3F problem (CPS-3F+), and prove that it is polynomially solvable by presenting a dynamic programming solution, which we then prove is a factor-2 approximation for CPS-3F. We then compare the CPS-3F+ solutions with previous empirical results, and further demonstrate some of the benefits of the simplified comparisons. Chain pair simplification based on the Hausdorff distance (CPS-2H) is known to be NP-complete, and here we prove that the constrained version (CPS-2H+) is also NP-complete. Finally, we discuss future work and implications along with a software library implementation, named the Fréchet-based Protein Alignment & Comparison Toolkit (FPACT).
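
    The discrete Fréchet distance at the heart of CPS-3F has a classic O(nm) dynamic program due to Eiter and Mannila; a minimal version on toy 2-D chains is sketched below. Real inputs would be α-carbon coordinates, and the chain pair simplification itself adds a further layer on top of this primitive.

        # Discrete Frechet distance between two polygonal chains (toy data).
        import math
        from functools import lru_cache

        P = [(0, 0), (1, 0), (2, 1), (3, 1)]
        Q = [(0, 1), (1, 1), (2, 2), (3, 2)]

        @lru_cache(maxsize=None)
        def dfd(i, j):
            """Discrete Frechet distance between P[:i+1] and Q[:j+1]."""
            if i == 0 and j == 0:
                return math.dist(P[0], Q[0])
            if i == 0:
                return max(dfd(0, j - 1), math.dist(P[0], Q[j]))
            if j == 0:
                return max(dfd(i - 1, 0), math.dist(P[i], Q[0]))
            return max(min(dfd(i - 1, j), dfd(i - 1, j - 1), dfd(i, j - 1)),
                       math.dist(P[i], Q[j]))

        print(f"discrete Frechet distance: {dfd(len(P) - 1, len(Q) - 1):.3f}")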

  11. Central nervous system HIV infection in "less-drug regimen" antiretroviral therapy simplification strategies.

    Science.gov (United States)

    Ferretti, Francesca; Gianotti, Nicola; Lazzarin, Adriano; Cinque, Paola

    2014-02-01

    Less-drug regimens (LDR) refer to combinations of either two antiretroviral drugs or ritonavir-boosted protease inhibitor (PI) monotherapy. They may represent a simplification strategy in patients with persistently suppressed human immunodeficiency virus (HIV) viremia, with the main benefits of reducing drug-related toxicities and costs. Systemic virological efficacy of LDR is slightly lower as compared with combined antiretroviral therapy (cART), but patients with failure do not usually develop drug resistance and resuppress HIV replication after reintensification. A major concern of LDR is the lower efficacy in the virus reservoirs, especially in the central nervous system (CNS), where viral compartmentalization and independent evolution of infection may lead to CNS viral escape, often associated with neurologic symptoms. The authors reviewed studies of virological and functional CNS efficacy of LDR, particularly of boosted PI monotherapy regimens, for which more information is available. Symptomatic viral CSF escape was observed mainly in PI/r monotherapy patients with plasma failure and low nadir CD4+ cell counts, and resolved upon reintroduction of triple drug cART, whereas asymptomatic viral failure in CSF was not significantly more frequent in patients on PI/r monotherapy compared with patients on standard cART. In addition, there was no difference in functional outcomes between PI monotherapy and cART patients, irrespective of CSF viral escape. More data are needed on the CNS effect of dual ART regimens and, in general, on long-term efficacy of LDR. Simplification with LDR may be an attractive option in patients with suppressed viral load, if they are well selected and monitored for potential CNS complications.

  12. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple models (MM) approach was proposed to model complex industrial processes using Fuzzy Support Vector Machines (F-SVMs). When applied to a pH neutralization titration experiment, the F-SVMs MM approach not only provides satisfactory approximation and generalization properties, but also achieves superior performance over the USOCPN multiple modeling method and a single modeling method based on standard SVMs.

  13. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box...

  14. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  15. The standard data model approach to patient record transfer.

    Science.gov (United States)

    Canfield, K; Silva, M; Petrucci, K

    1994-01-01

    This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry standard tools and without excessive programming efforts. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.

  16. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
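
    In practice the AIC machinery amounts to a few lines per candidate model, as the sketch below shows for two polynomial fits scored with the small-sample corrected AICc; the data are synthetic and the formulas are the standard Gaussian-likelihood versions.

        # Compare two candidate models with bias-corrected AIC (AICc).
        import math
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 40)
        y = 2.0 + 3.0 * x + rng.normal(0, 0.5, x.size)   # truth is linear

        def aicc(y, yhat, k):
            n = y.size
            rss = float(np.sum((y - yhat) ** 2))
            loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
            aic = 2 * k - 2 * loglik
            return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

        for degree in (1, 5):
            coef = np.polyfit(x, y, degree)
            yhat = np.polyval(coef, x)
            k = degree + 2                   # polynomial coefficients + variance
            print(f"degree {degree}: AICc = {aicc(y, yhat, k):.1f}")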

  17. Real-space renormalization group approach to the Anderson model

    Science.gov (United States)

    Campbell, Eamonn

    Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. As a non-interacting disordered model, it makes sense to consider the Anderson model as a first step in developing an approximate method of solution to the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.

  18. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
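
    A miniature version of model convolution is easy to write down: place model fluorophores on a grid, blur with a point spread function, and add camera noise, producing a synthetic image directly comparable to an experimental one. The PSF width and noise levels below are placeholders for experimentally measured values.

        # Model convolution in miniature: model -> blur -> noise -> synthetic image.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)

        model = np.zeros((64, 64))
        model[32, 20] = model[32, 44] = 1000.0   # two point-like fluorophores

        psf_sigma = 2.5                          # would come from bead images
        blurred = gaussian_filter(model, psf_sigma)

        # camera noise: Poisson shot noise plus Gaussian read noise (assumed)
        image = rng.poisson(blurred + 10.0) + rng.normal(0, 2.0, model.shape)

        print(f"peak model: {model.max():.0f} -> "
              f"peak simulated image: {image.max():.1f}")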

  19. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integration of heterogeneous clinical sources having different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize the data and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose the subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
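
    The combination step can be sketched as follows: train several base models, score them on a validation split, keep the subset close to the best, and vote. The synthetic data stands in for the Framingham records, and the 5% retention rule is an invented criterion, not the paper's.

        # Sketch of a multi-model ensemble with error-based model selection.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        X, y = make_classification(n_samples=600, n_features=10, random_state=0)
        Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

        models = [LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(max_depth=4, random_state=0),
                  KNeighborsClassifier()]
        for m in models:
            m.fit(Xtr, ytr)

        scores = [accuracy_score(yval, m.predict(Xval)) for m in models]
        best_single = max(scores)

        # keep models within 5% of the best one, then majority-vote
        kept = [m for m, s in zip(models, scores) if s >= best_single - 0.05]
        votes = np.mean([m.predict(Xval) for m in kept], axis=0) >= 0.5
        print(f"best single: {best_single:.3f}  "
              f"ensemble of {len(kept)}: {accuracy_score(yval, votes):.3f}")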

  20. A new approach of high speed cutting modelling: SPH method

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2006-01-01

    The purpose of this study is to introduce a new approach of high speed cutting numerical modelling. A lagrangian Smoothed Particle Hydrodynamics (SPH) based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed and SPH contact control permits a “natural” workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining dedicated code results and experimenta...

  1. Schwinger boson approach to the fully screened Kondo model.

    Science.gov (United States)

    Rech, J; Coleman, P; Zarand, G; Parcollet, O

    2006-01-13

    We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.

  2. Kallen Lehman approach to 3D Ising model

    Science.gov (United States)

    Canfora, F.

    2007-03-01

    A “Kallen-Lehman” approach to the Ising model, inspired by quantum field theory à la Regge, is proposed. The analogy with the Kallen-Lehman representation leads to a formula for the free energy of the 3D model with few free parameters which could be matched with the numerical data. The possible application of this scheme to the spin glass case is briefly discussed.

  3. Modelling approaches in sedimentology: Introduction to the thematic issue

    Science.gov (United States)

    Joseph, Philippe; Teles, Vanessa; Weill, Pierre

    2016-09-01

    As an introduction to this thematic issue on "Modelling approaches in sedimentology", this paper gives an overview of the workshop held in Paris on 7 November 2013 during the 14th Congress of the French Association of Sedimentologists. A synthesis of the workshop in terms of concepts, spatial and temporal scales, constraining data, and scientific challenges is first presented, followed by a discussion of the possibility of coupling different models, industrial needs, and potential new domains of research.

  4. Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach

    DEFF Research Database (Denmark)

    Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper

    2017-01-01

    We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects... sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model...

  5. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation how different spreadsheet systems handle certain computational issues implied from moving cells, copy-paste operations, or recursion.

  6. Modeling Water Shortage Management Using an Object-Oriented Approach

    Science.gov (United States)

    Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.

    2007-12-01

    As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool in addressing discrepancies between water supply and demand. Object-oriented modeling has been proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses evaluation of natural behaviors and possible related management options. This evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of water use cutback amount and duration using established criteria. The third group is for implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism where continuity in the time domain is established. Like many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional response to various planning alternatives including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
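
    A skeleton of the three conceptual groups, closed by the feedback loop, might look like the following; the thresholds and cutback rules are invented placeholders, not RSM internals.

        # OO skeleton: evaluation -> decision -> implementation, with feedback.
        class Evaluation:
            def assess(self, supply, demand):
                """Return the relative shortfall between demand and supply."""
                return max(0.0, (demand - supply) / demand)

        class Decision:
            def decide(self, shortfall):
                """Map a shortfall to a cutback fraction (assumed criteria)."""
                return 0.3 if shortfall > 0.3 else round(shortfall, 2)

        class Implementation:
            def apply(self, demand, cutback):
                """Restrict usage and return the new regional demand."""
                return demand * (1.0 - cutback)

        supply, demand = 80.0, 100.0
        evaluator, decider, actuator = Evaluation(), Decision(), Implementation()
        for step in range(3):                 # feedback loop over time steps
            shortfall = evaluator.assess(supply, demand)
            cutback = decider.decide(shortfall)
            demand = actuator.apply(demand, cutback)
            print(f"step {step}: shortfall={shortfall:.2f} "
                  f"cutback={cutback:.2f} demand={demand:.1f}")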

  7. Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico

    Science.gov (United States)

    Rodriguez, A.

    2017-08-01

    In three-dimensional models of urban historical reconstruction, the contextual architecture that has been lost poses difficulties because, in contrast to the most important monuments, few written references to it exist. This is the case of Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative for offering a hypothetical view of these elements is a typological-parametric definition that allows a 3D modeling approach to the most common features of this heritage evidence.

  8. Comparative flood damage model assessment: towards a European approach

    Directory of Open Access Journals (Sweden)

    B. Jongman

    2012-12-01

    Full Text Available There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the first has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.

  9. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    Science.gov (United States)

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  10. a Study of Urban Stormwater Modeling Approach in Singapore Catchment

    Science.gov (United States)

    Liew, S. C.; Liong, S. Y.; Vu, M. T.

    2011-07-01

    Urbanization has the direct effect of increasing the amount of surface runoff to be discharged through man-made drainage systems. Singapore's rapid urbanization has thus drawn great attention to flooding issues. In view of this, a proper stormwater modeling approach is necessary for the assessment, planning, design, and control of the storm and combined sewerage system. Impacts of urbanization on surface runoff and catchment flooding in Singapore are studied in this paper. In this study, the application of SOBEK-urban 1D is introduced on model catchments and a hypothetical catchment model is created for simulation purposes. The stormwater modeling approach using SOBEK-urban offers a comprehensive modeling tool for simple or extensive urban drainage systems consisting of sewers and open channels, regardless of the size and complexity of the network. The findings from the present study show that stormwater modeling is able to identify flood areas and the impact of the anticipated sea level rise on the urban drainage network. Consequently, the performance of the urban drainage system can be improved and early prevention approaches can be carried out.

  11. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.

  12. A vector relational data modeling approach to Insider threat intelligence

    Science.gov (United States)

    Kelly, Ryan F.; Anderson, Thomas S.

    2016-05-01

    We address the problem of detecting insider threats before they can do harm. In many cases, co-workers notice indications of suspicious activity prior to insider threat attacks. A partial solution to this problem requires an understanding of how information can better traverse the communication network between human intelligence and insider threat analysts. Our approach employs modern mobile communications technology and scale free network architecture to reduce the network distance between human sensors and analysts. In order to solve this problem, we propose a Vector Relational Data Modeling approach to integrate human "sensors," geo-location, and existing visual analytics tools. This integration problem is known to be difficult due to quadratic increases in cost associated with complex integration solutions. A scale free network integration approach using vector relational data modeling is proposed as a method for reducing network distance without increasing cost.

  13. A discrete Lagrangian based direct approach to macroscopic modelling

    Science.gov (United States)

    Sarkar, Saikat; Nowruzpour, Mohsen; Reddy, J. N.; Srinivasa, A. R.

    2017-01-01

    A direct discrete Lagrangian based approach, designed at a length scale of interest, to characterize the response of a body is proposed. The main idea is to understand the dynamics of a deformable body via a Lagrangian corresponding to a coupled interaction of rigid particles in the reduced dimension. We argue that the usual practice of describing the laws of a deformable body in the continuum limit is redundant, because for most of the practical problems, analytical solutions are not available. Since continuum limit is not taken, the framework automatically relaxes the requirement of differentiability of field variables. The discrete Lagrangian based approach is illustrated by deriving an equivalent of the Euler-Bernoulli beam model. A few test examples are solved, which demonstrate that the derived non-local model predicts lower deflections in comparison to classical Euler-Bernoulli beam solutions. We have also included crack propagation in thin structures for isotropic and anisotropic cases using the Lagrangian based approach.

  14. Reconciliation with oneself and with others: From approach to model

    Directory of Open Access Journals (Sweden)

    Nikolić-Ristanović Vesna

    2010-01-01

    Full Text Available The paper presents the approach to dealing with war and its consequences that was developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and the process through which the ZAIP approach to dealing with the past was developed is presented. Then, a detailed description of the approach itself, with identification of its most important specificities, is given. In the conclusion, next steps are suggested, aimed at developing a model of reconciliation that is based on the ZAIP approach and appropriate to the social context of Serbia and its surroundings.

  15. EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, given that the relevant infrastructure for applying the New Approach directives has not been examined until now: it is not known which products or industries in Serbia are covered by New Approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate the existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increased profit through meeting the requirements of the New Approach directives.

  16. Avoiding simplification strategies by introducing multi-objectiveness in real world problems

    NARCIS (Netherlands)

    Rietveld, C.J.C.; Hendrix, G.P.; Berkers, F.T.H.M.; Croes, N.N.; Smit, S.K.

    2010-01-01

    In business analysis, models are sometimes oversimplified. We pragmatically approach many problems with a single financial objective and include monetary values for non-monetary variables. We enforce constraints which may not be as strict in reality. Based on a case in distributed energy production,

  17. Vibro-acoustics of porous materials - waveguide modeling approach

    DEFF Research Database (Denmark)

    Darula, Radoslav; Sorokin, Sergey V.

    2016-01-01

    The porous material is considered as a compound multi-layered waveguide (i.e. a fluid layer surrounded with elastic layers) with traction free boundary conditions. The attenuation of the vibro-acoustic waves in such a material is assessed. This approach is compared with a conventional Biot's model...... in porous materials....

  18. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
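
    For orientation, a plain (non-stochastic) local volatility model is already priced by a few lines of Euler Monte Carlo, as sketched below with a made-up volatility surface standing in for one calibrated to market quotes; the hybrid stochastic local volatility treatment of the paper builds further machinery on top of this.

        # Euler Monte Carlo for dS = r*S*dt + sigma(S,t)*S*dW, European call.
        import numpy as np

        rng = np.random.default_rng(42)
        S0, K, r, T = 100.0, 100.0, 0.02, 1.0
        steps, paths = 200, 100_000
        dt = T / steps

        def sigma_loc(S, t):
            """Illustrative skewed local volatility surface (invented)."""
            return 0.2 + 0.1 * np.exp(-S / 100.0) + 0.02 * t

        S = np.full(paths, S0)
        for i in range(steps):
            t = i * dt
            dW = rng.normal(0.0, np.sqrt(dt), paths)
            S = S * (1.0 + r * dt + sigma_loc(S, t) * dW)

        price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
        print(f"European call under local vol: {price:.3f}")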

  19. Teaching Modeling with Partial Differential Equations: Several Successful Approaches

    Science.gov (United States)

    Myers, Joseph; Trubatch, David; Winkel, Brian

    2008-01-01

    We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…

  20. A Behavioral Decision Making Modeling Approach Towards Hedging Services

    NARCIS (Netherlands)

    Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.

    2003-01-01

    This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by fi

  1. A fuzzy approach to the Weighted Overlap Dominance model

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    2013-01-01

    in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures...

  2. Methodological Approach for Modeling of Multienzyme in-pot Processes

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan;

    2011-01-01

    This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used...

  3. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use of th...

  4. Pruning Chinese trees : an experimental and modelling approach

    NARCIS (Netherlands)

    Zeng, Bo

    2002-01-01

    Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.

  5. Evaluating Interventions with Multimethod Data: A Structural Equation Modeling Approach

    Science.gov (United States)

    Crayen, Claudia; Geiser, Christian; Scheithauer, Herbert; Eid, Michael

    2011-01-01

    In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait-multimethod-multioccasion-multigroup design can be analyzed with structural equation models. In particular, we show how the…

  7. A Metacognitive-Motivational Model of Surface Approach to Studying

    Science.gov (United States)

    Spada, Marcantonio M.; Moneta, Giovanni B.

    2012-01-01

    In this study, we put forward and tested a model of how surface approach to studying during examination preparation is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. A sample of 528 university students completed, one week before examinations, the following…

  8. A New Approach for Testing the Rasch Model

    Science.gov (United States)

    Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya

    2011-01-01

    Though calibration of an achievement test within psychological and educational context is very often carried out by the Rasch model, data sampling is hardly designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for the determination of sample size according to a given Type I and…

  9. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  10. The Bipolar Approach: A Model for Interdisciplinary Art History Courses.

    Science.gov (United States)

    Calabrese, John A.

    1993-01-01

    Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)

  11. Non-frontal model based approach to forensic face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance vie

  12. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    compared to experimental data obtained by digital image correlation and published in the literature. Excellent agreements between experimentally observed and numerically predicted crack patterns at the micro and macro scale indicate the capability of the modelling approach to accurately capture corrosion...

  14. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces...

  15. HESS Opinions "Topography driven conceptual modelling (FLEX-Topo)"

    Directory of Open Access Journals (Sweden)

    H. H. G. Savenije

    2010-07-01

    Heterogeneity and complexity of hydrological processes offer substantial challenges to the hydrological modeller. Some hydrologists try to tackle this problem by introducing more and more detail in their models, or by setting-up more and more complicated models starting from basic principles at the smallest possible level. As we know, this reductionist approach leads to ever higher levels of equifinality and predictive uncertainty. On the other hand, simple, lumped and parsimonious models may be too simple to be realistic or representative of the dominant hydrological processes. In this commentary, a new model approach is proposed that tries to find the middle way between complex distributed and simple lumped modelling approaches. Here we try to find the right level of simplification while avoiding over-simplification. Paraphrasing Einstein, the maxim is: make a model as simple as possible, but not simpler than that. The approach presented is process based, but not physically based in the traditional sense. Instead, it is based on a conceptual representation of the dominant physical processes in certain key elements of the landscape. The essence of the approach is that the model structure is made dependent on a limited number of landscape classes in which the topography is the main driver, but which can include geological, geomorphological or land-use classification. These classes are then represented by lumped conceptual models that act in parallel. The advantage of this approach over a fully distributed conceptualisation is that it retains maximum simplicity while taking into account observable landscape characteristics.

  16. CFD Approaches for Modelling Bubble Entrainment by an Impinging Jet

    Directory of Open Access Journals (Sweden)

    Martin Schmidtke

    2009-01-01

    This contribution presents different approaches for the modeling of gas entrainment under water by a plunging jet. Since the generation of bubbles happens on a scale which is smaller than the bubbles, this process cannot be resolved in meso-scale simulations, which include the full length of the jet and its environment. This is why the gas entrainment has to be modeled in meso-scale simulations. In the frame of a Euler-Euler simulation, the local morphology of the phases has to be considered in the drag model. For example, the gas is a continuous phase above the water level but bubbly below the water level. Various drag models are tested and their influence on the gas void fraction below the water level is discussed. The algebraic interface area density (AIAD) model applies a drag coefficient for bubbles and a different drag coefficient for the free surface. If the AIAD model is used for the simulation of impinging jets, the gas entrainment depends on the free parameters included in this model. The calculated gas entrainment can be adapted via these parameters. Therefore, an advanced AIAD approach could be used in future for the implementation of models (e.g., correlations) for the gas entrainment.

  18. Approach for workflow modeling using π-calculus

    Institute of Scientific and Technical Information of China (English)

    杨东; 张申生

    2003-01-01

    As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantic is that it allows for verification of the model's properties, such as deadlock-free and normal termination. Moreover, the equivalence of workflow models can be checked through weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.

  19. Multiphysics modeling using COMSOL a first principles approach

    CERN Document Server

    Pryor, Roger W

    2011-01-01

    Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.

  20. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users specify their requirements for a workflow management system.

  1. A simplified GIS approach to modeling global leaf water isoscapes.

    Directory of Open Access Journals (Sweden)

    Jason B West

    The stable hydrogen (delta(2)H) and oxygen (delta(18)O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of delta(18)O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water delta(18)O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
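
    The biophysical core of such leaf water models is typically a Craig-Gordon-type steady-state enrichment equation. The sketch below illustrates that calculation only; it is not the authors' GIS implementation, and the Majoube equilibrium fractionation, the fixed kinetic fractionation of 28.5 permil, and the example inputs are all assumptions.

```python
import math

def leaf_water_d18o(delta_source, delta_vapor, rh, temp_c, eps_k=28.5):
    """Craig-Gordon-type steady-state leaf water 18O enrichment (a sketch).

    delta_source, delta_vapor: d18O of source water and ambient vapor (permil)
    rh:      relative humidity (0-1), used here as the ea/ei ratio
    temp_c:  leaf temperature (deg C)
    eps_k:   kinetic fractionation (permil); 28.5 is an assumed common value
    """
    temp_k = temp_c + 273.15
    # Equilibrium liquid-vapor fractionation after Majoube (1971), in permil
    alpha_eq = math.exp(1.137e3 / temp_k**2 - 0.4156 / temp_k - 2.0667e-3)
    eps_eq = (alpha_eq - 1.0) * 1000.0
    # Vapor enrichment expressed relative to the source water
    d_vapor = delta_vapor - delta_source
    # Steady-state enrichment of leaf water above source water
    d_e = eps_eq + eps_k + (d_vapor - eps_k) * rh
    return delta_source + d_e

print(leaf_water_d18o(delta_source=-8.0, delta_vapor=-18.0, rh=0.6, temp_c=25.0))
```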

  2. Lithium-ion batteries modeling involving fractional differentiation

    Science.gov (United States)

    Sabatier, Jocelyn; Merveillaut, Mathieu; Francisco, Junior Mbala; Guillemard, Franck; Porcelatto, Denis

    2014-09-01

    With the development of hybrid and electric vehicles, automotive battery monitoring systems (BMS) have to meet new requirements. These systems have to give information on state of health, state of charge, and available power. To get this information, BMS often implement battery models. The accuracy of the information manipulated by the BMS thus depends on the model accuracy. This paper falls within this framework and addresses lithium-ion battery modeling. The proposed fractional model is based on simplifications of an electrochemical model and on the resolution of some partial differential equations used in its description. Such an approach makes it possible to obtain a simple model in which electrochemical variables and parameters still appear.

  3. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
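
    As a concrete illustration of the technique, geometric Brownian motion at a fixed horizon admits a closed-form PCE in probabilists' Hermite polynomials of a standard Gaussian ξ, via exp(aξ) = exp(a²/2) Σ_k (a^k/k!) He_k(ξ). The sketch below is a minimal demonstration, not the paper's code; the market parameters and truncation order are assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # evaluates sum_k c_k He_k(x)

# GBM at horizon T: S_T = S0*exp((r - sigma**2/2)*T + sigma*sqrt(T)*xi), xi ~ N(0,1).
# From exp(a*xi) = exp(a**2/2) * sum_k a**k/k! * He_k(xi), the PCE coefficients
# of S_T in the He_k basis are c_k = S0*exp(r*T)*a**k/k!, with a = sigma*sqrt(T).
S0, r, sigma, T = 100.0, 0.03, 0.2, 1.0   # assumed market parameters
a = sigma * np.sqrt(T)
order = 8                                 # assumed truncation order
coeffs = np.array([S0 * np.exp(r * T) * a**k / factorial(k)
                   for k in range(order + 1)])

rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)
exact = S0 * np.exp((r - 0.5 * sigma**2) * T + a * xi)
pce = hermeval(xi, coeffs)

print("mean exact:", exact.mean(), "mean PCE:", pce.mean())  # both near S0*exp(r*T)
print("max abs error of truncated PCE:", np.abs(exact - pce).max())
```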

  4. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on a hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity-based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite-based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend-based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.

  5. On a Markovian approach for modeling passive solar devices

    Energy Technology Data Exchange (ETDEWEB)

    Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d' Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))

    1991-01-01

    Stochastic models for the analysis of the energy and thermal comfort performances of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it will be shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory requirements are more than inversely proportional to accuracy. (orig.).

  6. Disturbed state concept as unified constitutive modeling approach

    Directory of Open Access Journals (Sweden)

    Chandrakant S. Desai

    2016-06-01

    A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. Few such unified models are available. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.

  7. Disturbed state concept as unified constitutive modeling approach

    Institute of Scientific and Technical Information of China (English)

    Chandrakant S. Desai

    2016-01-01

    A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. Few such unified models are available. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.

  8. Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization

    Science.gov (United States)

    Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad

    2017-02-01

    Malay Twitter messages present a special deviation from the original language. Malay Tweets are currently widely used by Twitter users, especially in the Malay archipelago. It is therefore important to build a normalization system that can translate Malay Tweet language into the standard Malay language. Some research has been conducted in natural language processing that mainly focuses on normalizing English Twitter messages, while few studies have been done on normalizing Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. This approach normalizes noisy Malay Twitter messages such as colloquial language, novel words, and interjections into standard Malay language. This research uses a Language Model and an N-gram model.

  9. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model ... and implementation. Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol which is currently under development by the Internet Engineering Task Force (IETF), and we show ...

  10. ON SOME APPROACHES TO ECONOMIC-MATHEMATICAL MODELING OF SMALL BUSINESS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-04-01

    Small business is an important part of the modern Russian economy. We give a wide panorama of possible approaches, developed by us, to the construction of economic-mathematical models that may be useful for describing the dynamics of small businesses, as well as for their management. Since a variety of types of economic-mathematical and econometric models can be used to describe particular problems of small business, we found it useful to consider a fairly wide range of such models, which resulted in rather short descriptions of the specific models. The models are described at such a level that an experienced professional in the field of economic-mathematical modeling could, if necessary, develop a specific model up to the stage of design formulas and numerical results. Particular attention is paid to the use of statistical methods for non-numeric data, the most pressing at the moment. The problems of economic-mathematical modeling in solving problems of small business marketing are considered. We have accumulated some experience in applying the methodology of economic-mathematical modeling to practical problems in small business marketing, in particular in the fields of consumer and industrial goods and educational services, as well as in the analysis and modeling of inflation, taxation and others. In marketing models of decision-making theory we apply rankings and ratings. The problem of comparing averages is considered. We present some models of the life cycle of small businesses: a project flow model, a niche capture model, and a niche selection model. We discuss the development of research on economic-mathematical modeling of small businesses.

  11. A validated approach for modeling collapse of steel structures

    Science.gov (United States)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are

  12. GEOSPATIAL MODELLING APPROACH FOR 3D URBAN DENSIFICATION DEVELOPMENTS

    Directory of Open Access Journals (Sweden)

    O. Koziatek

    2016-06-01

    With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI’s CityEngine software and the Computer Generated Architecture (CGA) language.

  13. Geospatial Modelling Approach for 3d Urban Densification Developments

    Science.gov (United States)

    Koziatek, O.; Dragićević, S.; Li, S.

    2016-06-01

    With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.

  14. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  16. Kinetic equations modelling wealth redistribution: a comparison of approaches.

    Science.gov (United States)

    Düring, Bertram; Matthes, Daniel; Toscani, Giuseppe

    2008-11-01

    Kinetic equations modelling the redistribution of wealth in simple market economies are one of the major topics in the field of econophysics. We present a unifying approach to the qualitative study of a large variety of such models, which is based on a moment analysis in the related homogeneous Boltzmann equation, and on the use of suitable metrics for probability measures. In consequence, we are able to classify the most important feature of the steady wealth distribution, namely the fatness of the Pareto tail, and the dynamical stability of the latter in terms of the model parameters. Our results apply, e.g., to the market model with risky investments [S. Cordier, L. Pareschi, and G. Toscani, J. Stat. Phys. 120, 253 (2005)], and to the model with quenched saving propensities [A. Chatterjee, B. K. Chakrabarti, and S. S. Manna, Physica A 335, 155 (2004)]. Also, we present results from numerical experiments that confirm the theoretical predictions.
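
    As an illustration, the quenched-saving-propensity model cited above evolves wealth through random pairwise trades in which each agent keeps a fixed fraction of its wealth and the remainder is pooled and randomly re-split. The following is a minimal simulation sketch, not the paper's moment analysis; the population size, number of trades, and uniform propensity distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_trades = 1000, 500_000       # assumed simulation size
w = np.ones(n_agents)                    # equal initial wealth
lam = rng.uniform(0.0, 1.0, n_agents)    # quenched saving propensities

for _ in range(n_trades):
    i, j = rng.integers(n_agents), rng.integers(n_agents)
    if i == j:
        continue
    eps = rng.random()
    # Each agent saves a fraction lam of its own wealth; the rest is pooled
    # and randomly re-split, so w_i + w_j is conserved in every trade.
    pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
    w[i], w[j] = lam[i] * w[i] + eps * pool, lam[j] * w[j] + (1 - eps) * pool

# The stationary distribution develops a fat (Pareto-like) tail, in contrast
# to the exponential tail obtained without saving propensities.
print("mean wealth:", w.mean(), "max/mean:", w.max() / w.mean())
```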

  17. A Computationally Efficient State Space Approach to Estimating Multilevel Regression Models and Multilevel Confirmatory Factor Models.

    Science.gov (United States)

    Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai

    2014-01-01

    Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.

  18. Building Energy Modeling: A Data-Driven Approach

    Science.gov (United States)

    Cui, Can

    Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, while most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient and high-fidelity building energy modeling framework is proposed, with a concentration on developing a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms others if amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on the building energy system, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real-time, which filters out the noise and renders more accurate energy forecasts. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties, where data fusion on
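
    In its simplest scalar form, the Kalman-filter fusion step described above combines a model forecast with a noisy meter reading, weighted by their variances. The sketch below is a minimal illustration under assumed random-walk dynamics and noise variances, not the framework's actual calibration module.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar Kalman filter: state = hourly building load (kW), random-walk dynamics.
q, r = 0.5, 4.0            # assumed process and measurement noise variances
x_est, p_est = 50.0, 10.0  # initial state estimate and its variance

true_load = 50.0
for hour in range(24):
    true_load += rng.normal(0.0, np.sqrt(q))     # the actual load drifts
    x_pred, p_pred = x_est, p_est + q            # predict (model forecast)
    z = true_load + rng.normal(0.0, np.sqrt(r))  # noisy meter reading
    k = p_pred / (p_pred + r)                    # Kalman gain
    x_est = x_pred + k * (z - x_pred)            # fuse forecast and measurement
    p_est = (1.0 - k) * p_pred
    print(f"h{hour:02d}  true={true_load:6.2f}  fused={x_est:6.2f}")
```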

  19. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  20. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  1. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

    Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese when a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure based on the presence of CB to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than the hypothesis tests for the same purpose.
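
    The core idea can be sketched numerically: estimate the constant bias as the mean error, remove it, and then bound the error magnitude by a confidence limit built on the corrected errors. The code below is an illustrative reading of that procedure under a normality assumption, with synthetic data; it is not the paper's dataset or its exact Freese-type test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
observed = rng.normal(1.0, 0.25, 60)                   # synthetic weight gains (kg/day)
predicted = observed - 0.1 + rng.normal(0, 0.05, 60)   # model with a constant bias

errors = observed - predicted
bias = errors.mean()           # estimated constant bias
centered = errors - bias       # bias-corrected errors

# Normal-theory bound on |error| at confidence level 1 - alpha:
# for zero-mean normal errors, |e| <= z_{1-alpha/2} * sigma with prob. 1 - alpha.
alpha = 0.05
n = len(centered)
s2 = centered.var(ddof=1)
# Conservative step: upper confidence limit for sigma^2 via the chi-square law
s2_upper = (n - 1) * s2 / stats.chi2.ppf(alpha, n - 1)
max_error = stats.norm.ppf(1 - alpha / 2) * np.sqrt(s2_upper)
print(f"constant bias: {bias:.3f}, bound on error magnitude: {max_error:.3f}")
```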

  2. A database approach to information retrieval: The remarkable relationship between language models and region models

    CERN Document Server

    Hiemstra, Djoerd

    2010-01-01

    In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.

  3. Approach to Organizational Structure Modelling in Construction Companies

    Directory of Open Access Journals (Sweden)

    Ilin Igor V.

    2016-01-01

    An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. Thus, it is reasonable to take the project orientation of this kind of business into account when designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Application of the proposed approach will make it possible to assign responsibilities within the organizational structure of construction projects effectively, and thus to shorten the time for project allocation and to provide smoother execution. A practical case of using the approach is also provided in the paper.

  4. An integrated modelling approach to estimate urban traffic emissions

    Science.gov (United States)

    Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.

    2013-07-01

    An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada to evaluate summertime morning peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations, with 100% of the AERMOD modelled CO concentrations and 97.5% of the QUIC modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted concentrations with observed concentrations. Predicted and sensor-measured concentrations are significantly lower than the hourly threshold Maximum Acceptable Levels for CO (31 ppm, ~90 times lower) and NO2 (0.4 mg/m3, ~12 times lower), within the National Ambient Air Quality Objectives established by Environment Canada.

  5. A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.

    Science.gov (United States)

    Chang, Chia-Wen; Tao, Chin-Wang

    2017-09-01

    This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm is a T-S fuzzy model in which the functional antecedent and the state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: one is that the FCRSM has a low computation load because only one input variable is considered in the antecedent part; the other is that the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.

  6. Diagnosing Hybrid Systems: a Bayesian Model Selection Approach

    Science.gov (United States)

    McIlraith, Sheila A.

    2005-01-01

    In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.

  7. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on the learning of the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.

  8. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code

  9. Package for calculations and simplifications of expressions with Dirac matrices (MatrixExp)

    Science.gov (United States)

    Poghosyan, V. A.

    2005-08-01

    This paper describes a package for calculations of expressions with Dirac matrices. Advantages over existing similar packages are described. The MatrixExp package is intended for simplification of complex expressions involving γ-matrices, providing such tools as automatic Feynman parameterization, integration in d-dimensional space, and sorting and grouping of results in a given order. Also, in comparison with the existing similar package Tracer, the presented package MatrixExp has more enhanced input possibilities. User-available functions of the MatrixExp package are described in detail. An example of the calculation of the Feynman diagram for the process b→sγg with application of functions of the MatrixExp package is also presented.
    Program summary
    Title of program: MatrixExp
    Catalogue identifier: ADWB
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWB
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: none
    Programming language: MATHEMATICA
    Computer: PC Pentium
    Operating system: Windows
    No. of lines in distributed program, including test data, etc.: 1551
    No. of bytes in distributed program, including test data, etc.: 16 040
    Distribution format: tar.gz
    RAM: loading the package uses approx. 3 500 000 bytes of RAM. However, memory required for calculations depends heavily on the expressions in view, as the package uses recursive functions and MATHEMATICA dynamically allocates memory. The package has been tested to work on a PC Pentium II 233 MHz with 128 Mb of memory, calculating typical diagrams of contemporary calculations.
    Nature of problem: Feynman diagram calculation, simplification of expressions with γ-matrices
    Solution method: Analytic transformations, dimensional regularization, Feynman parameterization
    Restrictions: the MatrixExp package works only with a single line of expressions (G[l1, …]), in contrast to the Tracer package that works with multiple lines, i.e., the following is possible in Tracer, but not in MatrixExp: G[l1,

  10. Monotherapy with boosted protease inhibitors as antiretroviral treatment simplification strategy in the clinical setting

    Directory of Open Access Journals (Sweden)

    J Santos

    2012-11-01

    Antiretroviral treatment simplification with darunavir/ritonavir or lopinavir/ritonavir monotherapy maintains sustained HIV viremia suppression in clinical trials. However, data about the efficacy of this strategy in routine clinical practice are still limited, and no direct comparison between darunavir/ritonavir and lopinavir/ritonavir has been performed to date. We retrospectively studied all HIV-1-infected subjects who initiated monotherapy with darunavir/ritonavir or lopinavir/ritonavir while having plasma VL<50 c/mL, and had at least 1 subsequent follow-up visit in our clinic. When two consecutive PI-monotherapy regimens were used, each regimen was considered separately. The primary endpoint was the percentage of patients who maintained virological suppression (HIV-1 VL<50 c/mL) through follow-up. Virological failure was defined as at least two consecutive HIV-1 VL >50 c/mL. We also evaluated other reasons for treatment discontinuation. Analyses were performed considering all regimens (full dataset analysis), either as “on treatment” or as “treatment switch equals failure”. Five hundred and seventy-three PI-monotherapy regimens corresponding to 520 subjects were included, 262 with darunavir/ritonavir and 311 with lopinavir/ritonavir. Median (IQR) follow-up was 50 (26.3–107.6) and 85.6 (36.9–179.1) weeks for subjects on darunavir/ritonavir and lopinavir/ritonavir, respectively (p<0.001). Overall, 67 (11.7%) subjects experienced virological failure: 23 (8.7%) were on darunavir/ritonavir and 42 (13.5%) were on lopinavir/ritonavir (p=0.796). Two hundred and three (77.5%) patients on darunavir/ritonavir and 154 (49.5%) on lopinavir/ritonavir maintained virological suppression in the “treatment switch equals failure” analysis (p=0.002). Other reasons for treatment discontinuation were gastrointestinal toxicity and dyslipidemia in 7.2% and 5.9% of cases, respectively. Gastrointestinal toxicities and dyslipidemia leading to treatment discontinuation

  11. A Nonhydrostatic Model Based On A New Approach

    Science.gov (United States)

    Janjic, Z. I.

    Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales may become important in NWP applications. Having these considerations in mind, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical

  12. Infiltration under snow cover: Modeling approaches and predictive uncertainty

    Science.gov (United States)

    Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2017-03-01

    Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage has substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally-dense point-measurements or temporally-limited spatially-dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically-based energy balance approaches. While this gamut of snowmelt models is routinely used to aid in water resource management, a comparison of the models' predictive uncertainties had previously not been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest, over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
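
    As a point of reference for the simplest member of this model hierarchy, the sketch below shows the classical degree-day method in Python. The degree-day factor and threshold temperature are illustrative values only, not parameters calibrated to the Vers Chez le Brandt catchment.

```python
# Minimal sketch of the degree-day snowmelt method. The degree-day factor
# (ddf, mm w.e. per degree-day) and the threshold temperature are
# illustrative assumptions, not values from the study above.

def degree_day_melt(daily_mean_temp_c, ddf=3.0, t_threshold=0.0):
    """Return daily snowmelt in mm water equivalent:
    M = ddf * (T - T_threshold) when T > T_threshold, else 0."""
    excess = daily_mean_temp_c - t_threshold
    return ddf * excess if excess > 0.0 else 0.0

# Example: accumulate melt over a short synthetic temperature series.
temps = [-2.1, 0.5, 3.2, 4.8, 1.0]
total_melt = sum(degree_day_melt(t) for t in temps)
print(f"total melt: {total_melt:.1f} mm w.e.")
```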

  13. Social model: a new approach of the disability theme.

    Science.gov (United States)

    Bampi, Luciana Neves da Silva; Guilhem, Dirce; Alves, Elioenai Dornelles

    2010-01-01

    The experience of disability is part of the daily lives of people who have a disease, lesion or corporal limitation. Disability is still understood as personal bad luck; moreover, from the social and political points of view, the disabled are seen as a minority. The aim of this study is to contribute to the knowledge about the experience of disability. The research presents a new approach to the theme: the social model. This approach appeared as an alternative to the medical model of disability, which sees the lesion as the primary cause of social inequality and of the disadvantages experienced by the disabled, ignoring the role of social structures in their oppression and marginalization. The study invites reflection on how the difficulties and barriers society imposes on people considered different make disability a reality, and portrays the social injustice and the situation of vulnerability lived by excluded groups.

  14. Lattice percolation approach to 3D modeling of tissue aging

    Science.gov (United States)

    Gorshkov, Vyacheslav; Privman, Vladimir; Libert, Sergiy

    2016-11-01

    We describe a 3D percolation-type approach to modeling of the processes of aging and certain other properties of tissues analyzed as systems consisting of interacting cells. Lattice sites are designated as regular (healthy) cells, senescent cells, or vacancies left by dead (apoptotic) cells. The system is then studied dynamically, with the ongoing processes including regular cells dividing to fill vacant sites, healthy cells becoming senescent or dying, and senescent cells dying. A statistical-mechanics description can provide patterns of time dependence and snapshots of morphological system properties. The developed theoretical modeling approach is found not only to corroborate recent experimental findings that inhibition of senescence can lead to extended lifespan, but also to confirm that, unlike in 2D, in 3D senescent cells can contribute to a tissue's connectivity/mechanical stability. The latter effect occurs by senescent cells forming a second infinite cluster in the regime where the regular (healthy) cells' infinite cluster still exists.
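
    A minimal sketch of one stochastic update of such a lattice follows, assuming three site states and arbitrary transition probabilities; the paper's actual rates, neighborhood rules, and division dynamics may differ.

```python
# Illustrative sketch of one Monte Carlo update for a 3D lattice of cells.
# States and transition probabilities are placeholder assumptions.
import random

HEALTHY, SENESCENT, VACANT = 0, 1, 2
N = 20  # lattice edge length (periodic boundaries)

lattice = [[[HEALTHY for _ in range(N)] for _ in range(N)] for _ in range(N)]

def neighbors(x, y, z):
    """Six nearest neighbors on the cubic lattice."""
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        yield (x+dx) % N, (y+dy) % N, (z+dz) % N

def update_site(p_senesce=0.01, p_die_healthy=0.005, p_die_senescent=0.02):
    x, y, z = (random.randrange(N) for _ in range(3))
    s = lattice[x][y][z]
    if s == HEALTHY:
        r = random.random()
        if r < p_senesce:
            lattice[x][y][z] = SENESCENT
        elif r < p_senesce + p_die_healthy:
            lattice[x][y][z] = VACANT
    elif s == SENESCENT:
        if random.random() < p_die_senescent:
            lattice[x][y][z] = VACANT
    else:
        # A vacancy is refilled by division of an adjacent healthy cell.
        if any(lattice[nx][ny][nz] == HEALTHY for nx, ny, nz in neighbors(x, y, z)):
            lattice[x][y][z] = HEALTHY
```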

  15. Research on teacher education programs: logic model approach.

    Science.gov (United States)

    Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M

    2013-02-01

    Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program.

  16. A Variational Approach to the Modeling of MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jraifi A

    2007-01-01

    Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel. This method, which uses a statistical approach, is based on a variational form of the usual channel equation in a scalar variable. The minimum distance between received vectors is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighboring vectors. We then use this approach to compute numerically the total probability of error with respect to the signal-to-noise ratio (SNR) and to predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.

  17. A relaxation-based approach to damage modeling

    Science.gov (United States)

    Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus

    2017-01-01

    Material models, including softening effects due to, for example, damage and localizations, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple local behavior described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: Appropriate modifications of the relaxed (condensed) energy hold the same advantage as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that prove empirically how the new approach works.

  18. Coordination-theoretic approach to modelling grid service composition process

    Institute of Scientific and Technical Information of China (English)

    Meng Qian; Zhong Liu; Jing Wang; Li Yao; Weiming Zhang

    2010-01-01

    A grid service composition process is made up of complex coordinative activities. Developing an appropriate model of grid service coordinative activities is an important foundation for grid service composition. Drawing on coordination theory, this paper elaborates the process of grid service composition using UML 2.0, and proposes an approach to modelling the grid service composition process based on coordination theory. This approach helps not only to analyze accurately the task activities and the relevant dependencies among them, but also to facilitate the adaptability of the grid service orchestration, so as to further realize the connectivity, timeliness, appropriateness and expansibility of the grid service composition.

  19. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  20. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

  1. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...

  2. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    Directory of Open Access Journals (Sweden)

    S. Mimouni

    2011-01-01

    Full Text Available The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces overcome the surface tension force, and the droplets slide over the wall and form a liquid film. This approach makes it possible to account simultaneously for the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN, Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  3. Development of a CAD Model Simplification Framework for Finite Element Analysis

    Science.gov (United States)

    2012-01-01

    Figure 1.1: Napier Sabre 24-cylinder aircraft engine [1]. These methods are not as widely used within the community. A popular technique in robotics is learning from demonstration [4]. Inductive decision trees have proven to be a successful tool within the

  4. Relevance Ranking of Video Data using Hidden Markov Model Distances and Polygon Simplification

    Science.gov (United States)

    2001-03-01


  5. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquake is one of the most serious disasters, which may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has therefore become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performances of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
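
    A minimal sketch of the conditional intensity described above, assuming exponential excitation kernels for brevity (seismological applications of this model family typically use power-law, Omori-type kernels):

```python
# Sketch of a combined self- and mutually exciting conditional intensity:
# lambda(t) = mu + sum over past earthquakes + sum over past non-seismic
# observations. Kernel shapes and parameter values are assumptions.
import math

def intensity(t, mu, quake_times, obs_times,
              a_self=0.5, b_self=1.0, a_ext=0.2, b_ext=0.5):
    """Conditional intensity of the Hawkes-type model at time t."""
    self_term = sum(a_self * math.exp(-b_self * (t - ti))
                    for ti in quake_times if ti < t)
    ext_term = sum(a_ext * math.exp(-b_ext * (t - tj))
                   for tj in obs_times if tj < t)
    return mu + self_term + ext_term

# Example: background rate plus two past quakes and two external observations.
print(intensity(10.0, mu=0.1, quake_times=[2.0, 7.5], obs_times=[6.0, 9.0]))
```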

  6. A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION

    OpenAIRE

    Ju, H.

    2012-01-01

    Bridge areas present difficulties for orthophoto generation, and to avoid “collapsed” bridges in the orthoimage, operator assistance is required to create a precise DBM (Digital Bridge Model), which is subsequently used for the orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a co...

  7. A Conditional Approach to Panel Data Models with Common Shocks

    Directory of Open Access Journals (Sweden)

    Giovanni Forchini

    2016-01-01

    Full Text Available This paper studies the effects of common shocks on the OLS estimators of the slopes’ parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on using results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.

  8. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested framework

  9. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.
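
    The ionization coefficient approach judges avalanche growth by integrating the effective ionization coefficient along a field line and comparing the result against a threshold. The sketch below illustrates that criterion; the exponential fit and the field-line data are placeholder assumptions, not the report's actual fits.

```python
# Sketch of the ionization-integral (avalanche/streamer) criterion:
# K = integral of alpha_eff(E) dx along a field line, compared to a threshold.
import math

def effective_alpha(reduced_field):
    # Hypothetical exponential fit for the effective ionization coefficient;
    # real fits depend on the gas and come from measured data.
    return 1e4 * math.exp(-2.0 / max(reduced_field, 1e-12))

def avalanche_integral(field_line, dx):
    """Trapezoidal integral of alpha_eff along a discretized field line."""
    alphas = [effective_alpha(e) for e in field_line]
    return sum(0.5 * (a0 + a1) * dx for a0, a1 in zip(alphas, alphas[1:]))

K_STREAMER = 18.0  # common order of magnitude for the streamer criterion
line = [0.3 + 0.05 * i for i in range(50)]  # synthetic reduced-field samples
K = avalanche_integral(line, dx=1e-4)
print("breakdown predicted" if K >= K_STREAMER else "no breakdown")
```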

  10. A Data Mining Approach to Modelling of Water Supply Assets

    DEFF Research Database (Denmark)

    Babovic, V.; Drecourt, J.; Keijzer, M.

    2002-01-01

    Water supply assets are mainly situated underground, and therefore not visible and under the influence of various highly unpredictable forces. This paper proposes the use of advanced data mining methods in order to determine the risks of pipe bursts. For example, analysis of the database of already occurred...... with the choice of pipes to be replaced, the outlined approach opens completely new avenues in asset modelling. The condition of an asset such as a water supply network deteriorates with age. With reliable risk models, addressing the evolution of risk with an aging asset, it is now possible to plan optimal...

  11. AN APPROACH IN MODELING TWO-DIMENSIONAL PARTIALLY CAVITATING FLOW

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An approach to modeling viscous, unsteady, partially cavitating flows around lifting bodies is presented. By employing a one-fluid Navier-Stokes solver, the algorithm is shown to be able to handle two-dimensional laminar cavitating flows at moderate Reynolds number. Based on the state equation of the water-vapor mixture, the constitutive relations between density and pressure are established. To numerically simulate the cavity wall, different pseudo-transition density models are assumed. The finite-volume method is adopted, and the algorithm can be extended to three-dimensional cavitating flows.

  12. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....

  13. THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS

    Directory of Open Access Journals (Sweden)

    O. Chernyak

    2016-12-01

    Full Text Available The paper presents a synthesis of theoretical models of balance-of-payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of a balance-of-payments crisis, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance-of-payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increased probability of a balance-of-payments crisis.
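
    One widely used way of calculating an Exchange Market Pressure Index, among the variants such papers compare, weights the components by the inverse of their standard deviations. A sketch under that assumption:

```python
# Sketch of a precision-weighted Exchange Market Pressure (EMP) index:
# depreciation minus weighted reserve change plus weighted interest change.
# The inverse-standard-deviation weighting is one common convention; the
# paper compares several calculation methods, so this is one variant only.
import statistics

def emp_index(d_exchange, d_reserves, d_interest):
    """Each argument: list of period-over-period percentage changes."""
    s_e = statistics.stdev(d_exchange)
    s_r = statistics.stdev(d_reserves)
    s_i = statistics.stdev(d_interest)
    return [de - (s_e / s_r) * dr + (s_e / s_i) * di
            for de, dr, di in zip(d_exchange, d_reserves, d_interest)]

# Example with synthetic data; a crisis signal fires when the index exceeds
# its mean by some multiple of its standard deviation.
emp = emp_index([1.2, 0.4, 5.6], [-0.5, 0.3, -4.1], [0.1, 0.0, 1.5])
cutoff = statistics.mean(emp) + 1.5 * statistics.stdev(emp)
signals = [value > cutoff for value in emp]
print(signals)
```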

  14. Laser modeling a numerical approach with algebra and calculus

    CERN Document Server

    Csele, Mark Steven

    2014-01-01

    Offering a fresh take on laser engineering, Laser Modeling: A Numerical Approach with Algebra and Calculus presents algebraic models and traditional calculus-based methods in tandem to make concepts easier to digest and apply in the real world. Each technique is introduced alongside a practical, solved example based on a commercial laser. Assuming some knowledge of the nature of light, emission of radiation, and basic atomic physics, the text:Explains how to formulate an accurate gain threshold equation as well as determine small-signal gainDiscusses gain saturation and introduces a novel pass

  15. Noether symmetry approach in f(R)-tachyon model

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Mubasher, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), H-12, Islamabad (Pakistan); Mahomed, F.M., E-mail: Fazal.Mahomed@wits.ac.za [Centre for Differential Equations, Continuum Mechanics and Applications, School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050 (South Africa); Momeni, D., E-mail: d.momeni@yahoo.com [Department of Physics, Faculty of Sciences, Tarbiat Moa' llem University, Tehran (Iran, Islamic Republic of)

    2011-08-26

    In this Letter by utilizing the Noether symmetry approach in cosmology, we attempt to find the tachyon potential via the application of this kind of symmetry to a flat Friedmann-Robertson-Walker (FRW) metric. We reduce the system of equations to simpler ones and obtain the general class of the tachyon's potential function and f(R) functions. We have found that the Noether symmetric model results in a power law f(R) and an inverse fourth power potential for the tachyonic field. Further we investigate numerically the cosmological evolution of our model and show explicitly the behavior of the equation of state crossing the cosmological constant boundary.
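
    For reference, the functional forms reported in the abstract can be written compactly; the constants f_0 and V_0 and the exponent n stand in for values the Letter fixes through the Noether symmetry conditions:

```latex
% Power-law f(R) and inverse fourth power tachyon potential, as stated in
% the abstract; f_0, V_0 and n are placeholder constants.
\begin{align}
  f(R)    &= f_0\, R^{\,n}, \\
  V(\phi) &= V_0\, \phi^{-4}.
\end{align}
```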

  16. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  17. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  18. Injury prevention risk communication: A mental models approach

    DEFF Research Database (Denmark)

    Austin, Laurel Cecelia; Fischhoff, Baruch

    2012-01-01

    Individuals' decisions and behaviour can play a critical role in determining both the probability and severity of injury. Behavioural decision research studies peoples' decision-making processes in terms comparable to scientific models of optimal choices, providing a basis for focusing...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the ‘mental models’ that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...... and create an expert model of the risk situation, interviewing lay people to elicit their comparable mental models, and developing and evaluating communication interventions designed to close the gaps between lay people and experts. This paper reviews the theory and method behind this research stream...

  19. An Integrated Approach to Flexible Modelling and Animated Simulation

    Institute of Scientific and Technical Information of China (English)

    Li Shuliang; Wu Zhenye

    1994-01-01

    Based on the software support of SIMAN/CINEMA, this paper presents an integrated approach to flexible modelling and simulation with animation. The methodology provides a structured way of integrating mathematical and logical models, statistical experimentation, and statistical analysis with computer animation. Within this methodology, an animated simulation study is separated into six different activities: simulation objectives identification, system model development, simulation experiment specification, animation layout construction, real-time simulation and animation run, and output data analysis. These six activities are objectives-driven, relatively independent, and integrated through software organization and simulation files. The key ideas behind this methodology are objectives orientation, modelling flexibility, simulation and animation integration, and application tailorability. Though the methodology is closely related to SIMAN/CINEMA, it can be extended to other software environments.

  20. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2012-12-01

    Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties of each gridblock were defined for two different types of gridblocks, called matrix and fracture gridblocks. These two types differed in porosity and permeability values, which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement of the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point, as predicted by other investigators. The effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.

  1. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Finally, five elevator key performance indicators are calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated, by comparing the key performance indicators calculated based on the estimated car acceleration and the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with the sliding mode observer.
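
    A bare-bones sketch of the observer idea follows: a discrete Kalman filter with a constant-acceleration state model recovers the car acceleration from the encoder position signal alone. The state model and noise covariances are assumptions of this sketch, not the paper's full electromechanical model.

```python
# Kalman observer sketch: estimate [position, velocity, acceleration] from
# position (encoder) measurements only. Covariances are tuning assumptions.
import numpy as np

dt = 0.01  # encoder sampling period [s]
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1.0]])          # constant-acceleration transition
H = np.array([[1.0, 0.0, 0.0]])      # only position is measured
Q = np.diag([1e-8, 1e-6, 1e-3])      # process noise (assumed)
R = np.array([[1e-6]])               # encoder noise (assumed)

x = np.zeros((3, 1))
P = np.eye(3)

def kalman_step(z):
    """One predict/update cycle; returns the car acceleration estimate."""
    global x, P
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update state
    P = (np.eye(3) - K @ H) @ P        # update covariance
    return float(x[2, 0])
```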

  2. A Model Independent Approach to (p)Reheating

    CERN Document Server

    Özsoy, Ogan; Sinha, Kuver; Watson, Scott

    2015-01-01

    In this note we propose a model independent framework for inflationary (p)reheating. Our approach is analogous to the Effective Field Theory of Inflation; here, however, the inflaton oscillations provide an additional source of (discrete) symmetry breaking. Using the Goldstone field that non-linearly realizes time diffeomorphism invariance, we construct a model independent action for both the inflaton and reheating sectors. Utilizing the hierarchy of scales present during the reheating process, we are able to recover known results in the literature in a simpler fashion, including the presence of oscillations in the primordial power spectrum. We also construct a class of models where the shift symmetry of the inflaton is preserved during reheating, which helps alleviate past criticisms of (p)reheating in models of Natural Inflation. Extensions of our framework suggest the possibility of analytically investigating non-linear effects (such as rescattering and back-reaction) during thermalization without resorting t...

  3. A model-based approach to human identification using ECG

    Science.gov (United States)

    Homer, Mark; Irvine, John M.; Wendelken, Suzanne

    2009-05-01

    Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Current techniques for identification using ECG rely on empirical methods for extracting features from the ECG signal. This paper presents an alternative approach based on a time-domain model of the ECG trace. Because Auto-Regressive Integrated Moving Average (ARIMA) models form a rich class of descriptors for representing the structure of periodic time series data, they are well-suited to characterizing the ECG signal. We present a method for modeling the ECG, extracting features from the model representation, and identifying individuals using these features.
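
    A compact sketch of the model-based feature idea, assuming statsmodels is available; the ARIMA order shown is an illustrative choice, not the order used in the paper.

```python
# Fit an ARIMA model to an ECG trace and use the fitted AR/MA coefficients
# as an identification feature vector. The order (p, d, q) is illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def ecg_features(ecg_trace, order=(4, 1, 2)):
    """Return the AR and MA coefficients of an ARIMA fit as a feature vector."""
    fit = ARIMA(ecg_trace, order=order).fit()
    return np.concatenate([fit.arparams, fit.maparams])

def match_score(features_a, features_b):
    """Smaller distance suggests the traces come from the same individual."""
    return float(np.linalg.norm(features_a - features_b))
```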

  4. Computer Modeling of Violent Intent: A Content Analysis Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  5. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Directory of Open Access Journals (Sweden)

    Matthew J. Daigle

    2011-01-01

    Full Text Available Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
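
    A compact sketch of the particle-filter prognostics loop described above: propagate particles through a degradation model, reweight against a measurement, resample, and read off a remaining-useful-life estimate. The linear degradation model and noise levels are placeholders, not the paper's physics-based valve model.

```python
# Particle filter prognostics sketch with a placeholder degradation model.
import random
from math import exp, pi, sqrt

N_P = 500
particles = [{"damage": 0.0, "rate": random.gauss(0.01, 0.002)}
             for _ in range(N_P)]

def gauss_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def step(measurement, meas_sigma=0.05):
    """Propagate, weight, and resample the particle set for one time step."""
    global particles
    for p in particles:  # propagate through the (placeholder) damage model
        p["damage"] += p["rate"] + random.gauss(0.0, 0.005)
    weights = [gauss_pdf(measurement, p["damage"], meas_sigma) for p in particles]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample with replacement; copy dicts to break aliasing.
    particles = [dict(p) for p in random.choices(particles, weights=weights, k=N_P)]

def median_rul(threshold=1.0):
    """Remaining useful life: steps until each particle crosses the threshold."""
    est = sorted((threshold - p["damage"]) / max(p["rate"], 1e-9)
                 for p in particles)
    return est[len(est) // 2]
```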

  6. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Science.gov (United States)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.

  7. Systems pharmacology modeling: an approach to improving drug safety.

    Science.gov (United States)

    Bai, Jane P F; Fontana, Robert J; Price, Nathan D; Sangar, Vineet

    2014-01-01

    Advances in systems biology in conjunction with the expansion in knowledge of drug effects and diseases present an unprecedented opportunity to extend traditional pharmacokinetic and pharmacodynamic modeling/analysis to conduct systems pharmacology modeling. Many drugs that cause liver injury and myopathies have been studied extensively. Mitochondrion-centric systems pharmacology modeling is important since drug toxicity across a large number of pharmacological classes converges to mitochondrial injury and death. Approaches to systems pharmacology modeling of drug effects need to consider drug exposure, organelle and cellular phenotypes across all key cell types of human organs, organ-specific clinical biomarkers/phenotypes, gene-drug interaction and immune responses. Systems modeling approaches, that leverage the knowledge base constructed from curating a selected list of drugs across a wide range of pharmacological classes, will provide a critically needed blueprint for making informed decisions to reduce the rate of attrition for drugs in development and increase the number of drugs with an acceptable benefit/risk ratio.

  8. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition. Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  9. THE MODEL OF EXTERNSHIP ORGANIZATION FOR FUTURE TEACHERS: QUALIMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    Taisiya A. Isaeva

    2015-01-01

    Full Text Available The aim of the paper is to present the author's model for bachelors, future teachers of vocational training. The model is worked out from the standpoint of the qualimetric approach and provides for pedagogical training. Methods. The work is based on an analysis of the literature on externship organization for students in higher education and includes SWOT-analysis techniques in pedagogical training. The method of group expert evaluation is the main method of pedagogical qualimetry. Structural components of the professional pedagogical competency of students, future teachers, are defined, which allows the level of its development to be determined, along with assessment criteria for mastering the programme «Vocational training (branch-wise)». Results. This article interprets the concept of «pedagogical training»; its basic organizational principles during students' practice are stated. The methods of expert group formation are presented: self-assessment and personal data. Scientific novelty. An externship organization model for future teachers is developed. The model is based on pedagogical training, using the qualimetric approach and SWOT-analysis techniques. The proposed criterion-assessment procedures make it possible to determine the levels of development of professional and pedagogical competency. Practical significance. The model has been introduced into the educational process of Kalashnikov Izhevsk State Technical University and can be used in other similar educational establishments.

  10. Spatiotemporal infectious disease modeling: a BME-SIR approach.

    Science.gov (United States)

    Angulo, Jose; Yu, Hwa-Lung; Langousis, Andrea; Kolovos, Alexander; Wang, Jinfeng; Madrid, Ana Esther; Christakos, George

    2013-01-01

    This paper is concerned with the modeling of infectious disease spread in a composite space-time domain under conditions of uncertainty. We focus on stochastic modeling that accounts for basic mechanisms of disease distribution and multi-sourced in situ uncertainties. Starting from the general formulation of population migration dynamics and the specification of transmission and recovery rates, the model studies the functional formulation of the evolution of the fractions of susceptible-infected-recovered individuals. The suggested approach is capable of: a) modeling population dynamics within and across localities, b) integrating the disease representation (i.e. susceptible-infected-recovered individuals) with observation time series at different geographical locations and other sources of information (e.g. hard and soft data, empirical relationships, secondary information), and c) generating predictions of disease spread and associated parameters in real time, while considering model and observation uncertainties. Key aspects of the proposed approach are illustrated by means of simulations (i.e. synthetic studies), and a real-world application using hand-foot-mouth disease (HFMD) data from China.
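
    The susceptible-infected-recovered core on which the BME-SIR formulation builds can be sketched for a single locality as below; the transmission and recovery rates are illustrative, and none of the BME machinery (space-time covariance, soft data, real-time updating) is reproduced here.

```python
# Forward-Euler sketch of the SIR fractions for one locality.
# beta (transmission) and gamma (recovery) are illustrative values.
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR fractions (s + i + r = 1)."""
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.1, dt=0.1)
print(f"final fractions: S={s:.3f} I={i:.3f} R={r:.3f}")
```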

  11. Approaching the other: Investigation of a descriptive belief revision model

    Directory of Open Access Journals (Sweden)

    Spyridon Stelios

    2016-12-01

    Full Text Available When an individual (a hearer) is confronted with an opinion expressed by another individual (a speaker) that differs from her own only in the degree of belief, how will she react? In trying to answer that question, this paper reintroduces and investigates a descriptive belief revision model designed to measure approaches. Parameters of the model are the hearer's credibility account of the speaker, the initial difference between the hearer's and speaker's degrees of belief, and the hearer's resistance to change. Within an interdisciplinary framework, two empirical studies were conducted, and a comparison was carried out between empirically recorded revisions and revisions according to the model. Results showed that the theoretical model is highly confirmed. An interesting finding is the measurement of an "unexplainable behaviour" that is classified neither as repulsion nor as approach. At a second level of analysis, the model is compared to the Bayesian framework of inference. Structural differences, and evidence for the optimal descriptive adequacy of the former, are highlighted.
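
    A hypothetical sketch of a parametric update consistent with the three model parameters named above; the multiplicative form is an assumption of this sketch, as the paper specifies its own functional form.

```python
# Hypothetical belief revision update: the hearer moves toward the speaker's
# degree of belief in proportion to credibility and inversely with resistance.
def revised_belief(b_hearer, b_speaker, credibility, resistance):
    """All inputs in [0, 1]; returns the hearer's post-exchange belief."""
    step = credibility * (1.0 - resistance) * (b_speaker - b_hearer)
    return min(1.0, max(0.0, b_hearer + step))

# Example: a credible speaker shifts a moderately resistant hearer upward.
print(revised_belief(b_hearer=0.3, b_speaker=0.8, credibility=0.9, resistance=0.4))
```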

  12. Generalized linear models with coarsened covariates: a practical Bayesian approach.

    Science.gov (United States)

    Johnson, Timothy R; Wiest, Michelle M

    2014-06-01

    Coarsened covariates are a common and sometimes unavoidable phenomenon encountered in statistical modeling. Covariates are coarsened when their values or categories have been grouped. This may be done to protect privacy or to simplify data collection or analysis when researchers are not aware of their drawbacks. Analyses with coarsened covariates based on ad hoc methods can compromise the validity of inferences. One valid method for accounting for a coarsened covariate is to use a marginal likelihood derived by summing or integrating over the unknown realizations of the covariate. However, algorithms for estimation based on this approach can be tedious to program and can be computationally expensive. These are significant obstacles to their use in practice. To overcome these limitations, we show that when expressed as a Bayesian probability model, a generalized linear model with a coarsened covariate can be posed as a tractable missing data problem where the missing data are due to censoring. We also show that this model is amenable to widely available general-purpose software for simulation-based inference for Bayesian probability models, providing researchers a very practical approach for dealing with coarsened covariates.
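
    A minimal sketch of the marginal-likelihood construction described above: sum the likelihood over the possible underlying covariate values within the observed category. A logistic-regression likelihood and uniform within-category weights are assumptions of this sketch.

```python
# Marginal likelihood for a coarsened covariate: sum over the unknown
# realizations within the observed category, weighted by their probabilities.
import math

def logistic_lik(y, x, beta0, beta1):
    """Bernoulli likelihood under a logistic regression model."""
    p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
    return p if y == 1 else 1.0 - p

def marginal_lik(y, category_values, beta0, beta1, weights=None):
    """Likelihood of outcome y when only the covariate's category is known."""
    if weights is None:  # uniform within-category weights (assumed)
        weights = [1.0 / len(category_values)] * len(category_values)
    return sum(w * logistic_lik(y, x, beta0, beta1)
               for x, w in zip(category_values, weights))

# Example: the covariate is coarsened into the bin {20, ..., 29} (e.g. an
# age group), and we evaluate the marginal likelihood of y = 1.
print(marginal_lik(1, list(range(20, 30)), beta0=-3.0, beta1=0.1))
```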

  13. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on an analysis of the design schemes of pneumatic system aggregates, two basic objects, namely a flow cavity and a material point, were identified. The basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy through mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented in object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. The numerical integration of the differential equations for the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. the calculation of one coefficient of the method. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. With regard to traditional approaches to modeling, the method proposed by the authors features easy enhancement, code reuse, high reliability
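
    A sketch of the integration scheme as described: each object in the list computes one Runge-Kutta coefficient per tact, and the state advances after every fourth tact. The Cavity class and its pressure dynamics are hypothetical stand-ins for the paper's flow-cavity and material-point objects.

```python
# RK4 integration organized as "tacts": one coefficient per tact per object,
# with the state advanced every fourth tact. Dynamics are placeholders.
class Cavity:
    def __init__(self, pressure, inflow_rate):
        self.p = pressure
        self.inflow = inflow_rate
        self.k = [0.0] * 4                  # the four RK4 coefficients

    def derivative(self, p):
        return self.inflow - 0.05 * p       # placeholder pressure dynamics

    def tact(self, stage, dt):
        """Compute the RK4 coefficient for the given stage (0..3)."""
        offsets = [0.0, 0.5, 0.5, 1.0]
        prev_k = self.k[stage - 1] if stage else 0.0
        self.k[stage] = self.derivative(self.p + offsets[stage] * dt * prev_k)

    def advance(self, dt):
        self.p += dt / 6.0 * (self.k[0] + 2*self.k[1] + 2*self.k[2] + self.k[3])

objects = [Cavity(100.0, 2.0), Cavity(80.0, 1.5)]
dt = 0.01
for step in range(400):                      # four tacts per integration step
    stage = step % 4
    for obj in objects:
        obj.tact(stage, dt)
    if stage == 3:
        for obj in objects:
            obj.advance(dt)
```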

  14. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin eBhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of 'toxicity pathways' is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically-based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  15. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using physical complex climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.

  16. Minor actinide separation: simplification of the DIAMEX-SANEX strategy by means of novel SANEX processes

    Energy Technology Data Exchange (ETDEWEB)

    Geist, A. [Karlsruher Institut fuer Technologie - KIT, INE, P. O. Box 3640, 76021 Karlsruhe (Germany); Modolo, G.; Wilden, A.; Kaufholz, P. [Forschungszentrum Juelich GmbH, IEK-6, Juelich (Germany)

    2013-07-01

    The separation of An(III) from PUREX raffinate has previously been demonstrated by applying a DIAMEX process (i.e., co-extraction of An(III) and Ln(III) from HAR) followed by a SANEX process (i.e., selective extraction of An(III) from the DIAMEX product containing An(III) + Ln(III)). In line with process intensification goals, more compact processes have been developed. Recently, a 1c-SANEX process test was successfully performed, directly extracting An(III) from PUREX HAR. More recently, a new i-SANEX process was successfully tested. This process is based on the co-extraction of An(III) + Ln(III) into a TODGA solvent, followed by a selective back-extraction of An(III) by a water-soluble complexing agent, in this case SO{sub 3}-Ph-BTP. In both cases, good recoveries were achieved, and very pure product solutions were obtained. However, both 1c-SANEX and i-SANEX used non-CHON chemicals. Nevertheless, these processes are a simplification of the DIAMEX + SANEX process as only one solvent is used. Finally, the new i-SANEX process is the most compact process. (authors)

  17. Simplification of a haemolytic micromethod for toxic saponin quantification in alfalfa

    Directory of Open Access Journals (Sweden)

    Piotr M. Górski

    2013-12-01

    Full Text Available A simplification of a haemolytic micromethod is presented. In the original method, alfalfa (Medicago media Pers.) leaf sap is spotted on a plate covered with a blood-gelatine suspension. In the presented method, mashed alfalfa pulp is used instead of sap. Due to saponin diffusion and the reaction with erythrocytes, a haemolytic ring appears, whose width is proportional to the concentration of toxic saponins. It is shown that the width of the haemolytic ring does not depend on the sample weight in the range of 20 to 100 mg. This allows the laborious sap pressing and sample quantification to be omitted. Individual alfalfa plants with different saponin contents were tested using leaf sap and leaf pulp for the analyses. Good agreement was found between the sap and leaf pulp methods; the correlation obtained by the two methods was high, r = 0.87. The modified method requires only a small amount of plant material and makes the analysis of large numbers of individual plants per day possible. The method is especially recommended for breeding purposes.

  18. Amphibian skull evolution: the developmental and functional context of simplification, bone loss and heterotopy.

    Science.gov (United States)

    Schoch, Rainer R

    2014-12-01

    Despite their divergent morphology, extant and extinct amphibians share numerous features in the timing and spatial patterning of dermal skull elements. Here, I show how the study of these features leads to a deeper understanding of morphological evolution. Batrachians (salamanders and frogs) have simplified skulls, with dermal bones appearing rudimentary compared with fossil tetrapods, and open cheeks resulting from the absence of other bones. The batrachian skull bones may be derived from those of temnospondyls by truncation of the developmental trajectory. The squamosal, quadratojugal, parietal, prefrontal, parasphenoid, palatine, and pterygoid form rudimentary versions of their homologs in temnospondyls. In addition, failure to ossify and early fusion of bone primordia both result in the absence of further bones that were consistently present in Paleozoic tetrapods. Here, I propose a new hypothesis explaining the observed patterns of bone loss and emargination in a functional context. The starting observation is that jaw-closing muscles are arranged in a different way than in ancestors from the earliest ontogenetic stage onwards, with muscles attaching to the dorsal side of the frontal, parietal, and squamosal. The postparietal and supratemporal start to ossify in a similar way as in branchiosaurids, but are fused to neighboring elements to form continuous attachment areas for the internal adductor. The postfrontal, postorbital, and jugal fail to ossify, as their position is inconsistent with the novel arrangement of adductor muscles. Thus, rearrangement of adductors forms the common theme behind cranial simplification, driven by an evolutionary flattening of the skull in the batrachian stem.

  19. ABOUT COMPLEX APPROACH TO MODELLING OF TECHNOLOGICAL MACHINES FUNCTIONING

    Directory of Open Access Journals (Sweden)

    A. A. Honcharov

    2015-01-01

    Full Text Available Problems arise in the process of designing, producing and investigating a complicated technological machine. These problems concern not only the properties of particular types of equipment but also the regularities of the functioning of the control object as a whole. A technological machine is thought of as a technological complex in which a control system (or controlling device) and a controlled object can be distinguished. The paper analyzes a number of existing approaches to the construction of models of controlling devices and their functioning. A complex model for technological machine operation is proposed, covering the functioning of both the controlling device and the controlled object of the technological machine. In this case, the models of the controlling device and the controlled object can be represented as a combination of aggregates (elements of these models). The paper describes a concept for realizing the complex model of a technological machine as a model of the interaction of units (elements) in the controlling device and the controlled object. When a control activation is given to the controlling device of the technological machine, its modelling is executed at an algorithmic or logic level, the output signals obtained are interpreted as events, and information about them is transferred to the executive mechanisms. The proposed scheme of aggregate integration considers element models as object classes, and the integration scheme is presented as a combination of object property values (a combination of a great many input and output contacts) and a combination of object interactions (in the form of an integration operator). Spawning descendants of the parent objects of the technological machine model, and creating copies of them in various parts of a project, is one of the most important means of distributed technological machine modelling, making it possible to develop complicated models of

  20. Mesh Simplification with 3D Edge Preservation

    Institute of Scientific and Technical Information of China (English)

    刘万春; 贾云得; 朱玉文; 李莉; 李科杰

    2001-01-01

    Mesh simplification of 3D object models for computer vision applications should preserve the object's shape, topology and other attribute values, so that a vision system can be effective in the presentation, description, recognition and understanding of the object. A mesh simplification algorithm with 3D edge preservation is proposed, which applies the edge operations edge-collapse and edge-split to the whole surface mesh. The maximum asymmetric distance between two meshes is calculated as the shape-change measure. While greatly reducing the number of faces, the proposed algorithm preserves the object's shape and topology, surface attributes such as normals, texture and color, and 3D edge points and edges, and it distributes the vertices of the mesh evenly over the surface of the object.
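
    A sketch of the shape-change measure named above, the maximum asymmetric (one-sided) distance between meshes. Real implementations measure point-to-surface distance over dense samples; vertex-to-vertex distance is a simplifying assumption here.

```python
# Maximum asymmetric distance between two vertex sets:
# max over a in A of (min over b in B of |a - b|).
import math

def max_asymmetric_distance(verts_a, verts_b):
    """One-sided Hausdorff distance, sampled at vertices only."""
    return max(min(math.dist(a, b) for b in verts_b) for a in verts_a)

original = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
simplified = [(0, 0, 0), (1, 1, 0)]
# An edge collapse would be accepted only if the resulting shape change
# stays below a user tolerance:
tolerance = 0.1
if max_asymmetric_distance(simplified, original) < tolerance:
    print("collapse accepted")
else:
    print("collapse rejected")
```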