WorldWideScience

Sample records for model simplification approach

  1. An Integrated Simplification Approach for 3D Buildings with Sloped and Flat Roofs

    Directory of Open Access Journals (Sweden)

    Jinghan Xie

    2016-07-01

    Simplification of three-dimensional (3D) buildings is critical to improve the efficiency of visualizing urban environments while ensuring realistic urban scenes. Moreover, it underpins the construction of multi-scale 3D city models (3DCMs), which can be applied to study various urban issues. In this paper, we design a generic yet effective approach for simplifying 3D buildings. Instead of relying on both semantic and geometric information, our approach is based solely on geometric information, as many 3D buildings still do not include semantic information. In addition, it provides an integrated means to treat 3D buildings with either sloped or flat roofs. Two case studies, one exploring the simplification of individual 3D buildings at varying levels of complexity and the other investigating the multi-scale simplification of a cityscape, show the effectiveness of our approach.

  2. Surface Simplification of 3D Animation Models Using Robust Homogeneous Coordinate Transformation

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2014-01-01

    The goal of 3D surface simplification is to reduce the storage cost of 3D models. A 3D animation model typically consists of several 3D models. Therefore, to ensure that animation models are realistic, numerous triangles are often required. However, animation models that have a high storage cost have a substantial computational cost. Hence, surface simplification methods are adopted to reduce the number of triangles and computational cost of 3D models. Quadric error metrics (QEM) has recently been identified as one of the most effective methods for simplifying static models. To simplify animation models by using QEM, Mohr and Gleicher summed the QEM of all frames. However, homogeneous coordinate problems cannot be considered completely by using QEM. To resolve this problem, this paper proposes a robust homogeneous coordinate transformation that improves the animation simplification method proposed by Mohr and Gleicher. In this study, the root mean square errors of the proposed method were compared with those of the method proposed by Mohr and Gleicher, and the experimental results indicated that the proposed approach can preserve more contour features than Mohr’s method can at the same simplification ratio.
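
    A minimal sketch of the summed-quadrics baseline that this paper improves on, assuming the standard QEM formulation (per-face quadrics accumulated at vertices in every frame, after Mohr and Gleicher); function names and the midpoint-placement fallback are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def face_quadric(v0, v1, v2):
        """Fundamental quadric K = p p^T of the plane through a triangle."""
        n = np.cross(v1 - v0, v2 - v0)
        norm = np.linalg.norm(n)
        if norm == 0.0:
            return np.zeros((4, 4))
        n = n / norm
        p = np.append(n, -np.dot(n, v0))   # plane coefficients [a, b, c, d]
        return np.outer(p, p)

    def vertex_quadrics(verts, faces):
        """Per-vertex quadric: sum of quadrics of incident faces."""
        Q = np.zeros((len(verts), 4, 4))
        for i, j, k in faces:
            Kf = face_quadric(verts[i], verts[j], verts[k])
            Q[i] += Kf; Q[j] += Kf; Q[k] += Kf
        return Q

    def summed_animation_quadrics(frames, faces):
        """Animation QEM baseline: sum each vertex quadric over all
        frames so that one collapse order serves the whole sequence."""
        Q_total = np.zeros((frames.shape[1], 4, 4))
        for verts in frames:               # frames: (n_frames, n_verts, 3)
            Q_total += vertex_quadrics(verts, faces)
        return Q_total

    def edge_cost(Q_total, verts, i, j):
        """Cost of collapsing edge (i, j) to its midpoint (a common
        fallback when the optimal-position system is ill-conditioned)."""
        v = np.append((verts[i] + verts[j]) / 2.0, 1.0)
        return float(v @ (Q_total[i] + Q_total[j]) @ v)
    ```

    The paper's contribution, a robust homogeneous coordinate transformation, would change how the per-frame coordinates enter these quadrics; that step is not reproduced here.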

  3. Hybrid stochastic simplifications for multiscale gene networks

    Directory of Open Access Journals (Sweden)

    Debussche Arnaud

    2009-09-01

    Abstract Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
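
    As a hedged illustration of the hybrid idea (not the authors' algorithm), the sketch below keeps a slow gene on/off switch as a discrete jump process while the abundant protein's birth-death reactions are replaced by a chemical-Langevin diffusion, i.e. the central-limit/partial Kramers-Moyal step applied to one sub-model. All rate constants are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    k_on, k_off = 0.05, 0.02     # slow gene switching (kept as jumps)
    k_prod, k_deg = 50.0, 1.0    # fast protein kinetics (made continuous)

    def hybrid_simulation(t_end=200.0, dt=1e-3):
        """Euler-Maruyama for the continuous part; small-dt thinning
        for the discrete gene switch."""
        t_grid = np.arange(0.0, t_end, dt)
        gene, p = 0, 20.0
        protein = np.empty_like(t_grid)
        for n in range(len(t_grid)):
            # discrete jump component: gene toggles with probability rate*dt
            rate = k_on if gene == 0 else k_off
            if rng.random() < rate * dt:
                gene = 1 - gene
            # continuous component: chemical Langevin for the protein
            a_prod, a_deg = k_prod * gene, k_deg * p
            drift = a_prod - a_deg
            diffusion = np.sqrt(max(a_prod + a_deg, 0.0))
            p = max(p + drift * dt + diffusion * np.sqrt(dt) * rng.normal(), 0.0)
            protein[n] = p
        return t_grid, protein
    ```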

  4. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    ... environment. As one of the most important tasks in augmented scene modeling, terrain simplification research has gained more and more attention. In this paper, we mainly focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on the analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with the traditional methods.
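
    The abstract does not spell the measure out, but a plausible reading of a local-entropy importance criterion is the Shannon entropy of the elevation histogram around each grid point. The hypothetical sketch below ranks DEM points that way and keeps the top fraction as TIN vertices (the triangulation itself is omitted):

    ```python
    import numpy as np

    def local_entropy(dem, i, j, w=3, bins=16):
        """Shannon entropy of elevations in a (2w+1)x(2w+1) window."""
        win = dem[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
        hist, _ = np.histogram(win, bins=bins)
        prob = hist / hist.sum()
        prob = prob[prob > 0]
        return float(-(prob * np.log2(prob)).sum())

    def select_tin_points(dem, keep_ratio=0.05):
        """Keep the highest-entropy fraction of grid points."""
        rows, cols = dem.shape
        imp = np.array([[local_entropy(dem, i, j) for j in range(cols)]
                        for i in range(rows)])
        k = int(keep_ratio * dem.size)
        flat = np.argsort(imp, axis=None)[-k:]
        return np.column_stack(np.unravel_index(flat, dem.shape))
    ```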

  5. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification

    Directory of Open Access Journals (Sweden)

    Richard J Allen

    2017-03-01

    Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing ‘transfer function’. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.
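
    A toy version of the bounding idea, under the stated assumption that the chain is a series of Michaelis-Menten enzymes: the flux of a single transfer function v(s) = V·s/(K + s) standing in for the whole chain can never exceed the smallest Vmax in the chain, which bounds V before any fitting. The enzyme constants and "data" below are invented for illustration:

    ```python
    import numpy as np

    enzymes = [(10.0, 5.0), (8.0, 2.0), (12.0, 7.0)]   # (Vmax, Km), illustrative

    V_upper = min(vmax for vmax, _ in enzymes)   # chain flux <= slowest step

    def transfer_function(s, V, K):
        return V * s / (K + s)

    # With V bounded, K is calibrated against whatever dose-response data
    # exist; here a crude grid search on synthetic points:
    s_data = np.array([1.0, 5.0, 20.0, 80.0])
    v_data = np.array([1.2, 4.0, 6.5, 7.4])
    K_grid = np.linspace(0.1, 50.0, 500)
    sse = [np.sum((transfer_function(s_data, V_upper, K) - v_data) ** 2)
           for K in K_grid]
    K_best = K_grid[int(np.argmin(sse))]
    ```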

  6. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification.

    Science.gov (United States)

    Allen, Richard J; Musante, Cynthia J

    2017-01-01

    Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing 'transfer function'. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.

  7. Electric Power Distribution System Model Simplification Using Segment Substitution

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2018-05-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
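
    A heavily reduced sketch of the black-box flavour of segment substitution (not the authors' procedure): solve the detailed segment for a few loading conditions, then fit the parameter of an assumed simplified topology, here a single series impedance, to reproduce the terminal behaviour. The sample values stand in for solved power-flow cases:

    ```python
    import numpy as np

    # terminal behaviour of the detailed segment, treated as a black box
    V_send = np.array([1.000, 1.000, 1.000], dtype=complex)          # pu
    I_load = np.array([0.10 + 0.02j, 0.20 + 0.05j, 0.30 + 0.08j])    # pu
    V_recv = np.array([0.995 - 0.001j, 0.990 - 0.003j, 0.984 - 0.005j])

    # least-squares fit of Z in  V_recv = V_send - Z * I_load
    Z = np.linalg.lstsq(I_load.reshape(-1, 1), V_send - V_recv,
                        rcond=None)[0][0]
    print(f"equivalent segment impedance: {Z:.4f} pu")
    ```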

  8. Electric Power Distribution System Model Simplification Using Segment Substitution

    International Nuclear Information System (INIS)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2017-01-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  9. Simplification of one-dimensional hydraulic networks by automated processes evaluated on 1D/2D deterministic flood models

    DEFF Research Database (Denmark)

    Davidsen, Steffen; Löwe, Roland; Thrysøe, Cecilie

    2017-01-01

    Evaluation of pluvial flood risk is often based on computations using 1D/2D urban flood models. However, guidelines on choice of model complexity are missing, especially for one-dimensional (1D) network models. This study presents a new automatic approach for simplification of 1D hydraulic networ...

  10. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  11. SAHM - Simplification of one-dimensional hydraulic networks by automated processes evaluated on 1D/2D deterministic flood models

    DEFF Research Database (Denmark)

    Löwe, Roland; Davidsen, Steffen; Thrysøe, Cecilie

    We present an algorithm for automated simplification of 1D pipe network models. The impact of the simplifications on the flooding simulated by coupled 1D-2D models is evaluated in an Australian case study. Significant reductions of the simulation time of the coupled model are achieved by reducing the 1D network model. The simplifications lead to an underestimation of flooded area because interaction points between network and surface are removed and because water is transported downstream faster. These effects can be mitigated by maintaining nodes in flood-prone areas in the simplification and by adjusting pipe roughness to increase transport times.

  12. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one

  13. Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2005-01-01

    We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large

  14. An Agent Based Collaborative Simplification of 3D Mesh Model

    Science.gov (United States)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

    Large-volume mesh models face challenges in fast rendering and transmission over the Internet. The mesh models currently obtained by using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the development platform of Mobile-C. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating grabbed images, and instant messaging. Remote and collaborative simplification can be conducted efficiently over the Internet.

  15. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scien...

  16. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  17. A New Approach to Line Simplification Based on Image Processing: A Case Study of Water Area Boundaries

    Directory of Open Access Journals (Sweden)

    Yilang Shen

    2018-01-01

    Line simplification is an important component of map generalization. In recent years, algorithms for line simplification have been widely researched, and most of them are based on vector data. However, with the increasing development of computer vision, analysing and processing information from unstructured image data is both meaningful and challenging. Therefore, in this paper, we present a new line simplification approach based on image processing (BIP), which is specifically designed for raster data. First, the key corner points on a multi-scale image feature are detected and treated as candidate points. Then, to capture the essence of the shape within a given boundary using the fewest possible segments, the minimum-perimeter polygon (MPP) is calculated and the points of the MPP are defined as the approximate feature points. Finally, the points after simplification are selected from the candidate points by comparing the distances between the candidate points and the approximate feature points. An empirical example was used to test the applicability of the proposed method. The results showed that (1) when the key corner points are detected based on a multi-scale image feature, the local features of the line can be extracted and retained and the positional accuracy of the proposed method can be maintained well; and (2) by defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas, as it is aligned with people’s visual habits.
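
    A sketch of the BIP pipeline on a binary water mask, with two loudly flagged stand-ins: cv2.goodFeaturesToTrack replaces the multi-scale key-corner detection, and cv2.approxPolyDP (Douglas-Peucker) replaces the minimum-perimeter polygon, which OpenCV does not provide. Simplified points are the candidate corners nearest to the approximate feature points, mirroring the paper's final selection step:

    ```python
    import cv2
    import numpy as np

    def simplify_water_boundary(mask, max_corners=300, eps=2.0, tol=5.0):
        img = (mask > 0).astype(np.uint8) * 255
        # candidate points: corner detection on the raster boundary
        corners = cv2.goodFeaturesToTrack(img, max_corners, 0.01, 5)
        candidates = corners.reshape(-1, 2).astype(float)
        # approximate feature points: polygonal approximation of the
        # largest boundary contour (stand-in for the MPP)
        contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boundary = max(contours, key=cv2.contourArea)
        approx = cv2.approxPolyDP(boundary, eps, True).reshape(-1, 2)
        # keep the candidate nearest each approximate feature point
        kept = []
        for a in approx.astype(float):
            d = np.linalg.norm(candidates - a, axis=1)
            if d.min() < tol:
                kept.append(candidates[d.argmin()])
        return np.array(kept)
    ```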

  18. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.
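
    A crude proxy for the robustness idea on a regularly sampled 2D field; the paper works in a piecewise-linear setting with degree theory, none of which is reproduced here. Critical cells are detected by component sign changes, and each is scored by the smallest vector magnitude nearby, a stand-in for the minimum perturbation needed to cancel it:

    ```python
    import numpy as np

    def critical_cells(U, V):
        """Grid cells where both components change sign."""
        cells = []
        for i in range(U.shape[0] - 1):
            for j in range(U.shape[1] - 1):
                u, v = U[i:i + 2, j:j + 2], V[i:i + 2, j:j + 2]
                if u.min() < 0 < u.max() and v.min() < 0 < v.max():
                    cells.append((i, j))
        return cells

    def robustness_proxy(U, V, cell, r=3):
        """Smallest vector magnitude near the critical cell."""
        i, j = cell
        sl = np.s_[max(i - r, 0):i + r + 2, max(j - r, 0):j + r + 2]
        return float(np.sqrt(U[sl] ** 2 + V[sl] ** 2).min())

    def prune(U, V, eps):
        """Drop critical points whose robustness proxy is below eps."""
        return [c for c in critical_cells(U, V)
                if robustness_proxy(U, V, c) > eps]
    ```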

  19. Infrastructure Area Simplification Plan

    CERN Document Server

    Field, L.

    2011-01-01

    The infrastructure area simplification plan was presented at the 3rd EMI All Hands Meeting in Padova. This plan only affects the information and accounting systems as the other areas are new in EMI and hence do not require simplification.

  20. Homotopic Polygonal Line Simplification

    DEFF Research Database (Denmark)

    Deleuran, Lasse Kosetski

    This thesis presents three contributions to the area of polygonal line simplification, or simply line simplification. A polygonal path, or simply a path, is a list of points with line segments between the points. A path can be simplified by morphing it in order to minimize some objective function...

  1. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications

    International Nuclear Information System (INIS)

    Ericsson, Lars O.; Holmen, Johan

    2010-12-01

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geo-scientifically available data compilations from the Småland region, but different conceptual assumptions have been analysed.

  2. Complexity and simplification in understanding recruitment in benthic populations

    KAUST Repository

    Pineda, Jesú s; Reyns, Nathalie B.; Starczak, Victoria R.

    2008-01-01

    reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often

  3. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The equivalent simplification method is proposed for interaction studies between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...

  4. Influence of the degree of simplification of the two-phase hydrodynamic model on the simulated behaviour dynamics of a steam generator

    International Nuclear Information System (INIS)

    Dupont, J.F.

    1979-03-01

    The principal simplifications of a mathematical model for the simulation of behaviour dynamics of a two-phase flow with heat exchange are examined, as it appears in a steam generator. The theoretical considerations and numerical solutions permit the evaluation of the validity limits and the influence of these simplifications on the results. (G.T.H.)

  5. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the impact of our tool on the task of PPI extraction; it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  6. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    Science.gov (United States)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
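
    A minimal finite-state sketch of the removal step described above; the paper's contribution is extending this to infinite state spaces, which the sketch does not attempt. Eliminating a fast state f reroutes its traffic so that the effective i→j rate is the direct rate plus the indirect rate through f:

    ```python
    import numpy as np

    def eliminate_state(Q, f):
        """Remove state f from generator matrix Q. The effective rate
        i -> j becomes Q[i,j] plus the rate into f times the probability
        that f exits to j."""
        keep = [s for s in range(Q.shape[0]) if s != f]
        out_f = -Q[f, f]                       # total leaving rate of f
        R = np.zeros((len(keep), len(keep)))
        for a, i in enumerate(keep):
            for b, j in enumerate(keep):
                if i != j:
                    R[a, b] = Q[i, j] + Q[i, f] * (Q[f, j] / out_f)
        np.fill_diagonal(R, -R.sum(axis=1))
        return R

    # Example: 0 -> f -> 1 where f leaves quickly; effective 0 -> 1 rate is 1.0
    Q = np.array([[-1.0,    1.0,   0.0],
                  [ 0.0, -100.0, 100.0],
                  [ 0.0,    0.0,   0.0]])
    R = eliminate_state(Q, 1)
    ```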

  7. Work Simplification

    Science.gov (United States)

    Ross, Lynne

    1970-01-01

    Excerpts from a talk by Mrs. Ross at the 23rd annual convention of the American School Food Service Association in Detroit, August 5, 1969. A book on work simplification by Mrs. Ross will be available in June from the Iowa State University Press, Ames, Iowa. (Editor)

  8. Simplification: A Viewpoint in Outline. Appendix.

    Science.gov (United States)

    Tickoo, Makhan L.

    This essay examines language simplification for second language learners as a linguistic and a pedagogic phenomenon, posing questions for further study by considering past research. It discusses linguistic simplification (LS) in relation to the development of artificial languages, such as Esperanto, "pidgin" languages, Basic English,…

  9. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.
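
    A greedy sketch in the spirit of the resource-augmented setting: keep at most 2k internal points, and when the budget is exceeded, drop the internal point whose shortcut error is smallest. The paper's actual algorithms charge accumulated errors to achieve the O(1) competitive ratio; this simplified variant does not:

    ```python
    import numpy as np

    def seg_dist(p, a, b):
        """Distance from p to segment ab: the error of the shortcut
        that skips p (a Hausdorff-style measure)."""
        a, b, p = map(np.asarray, (a, b, p))
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * ab)))

    def streaming_simplify(points, k):
        kept = []
        for p in points:
            kept.append(p)
            if len(kept) > 2 * k + 2:        # 2k internal points + endpoints
                errs = [seg_dist(kept[i], kept[i - 1], kept[i + 1])
                        for i in range(1, len(kept) - 1)]
                del kept[1 + int(np.argmin(errs))]
        return kept
    ```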

  10. Organisational simplification and secondary complexity in health services for adults with learning disabilities.

    Science.gov (United States)

    Heyman, Bob; Swain, John; Gillman, Maureen

    2004-01-01

    This paper explores the role of complexity and simplification in the delivery of health care for adults with learning disabilities, drawing upon qualitative data obtained in a study carried out in NE England. It is argued that the requirement to manage complex health needs with limited resources causes service providers to simplify, standardise and routinise care. Simplified service models may work well enough for the majority of clients, but can impede recognition of the needs of those whose characteristics are not congruent with an adopted model. The data were analysed in relation to the core category, identified through thematic analysis, of secondary complexity arising from organisational simplification. Organisational simplification generates secondary complexity when operational routines designed to make health complexity manageable cannot accommodate the needs of non-standard service users. Associated themes, namely the social context of services, power and control, communication skills, expertise and service inclusiveness and evaluation are explored in relation to the core category. The concept of secondary complexity resulting from organisational simplification may partly explain seemingly irrational health service provider behaviour.

  11. Simplifications of rational matrices by using UML

    OpenAIRE

    Tasić, Milan B.; Stanimirović, Ivan P.

    2013-01-01

    The simplification process on rational matrices consists of simplifying each entry represented by a rational function. We follow the classic approach of dividing the numerator and denominator polynomials by their common GCD polynomial, and provide the activity diagram in UML for this process. A rational matrix representation as the quotient of a polynomial matrix and a polynomial is also discussed here and illustrated via activity diagrams. Also, a class diagram giving the links between the c...

  12. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz; Wang, Bei; Chen, Guoning; Rosen, Paul

    2014-01-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification

  13. Simplifications and Idealizations in High School Physics in Mechanics: A Study of Slovenian Curriculum and Textbooks

    Science.gov (United States)

    Forjan, Matej; Sliško, Josip

    2014-01-01

    This article presents the results of an analysis of three Slovenian textbooks for high school physics, from the point of view of simplifications and idealizations in the field of mechanics. In modeling of physical systems, making simplifications and idealizations is important, since one ignores minor effects and focuses on the most important…

  14. Complexity and simplification in understanding recruitment in benthic populations

    KAUST Repository

    Pineda, Jesús

    2008-11-13

    Research of complex systems and problems, entities with many dependencies, is often reductionist. The reductionist approach splits systems or problems into different components, and then addresses these components one by one. This approach has been used in the study of recruitment and population dynamics of marine benthic (bottom-dwelling) species. Another approach examines benthic population dynamics by looking at a small set of processes. This approach is statistical or model-oriented. Simplified approaches identify "macroecological" patterns or attempt to identify and model the essential, "first-order" elements of the system. The complexity of the recruitment and population dynamics problems stems from the number of processes that can potentially influence benthic populations, including (1) larval pool dynamics, (2) larval transport, (3) settlement, and (4) post-settlement biotic and abiotic processes, and larval production. Moreover, these processes are non-linear, some interact, and they may operate on disparate scales. This contribution discusses reductionist and simplified approaches to study benthic recruitment and population dynamics of bottom-dwelling marine invertebrates. We first address complexity in two processes known to influence recruitment, larval transport, and post-settlement survival to reproduction, and discuss the difficulty in understanding recruitment by looking at relevant processes individually and in isolation. We then address the simplified approach, which reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often results in mistaken inferences, and processes and parameters used in some models can bias our view of processes influencing recruitment. We conclude with a discussion on how to reconcile complex and simplified approaches. Although it

  15. Impact of pipes networks simplification on water hammer phenomenon

    Indian Academy of Sciences (India)

    Simplification of water supply networks is an indispensable design step to make the original network easier to analyse. The impact of network simplification on the water hammer phenomenon is investigated. This study uses a two-loop network with different diameters, thicknesses, and roughness coefficients. The network is ...

  16. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz

    2015-08-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  17. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields.

    Science.gov (United States)

    Skraba, Primoz; Bei Wang; Guoning Chen; Rosen, Paul

    2015-08-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  18. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz; Wang, Bei; Chen, Guoning; Rosen, Paul

    2015-01-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  19. Viewpoint-Driven Simplification of Plant and Tree Foliage

    Directory of Open Access Journals (Sweden)

    Cristina Gasch

    2018-03-01

    Plants and trees are an essential part of outdoor scenes. They are represented by such a vast number of polygons that real-time visualization is still a problem in spite of advances in hardware. Some methods based on point- or image-based rendering have appeared to address this drawback. However, geometry representation is required in some interactive applications. This work presents a simplification method that deals with the geometry of the foliage, reducing the number of primitives that represent these objects and making their interactive visualization possible. It is based on an image-based simplification that establishes an order of leaf pruning and reduces the complexity of the canopies of trees and plants. The proposed simplification method is viewpoint-driven and uses mutual information to choose the leaf to prune. Moreover, this simplification method avoids the pruned appearance of the tree that is usually produced when a foliage representation is formed by a reduced number of leaves. The error introduced every time a leaf is pruned is compensated for by altering the size of the nearest leaf to preserve the leafy appearance of the foliage. Results demonstrate the good quality and time performance of the presented work.
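
    A toy greedy version of the prune-and-compensate loop, with a made-up impact score (leaf area times isolation) standing in for the viewpoint-driven mutual information used in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # leaves as rows of (x, y, z, area); random stand-in foliage
    leaves = np.column_stack([rng.uniform(0, 1, (100, 3)),
                              rng.uniform(0.5, 1.5, (100, 1))])

    def prune_foliage(leaves, target):
        leaves = leaves.copy()
        while len(leaves) > target:
            pos = leaves[:, :3]
            d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
            np.fill_diagonal(d, np.inf)
            nearest = d.argmin(axis=1)
            score = leaves[:, 3] * d[np.arange(len(leaves)), nearest]
            i = int(score.argmin())                 # prune lowest-impact leaf...
            leaves[nearest[i], 3] += leaves[i, 3]   # ...and grow its neighbour
            leaves = np.delete(leaves, i, axis=0)
        return leaves
    ```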

  20. 77 FR 66361 - Reserve Requirements of Depository Institutions: Reserves Simplification

    Science.gov (United States)

    2012-11-05

    ... Requirements of Depository Institutions: Reserves Simplification AGENCY: Board of Governors of the Federal... (Reserve Requirements of Depository Institutions) published in the Federal Register on April 12, 2012. The... simplifications related to the administration of reserve requirements: 1. Create a common two-week maintenance...

  1. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density within an urban scene increase the difficulty of building city models from oblique images, but many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on Self-Adaptive Patch, designed for the urban scene. The basic idea of match propagation based on Self-Adaptive Patch is to build patches centred on seed points which are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the extent of the patch becomes bigger; when the surface is very rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by Graph Cuts is a 2-manifold surface satisfying the half-edge data structure. This is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh by an edge-collapse algorithm, which can preserve and highlight the features of buildings.

  2. Decomposition and Simplification of Multivariate Data using Pareto Sets.

    Science.gov (United States)

    Huettenberger, Lars; Heine, Christian; Garth, Christoph

    2014-12-01

    Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.

  3. SIMPLIFICATION IN CHILD LANGUAGE IN BAHASA INDONESIA: A CASE STUDY ON FILIP

    Directory of Open Access Journals (Sweden)

    Julia Eka Rini

    2000-01-01

    This article aims at giving examples of characteristics of simplification in Bahasa Indonesia and proving that child language has a pattern and that there is a process in learning. Since this is a case study, it might not be enough to say that simplification is universal for all children of any mother tongue, but at least there is proof that such patterns of simplification also occur in Bahasa Indonesia.

  4. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact,...

  5. 7 CFR 3015.311 - Simplification, consolidation, or substitution of State plans.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Simplification, consolidation, or substitution of... (Continued) OFFICE OF THE CHIEF FINANCIAL OFFICER, DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE... Simplification, consolidation, or substitution of State plans. (a) As used in this section: (1) Simplify means...

  6. WORK SIMPLIFICATION FOR PRODUCTIVITY IMPROVEMENT A ...

    African Journals Online (AJOL)

    Mechanical Engineering Department. Addis Ababa University ... press concerning the work simplification techniques state ... encompassing as it does improved labor-management cooperation ... achievement of business aims or a contribution to attaining ... recommended work methods is done after a thorough study and ...

  7. The limits of simplification in translated isiZulu health texts | Ndlovu ...

    African Journals Online (AJOL)

    Simplification, defined as the practice of simplifying the language used in translation, is regarded as one of the universal features of translation. This article investigates the limitations of simplification encountered in efforts to make translated isiZulu health texts more accessible to the target readership. The focus is on public ...

  8. A New Approach in the Simplification of a Multiple-Beam Forming Network Based on CORPS Using Compressive Arrays

    Directory of Open Access Journals (Sweden)

    Armando Arce

    2012-01-01

    This research paper deals with an innovative way to simplify the design of beam-forming networks (BFNs) for multibeam steerable antenna arrays based on coherently radiating periodic structures (CORPS) technology, using the noniterative matrix pencil method (MPM). This design approach is based on the application of the MPM to linear arrays fed by CORPS-BFN configurations to further reduce the complexity of the beam-forming network. Two 2-beam design configurations of CORPS-BFN for a steerable linear array are analyzed and compared using this compressive method. Simulation results show the effectiveness and advantages of applying the MPM to BFNs based on CORPS, exploiting the nonuniformity of the antenna elements. Furthermore, final results show that the integration of CORPS-BFN and MPM reduces the entire antenna system, including the antenna array and the beam-forming network subsystem, resulting in a substantial simplification of such systems.
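
    A generic matrix pencil sketch under the usual formulation: samples y[n] are approximated by a sum of M complex exponentials, sum_i a_i * z_i**n, so a pattern sampled from many array elements can be reproduced by fewer equivalent sources. How the recovered poles map back onto a reduced CORPS-BFN is the paper's contribution and is not reproduced here:

    ```python
    import numpy as np

    def matrix_pencil(y, M, L=None):
        """Estimate poles z and amplitudes a with y[n] ~ sum_i a_i * z_i**n."""
        y = np.asarray(y, dtype=complex)
        N = len(y)
        L = L or N // 2                          # pencil parameter
        Y = np.array([y[i:i + L + 1] for i in range(N - L)])
        Y1, Y2 = Y[:, :-1], Y[:, 1:]
        # poles: eigenvalues of the shifted Hankel pencil
        z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
        z = z[np.argsort(-np.abs(z))][:M]        # keep M dominant poles
        V = np.vander(z, N, increasing=True).T   # Vandermonde system
        a = np.linalg.lstsq(V, y, rcond=None)[0]
        return a, z
    ```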

  9. Developing an ASD Macroeconomic Model of the Stock Approach - With Emphasis on Bank Lending and Interest Rates -

    OpenAIRE

    Yamaguchi, Yokei

    2017-01-01

    The financial crisis in 2008 exposed an over-simplification of the role of banks in the majority of macroeconomic models. Based on the Accounting System Dynamics (ASD) modeling approach, the current research presents a model that incorporates banks as creators of deposits when making loans, as opposed to the conventional view of banks as intermediaries of existing money. The generic model thus developed consists of five sectors: production, household, banking, government and central bank to...

  10. Computer control system synthesis for nuclear power plants through simplification and partitioning of the complex system model into a set of simple subsystems

    International Nuclear Information System (INIS)

    Zobor, E.

    1978-12-01

    The approach chosen is based on hierarchical control systems theory; however, the fundamentals of other approaches, such as system simplification and system partitioning, are briefly summarized to introduce the problems associated with the control of large-scale systems. The concept of a hierarchical control system acting in a broad variety of operating conditions is developed, and some practical extensions to the hierarchical control system approach are given, e.g. subsystems measured and controlled at different rates, control of the partial state vector, and coordination for autoregressive models. Throughout the work the WWR-SM research reactor of the Institute has been taken as a guiding example, and simple methods for the identification of the model parameters from a reactor start-up are discussed. Using the PROHYS digital simulation program, elaborated in the course of the present research, detailed simulation studies were carried out to investigate the performance of a control system based on the concept and algorithms developed. Finally, to give evidence of a real application, a short description is given of the closed-loop computer control system installed - in the framework of a project supported by the Hungarian State Office for Technical Development - at the WWR-SM research reactor, where the results obtained in the present IAEA Research Contract were successfully applied and furnished the expected high performance.

  11. Computing Strongly Homotopic Line Simplification in the Plane

    DEFF Research Database (Denmark)

    Daneshpajou, Shervin; Abam, Mohammad; Deleuran, Lasse Kosetski

    We study a variant of the line-simplification problem where we are given a polygonal path P = p1, p2, ..., pn and a set S of m point obstacles in the plane, and the goal is to find the optimal homotopic simplification: a minimum subsequence Q = q1, q2, ..., qk (q1 = p1 and qk = pn) of P defining a polygonal path which approximates P within the given error ε and is homotopic to P. We assume all shortcuts pipj whose errors under a distance function F are at most ε can be computed in TF(n) time, where TF(n) is polynomial for all widely-used distance functions. We define the new

  12. Fast simulation approaches for power fluctuation model of wind farm based on frequency domain

    DEFF Research Database (Denmark)

    Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang

    2012-01-01

    This paper discusses a model developed by Risø DTU, which is capable of simulating the power fluctuation of large wind farms in the frequency domain. In the original design, the "frequency-time" transformations are time-consuming and might limit the computation speed for a wind farm of large size. Under this background, this paper proposes four efficient approaches to accelerate the simulation speed. Two of them are based on physical model simplifications, and the other two improve the numerical computation. The case study demonstrates the efficiency of these approaches. The acceleration ratio is more than 300 times if all these approaches are adopted, in any low, medium and high wind speed test scenarios.
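
    One standard "frequency-time" transformation of the kind the paper accelerates, written as the slow direct double sum; swapping it for an IFFT is exactly the sort of numerical speed-up targeted. The PSD below is a made-up low-pass shape, not the Risø fluctuation model itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def synthesize(freqs, psd, t):
        """Random-phase spectral synthesis:
        x(t) = sum_k sqrt(2 S_k df) cos(2 pi f_k t + phi_k)."""
        df = freqs[1] - freqs[0]
        amps = np.sqrt(2.0 * psd * df)
        phis = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
        arg = 2.0 * np.pi * freqs[:, None] * t[None, :] + phis[:, None]
        return (amps[:, None] * np.cos(arg)).sum(axis=0)

    freqs = np.linspace(1e-4, 0.1, 1000)           # Hz
    psd = 1.0 / (1.0 + (freqs / 0.01) ** 2)        # illustrative PSD shape
    t = np.arange(0.0, 3600.0, 1.0)                # one hour at 1 s steps
    power_fluctuation = synthesize(freqs, psd, t)
    ```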

  13. Geological heterogeneity: Goal-oriented simplification of structure and characterization needs

    Science.gov (United States)

    Savoy, Heather; Kalbacher, Thomas; Dietrich, Peter; Rubin, Yoram

    2017-11-01

    Geological heterogeneity, i.e. the spatial variability of discrete hydrogeological units, is investigated in an aquifer analog of glacio-fluvial sediments to determine how such a geological structure can be simplified for characterization needs. The aquifer analog consists of ten hydrofacies, whereas the scarcity of measurements in typical field studies precludes such detailed spatial models of hydraulic properties. Of particular interest is the role of connectivity of the hydrofacies structure, along with its effect on the connectivity of mass transport, in site characterization for predicting early arrival times. Transport through three realizations of the aquifer analog is modeled with numerical particle tracking to ascertain the fast flow channel through which early arriving particles travel. Three simplification schemes of two-facies models are considered to represent the aquifer analogs, and the velocity within the fast flow channel is used to estimate the apparent hydraulic conductivity of the new facies. The facies models in which the discontinuous patches of high hydraulic conductivity are separated from the rest of the domain yield the closest match in early arrival times compared to the aquifer analog, but assuming a continuous high-hydraulic-conductivity channel connecting these patches yields underestimated early arrival times within the range of variability between the realizations, which implies that all three simplification schemes could be advisable but carry different implications for field measurement campaigns. Overall, the results suggest that the outcome of transport connectivity, i.e. early arrival times, within realistic geological heterogeneity can be conserved even when the underlying structural connectivity is modified.

  14. Semi-analytical approach to modelling the dynamic behaviour of soil excited by embedded foundations

    DEFF Research Database (Denmark)

    Bucinskas, Paulius; Andersen, Lars Vabbersgaard

    2017-01-01

    The underlying soil has a significant effect on the dynamic behaviour of structures. The paper proposes a semi-analytical approach based on a Green's function solution in the frequency-wavenumber domain. The procedure allows calculating the dynamic stiffness for points on the soil surface as well ... are analysed. It is determined how simplification of the numerical model affects the overall dynamic behaviour. © 2017 The Authors. Published by Elsevier Ltd.

  15. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    , because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed, perhaps...

  16. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications; Storregional grundvattenmodellering - en känslighetsstudie av några utvalda konceptuella beskrivningar och förenklingar

    Energy Technology Data Exchange (ETDEWEB)

    Ericsson, Lars O. (Lars O. Ericsson Consulting AB (Sweden)); Holmen, Johan (Golder Associates (Sweden))

    2010-12-15

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geo-scientifically available data compilations from the Småland region, but different conceptual assumptions have been analysed.

  17. The Study of Simplification and Explicitation Techniques in Khaled Hosseini's “A Thousand Splendid Suns”

    OpenAIRE

    Reza Kafipour

    2016-01-01

    Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation strategies are those which help transferring the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in Persian translation of novel of Khaled Hosseini's “A Thousand Splendid Suns”. The study also attempted to find out frequencies of simplification and explicitation techniques used by the translator...

  18. THE STUDY OF SIMPLIFICATION AND EXPLICITATION TECHNIQUES IN KHALED HOSSEINI'S “A THOUSAND SPLENDID SUNS”

    Directory of Open Access Journals (Sweden)

    Reza Kafipour

    2016-12-01

    Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation strategies are those which help transfer the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in the Persian translation of Khaled Hosseini's novel “A Thousand Splendid Suns”. The study also attempted to find out the frequencies of the simplification and explicitation techniques used by the translators in translating the novel. To do so, 359 sentences out of 6000 sentences in the original text were selected by a systematic random sampling procedure. Then the percentage and total sums of each one of the strategies were calculated. The result showed that both translators used simplification and explicitation techniques significantly in their translation, whereas Saadvandian, the first translator, applied significantly more simplification techniques in comparison with Ghabrai, the second translator. However, no significant difference was found between the translators in the application of explicitation techniques. The study implies that these two translation strategies were fully familiar to the translators, as both used them significantly to make the translation more understandable to the readers.

  20. Simplification of a dust emission scheme and comparison with data

    Science.gov (United States)

    Shao, Yaping

    2004-05-01

    A simplification of a dust emission scheme is proposed which takes into account saltation bombardment and aggregate disintegration. The scheme states that dust emission is proportional to the streamwise saltation flux, with a proportionality that depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*^4 (u* is friction velocity), but not necessarily so in general. The dust emission predictions using the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and of the soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
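
    A minimal numerical sketch of the proportionality described in this abstract, assuming an Owen-type streamwise saltation flux Q and an emission coefficient that weakens with soil plastic pressure p; the functional forms and constants below are illustrative, not the values fitted in the paper.

      import numpy as np

      def dust_emission(u_star, p, c=1e-4, u_t=0.25):
          """Toy scheme: dust flux F = alpha(p) * Q(u*).

          Q is an Owen-type streamwise saltation flux (~ u*^3 above the
          threshold u_t); the paper's full scheme, with its extra
          bombardment dependence, recovers roughly F ~ u*^4 for loose
          soils. All constants here are illustrative only.
          """
          rho_a, g = 1.2, 9.81
          q = (rho_a / g) * u_star**3 * np.maximum(0.0, 1.0 - (u_t / u_star)**2)
          alpha = c / p      # small p (loose soil) -> more efficient emission
          return alpha * q

      print(dust_emission(u_star=0.5, p=1.0e3))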

  1. Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes

    Institute of Scientific and Technical Information of China (English)

    Muhammad Hussain

    2009-01-01

    Two simplification algorithms are proposed for the automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. For setting the priority value of each vertex, exploiting the normal field of its one-ring neighborhood, we introduce a new measure of geometric fidelity that reflects well the local geometric features of the vertex. After a vertex is selected, using other measures of geometric distortion based on normal field deviation and a distance measure, it is decided which of the edges incident on the vertex is to be collapsed in order to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs of better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.
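
    The quadric error measure mentioned above is standard enough to sketch. Below is a hedged, minimal Garland-Heckbert-style computation of the optimal position for the vertex that replaces a collapsed edge; it is not the authors' normal-field priority measure, only the quadric minimization step they reuse.

      import numpy as np

      def plane_quadric(a, b, c, d):
          """Quadric K = p p^T for the plane ax + by + cz + d = 0 (unit normal)."""
          p = np.array([a, b, c, d], dtype=float)
          return np.outer(p, p)

      def optimal_collapse_position(Q):
          """Position minimising v^T Q v for homogeneous v = (x, y, z, 1)."""
          A = Q.copy()
          A[3] = [0.0, 0.0, 0.0, 1.0]      # replace last row, as in Garland-Heckbert
          try:
              v = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
          except np.linalg.LinAlgError:    # singular: fall back to e.g. the midpoint
              return None
          return v[:3]

      # The quadric of a vertex is the sum of the quadrics of its incident
      # triangle planes; collapsing an edge (u, v) uses Q_u + Q_v.
      Q = plane_quadric(0, 0, 1, 0) + plane_quadric(0, 1, 0, 0) + plane_quadric(1, 0, 0, -1)
      print(optimal_collapse_position(Q))  # -> [1. 0. 0.]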

  2. Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?

    Science.gov (United States)

    Sprintsin, M.; Chen, J. M.

    2009-05-01

    The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have brought into question the assumptions behind this approach showing that big leaf approximation is inadequate for simulating canopy photosynthesis because of the additional leaf internal control on carbon assimilation and because of the non-linear response of photosynthesis on leaf nitrogen and absorbed light, and changes in leaf microenvironment with canopy depth. To avoid this problem a sunlit/shaded leaf separation approach, within which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy, for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); Lost Creek shrubland site (Wisconsin) and Mer Bleue petland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to canopy level with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf level parameters scaled up to canopy level by means of Leaf Area Index. The influence of sunlit

  3. Quantum copying and simplification of the quantum Fourier transform

    Science.gov (United States)

    Niu, Chi-Sheng

    Theoretical studies of quantum computation and quantum information theory are presented in this thesis. Three topics are considered: simplification of the quantum Fourier transform in Shor's algorithm, optimal eavesdropping in the BB84 quantum cryptographic protocol, and quantum copying of one qubit. The quantum Fourier transform preceding the final measurement in Shor's algorithm is simplified by replacing a network of quantum gates with one that has fewer and simpler gates controlled by classical signals. This simplification results from an analysis of the network using the consistent history approach to quantum mechanics. The optimal amount of information which an eavesdropper can gain, for a given level of noise in the communication channel, is worked out for the BB84 quantum cryptographic protocol. The optimal eavesdropping strategy is expressed in terms of various quantum networks. A consistent history analysis of these networks using two conjugate quantum bases shows how the information gain in one basis influences the noise level in the conjugate basis. The no-cloning property of quantum systems, which is the physics behind quantum cryptography, is studied by considering copying machines that generate two imperfect copies of one qubit. The best qualities these copies can have are worked out with the help of the Bloch sphere representation for one qubit, and a quantum network is worked out for an optimal copying machine. If the copying machine does not have additional ancillary qubits, the copying process can be viewed using a 2-dimensional subspace in a product space of two qubits. A special representation of such a two-dimensional subspace makes possible a complete characterization of this type of copying. This characterization in turn leads to simplified eavesdropping strategies in the BB84 and the B92 quantum cryptographic protocols.

  4. THE EFFECT OF TAX SIMPLIFICATION ON TAXPAYERS’ COMPLIANCE BEHAVIOR: RELIGIOSITY AS MODERATING VARIABLE

    Directory of Open Access Journals (Sweden)

    Muslichah Muslichah

    2017-03-01

    Full Text Available Tax compliance is an important issue for nations around the world as governments search for revenue to meet public needs. The importance of tax simplification has long been known as a determinant of compliance behavior, and it has become an important issue in taxation research. The primary objective of this study was to investigate the effect of tax simplification and religiosity on compliance behavior. The study was conducted in Malang, East Java. Survey questionnaires were sent to 200 taxpayers, of whom only 122 responded. Consistent with prior research, this study suggests that the effect of religiosity on compliance behavior is positive and significant. Religiosity plays a moderating role in the relationship between tax simplification and compliance behavior. The study contributes to the compliance literature. It also has practical significance, because the empirical results provide information about compliance behavior that can help government develop strategies for increasing voluntary compliance.
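
    Moderation of this kind is commonly tested with an interaction term in a regression. A minimal sketch with statsmodels follows; the variable names and survey scores are hypothetical, not the study's data.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical survey data: one row per taxpayer, Likert-scale composites.
      df = pd.DataFrame({
          "compliance":     [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 3.6, 4.2],
          "simplification": [3.0, 4.2, 2.5, 4.8, 3.6, 2.2, 3.4, 4.0],
          "religiosity":    [2.9, 4.0, 3.1, 4.6, 3.3, 2.7, 3.8, 4.4],
      })

      # Moderated regression: a significant interaction term indicates that
      # religiosity moderates the simplification -> compliance relationship.
      model = smf.ols("compliance ~ simplification * religiosity", data=df).fit()
      print(model.params)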

  5. The cost of policy simplification in conservation incentive programs

    DEFF Research Database (Denmark)

    Armsworth, Paul R.; Acs, Szvetlana; Dallimer, Martin

    2012-01-01

    Incentive payments to private landowners provide a common strategy to conserve biodiversity and enhance the supply of goods and services from ecosystems. To deliver cost-effective improvements in biodiversity, payment schemes must trade off inefficiencies that result from over-simplified policies with the administrative burden of implementing more complex incentive designs. We examine the effectiveness of different payment schemes using field-parameterized, ecological economic models of extensive grazing farms. We focus on profit-maximising farm management plans and use bird species as a policy-relevant indicator of biodiversity. Common policy simplifications result in a 49–100% loss in biodiversity benefits depending on the conservation target chosen. Failure to differentiate prices for conservation improvements in space is particularly problematic. Additional implementation costs that accompany more complicated policies…

  6. Cutting red tape: national strategies for administrative simplification

    National Research Council Canada - National Science Library

    Cerri, Fabienne; Hepburn, Glen; Barazzoni, Fiorenza

    2006-01-01

    ... when the topic was new, and had a strong focus on the tools used to simplify administrative regulations. Expectations are greater today, and ad hoc simplification initiatives have in many cases been replaced by comprehensive government programmes to reduce red tape. Some instruments, such as one-stop shops, which were new then, have become widely adop...

  7. Simplification of the helical TEN2 laser

    Science.gov (United States)

    Krahn, K.-H.

    1980-04-01

    It is examined whether the helical TEN2 laser can be simplified effectively by dispensing with decoupling elements and by abolishing the segmentation of the electrode structure. Although, as a consequence of this simplification, the operating pressure range was slightly decreased, the output power could be improved by roughly 30%, a result attributed to the new electrode geometry, which exhibits lower inductance and lower damping losses.

  8. Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation.

    Science.gov (United States)

    Langhans, Simone D; Lienert, Judit

    2016-01-01

    River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assess the current ecological state and prioritize river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments for river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitudes, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and on the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific, depending on data and resource…
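
    A minimal sketch contrasting the additive aggregation that simplified MCDA applications assume with the multiplicative aggregation the experts recommended, using an exponential (non-linear) value function; the weights, curvature and attribute levels are illustrative.

      import numpy as np

      def value_exponential(x, c=2.0):
          """Non-linear single-attribute value function on [0, 1]."""
          return (1.0 - np.exp(-c * x)) / (1.0 - np.exp(-c))

      def aggregate_additive(v, w):
          return float(np.dot(w, v))

      def aggregate_multiplicative(v, w):
          # Weighted geometric mean: one poor objective pulls the overall
          # value down strongly, unlike additive averaging.
          return float(np.prod(np.power(v, w)))

      attrs = np.array([0.9, 0.4, 0.7])   # normalised attribute levels
      w = np.array([0.5, 0.3, 0.2])       # weights summing to one
      v = value_exponential(attrs)        # concave, risk-averse-shaped values
      print(aggregate_additive(v, w), aggregate_multiplicative(v, w))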

  9. Investigation on the optimal simplified model of BIW structure using FEM

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Shojaeefard

    Full Text Available Abstract At the conceptual phases of designing a vehicle, engineers need simplified models to examine the structural and functional characteristics and to apply custom modifications for achieving the best vehicle design. Using a detailed finite-element (FE) model of the vehicle at early steps can be very conducive; however, it has the drawbacks of being excessively time-consuming and expensive. This leads engineers to use trade-off simplified models of the body-in-white (BIW), composed of only the most decisive structural elements, which do not require extensive prior knowledge of the vehicle dimensions and constitutive materials. However, the extent and type of simplification remain ambiguous. In fact, during the simplification procedure one faces the quandary of which kind of approach, and which body elements, should be considered for simplification in order to optimize cost and time while providing acceptable accuracy. Although different approaches for optimizing the timeframe and achieving optimal designs of the BIW have been proposed in the literature, a comparison between different simplification methods, and accordingly the introduction of the best models, which is the main focus of this research, has not yet been made. In this paper, an industrial sedan vehicle is simplified through four different simplified FE models, each of which examines the validity of the extent of simplification from a different point of view. Bending and torsional stiffness are obtained for all models, considering boundary conditions similar to the experimental tests. The acquired values are then compared with the target values from experimental tests to validate the FE modelling. Finally, the results are examined and, taking efficacy and accuracy into account, the best trade-off simplified model is presented.

  10. Sutural simplification in Physodoceratinae (Aspidoceratidae, Ammonitina

    Directory of Open Access Journals (Sweden)

    Checa, A.

    1987-08-01

    Full Text Available The structural analysis of the shell-septum interrelationship in some Jurassic ammonites allows us to conclude that the sutural simplifications that occurred throughout the phylogeny were originated by alterations in the external morphology of the shell. In the case of Physodoceratinae, the simplification observed in the morphology of the septal suture may have a double origin. First, an increase in the size of the periumbilical tubercles may determine a shallowing of the sutural elements and a shortening of saddle and lobe frilling. In other cases, the shallowing is determined by a decrease in the whorl expansion rate, with no apparent shortening of the secondary branching observed.

  11. Simplifications of the mini nutritional assessment short-form are predictive of mortality among hospitalized young and middle-aged adults.

    Science.gov (United States)

    Asiimwe, Stephen B

    2016-01-01

    Measuring malnutrition in hospitalized patients is difficult in all settings. I evaluated associations of items in the mini nutritional assessment short-form (MNA-sf), a nutritional-risk screening tool previously validated in the elderly, with malnutrition among hospitalized patients in Uganda. I used the results to construct two simplifications of this tool that may be applicable to young and middle-aged adults. I assessed the association of each MNA-sf item with the mid-upper arm circumference (MUAC), a specific measure of malnutrition, at appropriate cut-offs. I incorporated only malnutrition-specific items into the proposed simplifications, scoring each item according to its association with malnutrition. I assessed the numbers classified to different score levels by the simplifications and, via proportional hazards regression, how the simplifications predicted in-hospital mortality. I analyzed 318 patients (median age 37, interquartile range 27 to 56). Variables making it into the simplifications were: reduced food intake, weight loss, mobility, and either BMI in kg/m(2) (categorized) […], adjusting for age, sex, and HIV status. The MNA-sf simplifications described may provide an improved measure of malnutrition in hospitalized young and middle-aged adults. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Simplification of Home Cooking and Its Periphery

    OpenAIRE

    小住, フミ子; 北崎, 康子; Fumiko, OZUMI; Yasuko, KITAZAKI

    1997-01-01

    The sense of home cooking has been changing with the times. Various topics that make us conscious of health and dietary habits, such as delicatessen foods, half-ready-made foods, eating out, and the use of home delivery services and food imports, are involved in the simplification of cooking. We asked 64 students to fill in a questionnaire in three parts. The recovery rate was 96.4%. The results are as follows: the main reason for purchasing delicatessen or half-ready-made foods was that "they…

  13. 77 FR 21846 - Reserve Requirements of Depository Institutions: Reserves Simplification

    Science.gov (United States)

    2012-04-12

    ... Requirements of Depository Institutions: Reserves Simplification AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final rule. SUMMARY: The Board is amending Regulation D, Reserve Requirements of Depository Institutions, to simplify the administration of reserve requirements. The final rule creates a...

  14. THE ELITISM OF LEGAL LANGUAGE AND THE NEED OF SIMPLIFICATION

    Directory of Open Access Journals (Sweden)

    Antonio Escandiel de Souza

    2016-12-01

    Full Text Available This article presents the results of the research project entitled “Simplification of legal language: a study on the view of the academic community of the University of Cruz Alta”. It is a qualitative study on simplifying legal language as a means of democratizing and pluralizing access to justice, in the view of students and teachers of the Law Course. Society has great difficulty understanding legal terms, which hinders access to justice. At the same time, the legal field has not moved far from its traditional formalities, which indicates the existence of a parallel in which, on one side, stands society, with its problems of understanding, and on the other the law, with its inherent and intrinsic procedures. However, society's access to the judiciary should not be hampered on account of formalities arising from the law and its flowery language. Preliminary results indicate that simplification of legal language is essential to real democratization of access to law and justice.

  15. Utilizing 'hot words' in ParaConc to verify lexical simplification ...

    African Journals Online (AJOL)

    Lexical simplification strategies investigated are: using a superordinate or more general word, using a general word with extended meaning and using more familiar or common synonyms. The analysis gives the reader an idea about how some general words are used to translate technical language. It also displays that 'hot ...

  16. A New Algorithm for Cartographic Simplification of Streams and Lakes Using Deviation Angles and Error Bands

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-10-01

    Full Text Available Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer’s radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).
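
    The new algorithm itself (deviation angles plus error bands) is not reproduced here, but the Douglas-Peucker benchmark it is compared against is standard; a minimal sketch:

      import numpy as np

      def douglas_peucker(points, tol):
          """Classic recursive line simplification, used above as a benchmark."""
          pts = np.asarray(points, dtype=float)
          if len(pts) < 3:
              return pts
          start, end = pts[0], pts[-1]
          seg = end - start
          seg_len = float(np.hypot(seg[0], seg[1])) or 1.0
          # Perpendicular distance of every vertex to the chord start-end.
          diff = pts - start
          d = np.abs(seg[0] * diff[:, 1] - seg[1] * diff[:, 0]) / seg_len
          i = int(np.argmax(d))
          if d[i] > tol:   # keep the most deviating vertex, recurse on both halves
              left = douglas_peucker(pts[: i + 1], tol)
              right = douglas_peucker(pts[i:], tol)
              return np.vstack([left[:-1], right])
          return np.array([start, end])

      line = [(0, 0), (1, 0.2), (2, -0.1), (3, 0.4), (4, 0)]
      print(douglas_peucker(line, tol=0.25))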

  17. Treatment simplification in HIV-infected adults as a strategy to prevent toxicity, improve adherence, quality of life and decrease healthcare costs

    Directory of Open Access Journals (Sweden)

    Vitória M

    2011-07-01

    Full Text Available Since the advent of highly active antiretroviral therapy (HAART), the treatment of human immunodeficiency virus (HIV) infection has become more potent and better tolerated. While the current treatment regimens still have limitations, they are more effective, more convenient, and less toxic than regimens used in the early HAART era, and new agents, formulations and strategies continue to be developed. Simplification of therapy is an option for many patients currently being treated with antiretroviral therapy (ART). The main goals are to reduce pill burden, improve quality of life and enhance medication adherence, while minimizing short- and long-term toxicities, reducing the risk of virologic failure and maximizing cost-effectiveness. ART simplification strategies that are currently used or are under study include the use of once-daily regimens, less toxic drugs, fixed-dose coformulations and induction-maintenance approaches. Improved adherence and persistence have been observed with the adoption of some of these strategies. The role of regimen simplification has implications not only for individual patients, but also for health care policy. With increased interest in ART regimen simplification, it is critical to…

  18. Application of a power plant simplification methodology: The example of the condensate feedwater system

    International Nuclear Information System (INIS)

    Seong, P.H.; Manno, V.P.; Golay, M.W.

    1988-01-01

    A novel framework for the systematic simplification of power plant design is described, with a focus on its application to the optimization of condensate feedwater system (CFWS) design. The evolution of the design complexity of CFWS is reviewed with emphasis upon the underlying optimization process. A new evaluation methodology is described which includes explicit accounting of human as well as mechanical effects upon system availability. The unifying figure of merit for an operating system is taken to be net electricity production cost. The evaluation methodology is applied to the comparative analysis of three designs. In the illustrative examples, the results show how including explicit availability-related costs in the evaluation leads to optimal configurations. These differ from those of current system design practice in that thermodynamic efficiency and capital cost optimization are not overemphasized. Rather, a more complete set of design-dependent variables is taken into account, and other important variables that remain neglected in current practice are identified. A critique of the new optimization approach and a discussion of future work areas, including improved human performance modeling and different optimization constraints, are provided. (orig.)
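
    A toy version of such a unifying figure of merit, assuming availability folds together equipment- and operator-induced outage rates; all costs are annualized and the numbers are illustrative, not those of the paper's case study.

      def production_cost(capital_cost, annual_om, fuel_cost, rated_mw,
                          availability, hours=8760.0):
          """Toy net electricity production cost in $/MWh.

          availability is assumed to fold in both equipment and
          operator-induced outages, so a simpler but slightly less
          efficient design can still win if it is easier to run.
          """
          annual_cost = capital_cost + annual_om + fuel_cost
          energy_mwh = rated_mw * availability * hours
          return annual_cost / energy_mwh

      complex_design = production_cost(9.0e7, 2.5e7, 4.0e7, 1000, availability=0.80)
      simple_design = production_cost(8.5e7, 2.0e7, 4.3e7, 1000, availability=0.88)
      print(complex_design, simple_design)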

  19. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

    Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This model requires many triangles to accurately describe and demonstrate facial expression animation, because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of simplified model characteristics. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system, and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.

  20. Hybrid approach for the assessment of PSA models by means of binary decision diagrams

    International Nuclear Information System (INIS)

    Ibanez-Llano, Cristina; Rauzy, Antoine; Melendez, Enrique; Nieto, Francisco

    2010-01-01

    Binary decision diagrams are a well-known alternative to the minimal cutsets approach for assessing Boolean reliability models. They have been applied successfully to improve the assessment of fault tree models. However, their application to large models, and in particular to the event trees coming from the PSA studies of the nuclear industry, remains to date out of reach of an exact evaluation. For many real PSA models it may not be possible to compute the BDD within a reasonable amount of time and memory without considering truncation or simplification of the model. This paper presents a new approach to estimating the exact probabilistic quantification results (probability/frequency) based on combining the calculation of the MCS and the truncation limits with the BDD approach, in order to have better control over the reduction of the model and to properly account for the success branches. The added value of this methodology is that it is possible to ensure a real confidence interval around the exact value, and therefore explicit knowledge of the error bound. Moreover, it can be used to measure the acceptability of the results obtained with traditional techniques. The new method was applied to a real-life PSA study, and the results obtained confirm the applicability of the methodology and open a new viewpoint for further developments.
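
    A hedged sketch of why the BDD side of such a hybrid is exact: the probability of the top event follows from a recursive Shannon decomposition over the diagram, with no rare-event approximation. The nested-tuple encoding and the example fault tree are illustrative, not the authors' implementation.

      from functools import lru_cache

      # A (reduced, ordered) BDD node is ("x", low, high); leaves are 0 or 1.
      # Example top event: (A and B) or C, with variable order A < B < C.
      C = ("C", 0, 1)
      B = ("B", C, 1)
      bdd = ("A", C, B)

      p = {"A": 0.01, "B": 0.02, "C": 0.005}

      @lru_cache(maxsize=None)
      def prob(node):
          """Exact P(top) by Shannon decomposition over the diagram."""
          if node in (0, 1):
              return float(node)
          var, low, high = node
          return (1.0 - p[var]) * prob(low) + p[var] * prob(high)

      print(prob(bdd))   # exact: P(C) + P(A)P(B)(1 - P(C)) = 0.005199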

  3. Adaptive simplification and the evolution of gecko locomotion: Morphological and biomechanical consequences of losing adhesion

    Science.gov (United States)

    Higham, Timothy E.; Birn-Jeffery, Aleksandra V.; Collins, Clint E.; Hulsey, C. Darrin; Russell, Anthony P.

    2015-01-01

    Innovations permit the diversification of lineages, but they may also impose functional constraints on behaviors such as locomotion. Thus, it is not surprising that secondary simplification of novel locomotory traits has occurred several times among vertebrates and could potentially lead to exceptional divergence when constraints are relaxed. For example, the gecko adhesive system is a remarkable innovation that permits locomotion on surfaces unavailable to other animals, but has been lost or simplified in species that have reverted to a terrestrial lifestyle. We examined the functional and morphological consequences of this adaptive simplification in the Pachydactylus radiation of geckos, which exhibits multiple unambiguous losses or bouts of simplification of the adhesive system. We found that the rates of morphological and 3D locomotor kinematic evolution are elevated in those species that have simplified or lost adhesive capabilities. This finding suggests that the constraints associated with adhesion have been circumvented, permitting these species to either run faster or burrow. The association between a terrestrial lifestyle and the loss/reduction of adhesion suggests a direct link between morphology, biomechanics, and ecology. PMID:25548182

  4. Ecosystem simplification, biodiversity loss and plant virus emergence.

    Science.gov (United States)

    Roossinck, Marilyn J; García-Arenal, Fernando

    2015-02-01

    Plant viruses can emerge into crops from wild plant hosts, or conversely from domestic (crop) plants into wild hosts. Changes in ecosystems, including loss of biodiversity and increases in managed croplands, can impact the emergence of plant virus disease. Although data are limited, in general the loss of biodiversity is thought to contribute to disease emergence. More in-depth studies have been done for human viruses, but studies with plant viruses suggest similar patterns, and indicate that simplification of ecosystems through increased human management may increase the emergence of viral diseases in crops. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
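
    A toy (1+1)-style version of the loop described above, with a stub standing in for the trained convolutional neural network: mutations that blank one more pixel are accepted when the stand-in classifier's output stays within a tolerance. The stub, tolerance and step count are all assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def classifier_score(img):
          """Stand-in for the trained CNN's nodule probability (stub)."""
          return img[8:24, 8:24].mean()       # pretend only the centre matters

      def simplify(img, steps=2000, tol=0.02):
          """(1+1)-style evolutionary simplification of one image."""
          base = classifier_score(img)
          mask = np.ones_like(img, dtype=bool)  # True = pixel kept
          for _ in range(steps):
              cand = mask.copy()
              i = rng.integers(0, img.shape[0])
              j = rng.integers(0, img.shape[1])
              cand[i, j] = False                # try simplifying one more pixel
              trial = np.where(cand, img, img.mean())
              if abs(classifier_score(trial) - base) <= tol and cand.sum() < mask.sum():
                  mask = cand                   # accept: simpler, decision stable
          return np.where(mask, img, img.mean()), mask

      img = rng.random((32, 32))
      simplified, mask = simplify(img)
      print(f"{(~mask).mean():.0%} of pixels simplified")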

  6. Performance of fuzzy approach in Malaysia short-term electricity load forecasting

    Science.gov (United States)

    Mansor, Rosnalini; Zulkifli, Malina; Yusof, Muhammad Mat; Ismail, Mohd Isfahani; Ismail, Suzilah; Yin, Yip Chee

    2014-12-01

    Many activities, such as those in the economic, education and manufacturing sectors, would be paralysed by a limited supply of electricity, while a surplus contributes to high operating costs. Electricity load forecasting is therefore important in order to avoid shortage or excess. Previous findings showed that festive celebrations have an effect on short-term electricity load forecasting. Being a multicultural country, Malaysia has many major festive celebrations, such as Eidul Fitri, Chinese New Year and Deepavali, but these are moving holidays due to their non-fixed dates on the Gregorian calendar. This study emphasises the performance of a fuzzy approach in forecasting electricity load when considering the presence of moving holidays. An Autoregressive Distributed Lag model was estimated using simulated data, incorporating a model simplification concept (manual or automatic), day types (weekday or weekend), public holidays and lags of electricity load. The results indicated that day types, public holidays and several lags of electricity load were significant in the model. Overall, model simplification improves fuzzy performance due to fewer variables and rules.
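
    A minimal sketch of an ARDL-style regression with day-type and holiday dummies; the fuzzy layer and the manual/automatic simplification step are not reproduced, and the data and variable names are hypothetical.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical daily series: load (MW) plus calendar indicators.
      df = pd.DataFrame({
          "load":    [980, 1010, 995, 940, 930, 700, 690,
                      960, 1000, 990, 870, 935, 710, 695],
          "weekend": [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1],
          "holiday": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
      })
      df["load_lag1"] = df["load"].shift(1)
      df["load_lag7"] = df["load"].shift(7)
      df = df.dropna()

      # ARDL-style regression; a fuzzy front end could then be built on
      # the significant terms only, which is where simplification helps.
      model = smf.ols("load ~ load_lag1 + load_lag7 + weekend + holiday",
                      data=df).fit()
      print(model.params)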

  7. New helical-shape magnetic pole design for Magnetic Lead Screw enabling structure simplification

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Xia, Yongming; Wu, Weimin

    2015-01-01

    Magnetic lead screw (MLS) is a new type of high performance linear actuator that is attractive for many potential applications. The main difficulty of the MLS technology lies in the manufacturing of its complicated helical-shape magnetic poles. Structure simplification is, therefore, quite...

  8. The minimum attention plant inherent safety through LWR simplification

    International Nuclear Information System (INIS)

    Turk, R.S.; Matzie, R.A.

    1987-01-01

    The Minimum Attention Plant (MAP) is a unique small LWR that achieves greater inherent safety, improved operability, and reduced costs through design simplification. The MAP is a self-pressurized, indirect-cycle light water reactor with full natural circulation primary coolant flow and multiple once-through steam generators located within the reactor vessel. A fundamental tenet of the MAP design is its complete reliance on existing LWR technology. This reliance on conventional technology provides an extensive experience base which gives confidence in judging the safety and performance aspects of the design.

  9. 76 FR 64250 - Reserve Requirements of Depository Institutions: Reserves Simplification and Private Sector...

    Science.gov (United States)

    2011-10-18

    ... Simplification and Private Sector Adjustment Factor AGENCY: Board of Governors of the Federal Reserve System... comment on several issues related to the methodology used for the Private Sector Adjustment Factor that is... Analyst (202) 452- 3674, Division of Monetary Affairs, or, for questions regarding the Private Sector...

  10. Steady state HNG combustion modeling

    Energy Technology Data Exchange (ETDEWEB)

    Louwers, J.; Gadiot, G.M.H.J.L. [TNO Prins Maurits Lab., Rijswijk (Netherlands); Brewster, M.Q. [Univ. of Illinois, Urbana, IL (United States); Son, S.F. [Los Alamos National Lab., NM (United States); Parr, T.; Hanson-Parr, D. [Naval Air Warfare Center, China Lake, CA (United States)

    1998-04-01

    Two simplified modeling approaches are used to model the combustion of Hydrazinium Nitroformate (HNF, N₂H₅-C(NO₂)₃). The condensed phase is treated by high activation energy asymptotics. The gas phase is treated by two limit cases: the classical high activation energy approach, and the recently introduced low activation energy approach. This results in a simplification of the gas phase energy equation, making an (approximate) analytical solution possible. The results of both models are compared with experimental results of HNF combustion. It is shown that the low activation energy approach yields better agreement with experimental observations (e.g. regression rate and temperature sensitivity) than the high activation energy approach.

  11. Perceptual Recovery from Consonant-Cluster Simplification in Korean Using Language-Specific Phonological Knowledge

    NARCIS (Netherlands)

    Cho, T.; McQueen, J.M.

    2011-01-01

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for

  12. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about the […]. The influence of the temperature on the mechanical resonance frequency is considered and thereby integrated into the final model for long-term operations.
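
    The classical electrical-network description of a piezoelectric resonator is the Butterworth-Van Dyke circuit (static capacitance in parallel with a motional RLC branch); a hedged sketch with illustrative component values, not those identified for the USR60.

      import numpy as np

      def bvd_impedance(f, c0=8e-9, r1=120.0, l1=0.12, c1=2.2e-10):
          """Butterworth-Van Dyke-style equivalent circuit impedance.

          Static capacitance C0 in parallel with a motional R1-L1-C1
          branch; component values are illustrative placeholders.
          """
          w = 2 * np.pi * f
          z_motional = r1 + 1j * w * l1 + 1.0 / (1j * w * c1)
          z_static = 1.0 / (1j * w * c0)
          return (z_motional * z_static) / (z_motional + z_static)

      for fi in np.linspace(20e3, 60e3, 5):   # sweep through the resonance
          print(f"{fi / 1e3:6.1f} kHz  |Z| = {abs(bvd_impedance(fi)):9.1f} ohm")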

  13. Population models for Greater Snow Geese: a comparison of different approaches to assess potential impacts of harvest

    Directory of Open Access Journals (Sweden)

    Gauthier, G.

    2004-06-01

    Full Text Available Demographic models, which are a natural extension of capture-recapture (CR methodology, are a powerful tool to guide decisions when managing wildlife populations. We compare three different modelling approaches to evaluate the effect of increased harvest on the population growth of Greater Snow Geese (Chen caerulescens atlantica. Our first approach is a traditional matrix model where survival was reduced to simulate increased harvest. We included environmental stochasticity in the matrix projection model by simulating good, average, and bad years to account for the large inter-annual variation in fecundity and first-year survival, a common feature of birds nesting in the Arctic. Our second approach is based on the elasticity (or relative sensitivity of population growth rate (lambda to changes in survival as simple functions of generation time. Generation time was obtained from the mean transition matrix based on the observed proportion of good, average and bad years between 1985 and 1998. If we assume that hunting mortality is additive to natural mortality, then a simple formula predicts changes in lambda as a function of changes in harvest rate. This second approach can be viewed as a simplification of the matrix model because it uses formal sensitivity results derived from population projection. Our third, and potentially more powerful approach, uses the Kalman Filter to combine information on demographic parameters, i.e. the population mechanisms summarized in a transition matrix model, and the census information (i.e. annual survey within an overall Gaussian likelihood. The advantage of this approach is that it minimizes process and measured uncertainties associated with both the census and demographic parameters based on the variance of each estimate. This third approach, in contrast to the second, can be viewed as an extension of the matrix model, by combining its results with the independent census information.

  14. Analysis of Simplifications Applied in Vibration Damping Modelling for a Passive Car Shock Absorber

    Directory of Open Access Journals (Sweden)

    Łukasz Konieczny

    2016-01-01

    Full Text Available The paper presents the results of research on hydraulic automotive shock absorbers. The considerations provided in the paper point out certain flaws and simplifications resulting from the fact that, in simulation studies, damping characteristics are assumed to be a function of input velocity only. An important aspect taken into account when determining the damping parameters of car shock absorbers at a testing station is the permissible range of characteristics for a shock absorber of the same type. The aim of this study was to determine damping characteristics that also entail the stroke value. The stroke and rotary velocities were selected in a manner enabling the same maximum linear velocity to be obtained for different combinations. Thus, the influence of excitation parameters, such as the stroke value, on force-versus-displacement and force-versus-velocity diagrams was determined. 3D characteristics, presented as a damping surface as a function of stroke and linear velocity, were determined. An analysis of the results addressed in the paper highlights the impact of such factors on the shape of closed-loop graphs of damping forces and on point-type damping characteristics.
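
    A toy illustration of why the stroke matters even at equal peak linear velocity: once the damper force is not a pure function of velocity (a small position-dependent gas-spring term is assumed here), different stroke/frequency pairs with the same v_max trace different force loops. The damper model and constants are illustrative.

      import numpy as np

      def damper_force(x, v, c=1500.0, k_gas=8000.0):
          """Toy asymmetric damper: velocity term plus a gas-spring term."""
          c_eff = c * (1.3 if v > 0 else 0.7)   # rebound stiffer than compression
          return c_eff * v + k_gas * x

      def loop(stroke, v_max, n=400):
          """Sinusoidal excitation chosen so the peak velocity equals v_max."""
          amp = stroke / 2.0
          omega = v_max / amp                   # since v_max = amp * omega
          t = np.linspace(0.0, 2 * np.pi / omega, n)
          x = amp * np.sin(omega * t)
          v = amp * omega * np.cos(omega * t)
          f = np.array([damper_force(xi, vi) for xi, vi in zip(x, v)])
          return x, f

      # Same peak velocity (0.3 m/s) from two different stroke/frequency pairs:
      for s in (0.05, 0.10):
          x, f = loop(stroke=s, v_max=0.3)
          print(s, round(f.max(), 1), round(f.min(), 1))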

  15. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  16. Baseline natural killer and T cell populations correlation with virologic outcome after regimen simplification to atazanavir/ritonavir alone (ACTG 5201).

    Directory of Open Access Journals (Sweden)

    John E McKinnon

    Full Text Available Simplified maintenance therapy with ritonavir-boosted atazanavir (ATV/r) provides an alternative treatment option for HIV-1 infection that spares nucleoside analogs (NRTI) for future use and decreases toxicity. We hypothesized that the level of immune activation (IA) and the recovery of lymphocyte populations could influence virologic outcomes after regimen simplification. Thirty-four participants with virologic suppression for ≥ 48 weeks on antiretroviral therapy (2 NRTI plus protease inhibitor) were switched to ATV/r alone in the context of the ACTG 5201 clinical trial. Flow cytometric analyses were performed on PBMC isolated from 25 patients with available samples, of which 24 had lymphocyte recovery sufficient for this study. Assessments included enumeration of T-cells (CD4/CD8), natural killer (NK) cells (CD3+CD56+CD16+) and cell-associated markers (HLA-DR, CDs 38/69/94/95/158/279). Eight of the 24 patients had at least one plasma HIV-1 RNA level (VL) >50 copies/mL during the study. NK cell levels below the group median of 7.1% at study entry were associated with development of VL >50 copies/mL following simplification by regression and survival analyses (p = 0.043 and 0.023), with an odds ratio of 10.3 (95% CI: 1.92-55.3). Simplification was associated with transient increases in naïve and CD25+ CD4+ T-cells, and had no impact on IA levels. Lower NK cell levels prior to regimen simplification were predictive of virologic rebound after discontinuation of nucleoside analogs. Regimen simplification did not have a sustained impact on markers of IA or on T lymphocyte populations over 48 weeks of clinical monitoring. ClinicalTrials.gov NCT00084019.

  17. Is Dysfunctional Use of the Mobile Phone a Behavioural Addiction? Confronting Symptom-Based Versus Process-Based Approaches.

    Science.gov (United States)

    Billieux, Joël; Philippot, Pierre; Schmid, Cécile; Maurage, Pierre; De Mol, Jan; Van der Linden, Martial

    2015-01-01

    Dysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance. The addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Modelling chemical behavior of water reactor fuel

    Energy Technology Data Exchange (ETDEWEB)

    Ball, R G.J.; Hanshaw, J; Mason, P K; Mignanelli, M A [AEA Technology, Harwell (United Kingdom)

    1997-08-01

    For many applications, large computer codes have been developed which use correlations, simplifications and approximations in order to describe the complex situations that may occur during the operation of nuclear power plant or during fault scenarios. However, it is important to have a firm physical basis for the simplifications and approximations in such codes, and there has therefore been an emphasis on modelling the behaviour of materials and processes on a more detailed or fundamental basis. The application of fundamental modelling techniques to simulate various chemical phenomena in thermal reactor fuel systems is described in this paper. These methods include thermochemical modelling, kinetic and mass transfer modelling, and atomistic simulation, and examples of each approach are presented. For each of these applications the methods are summarised together with the assessment process adopted to provide the fundamental parameters which form the basis of the calculation. (author). 25 refs, 9 figs, 2 tabs.

  19. Pathways of DNA unlinking: A story of stepwise simplification.

    Science.gov (United States)

    Stolz, Robert; Yoshida, Masaaki; Brasher, Reuben; Flanner, Michelle; Ishihara, Kai; Sherratt, David J; Shimokawa, Koya; Vazquez, Mariel

    2017-09-29

    In Escherichia coli, DNA replication yields interlinked chromosomes. Controlling the topological changes associated with replication and returning the newly replicated chromosomes to an unlinked monomeric state is essential to cell survival. In the absence of the topoisomerase topoIV, the site-specific recombination complex XerCD-dif-FtsK can remove replication links by local reconnection. We previously showed mathematically that there is a unique minimal pathway of unlinking replication links by reconnection while stepwise reducing the topological complexity. However, the possibility that reconnection preserves or increases topological complexity is biologically plausible. In this case, are there other unlinking pathways? Which is the most probable? We consider these questions in an analytical and numerical study of minimal unlinking pathways. We use a Markov Chain Monte Carlo algorithm with Multiple Markov Chain sampling to model local reconnection on 491 different substrate topologies, 166 knots and 325 links, and distinguish between pathways connecting a total of 881 different topologies. We conclude that the minimal pathway of unlinking replication links that was found under more stringent assumptions is the most probable. We also present exact results on unlinking a 6-crossing replication link. These results point to a general process of topology simplification by local reconnection, with applications going beyond DNA.

  20. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures which, nevertheless, may be accompanied by suboptimal values of the design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model, which minimizes the total annual cost of the network, is constructed from a stage-wise grid diagram representation. To improve the possibilities of obtaining globally optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized to solve the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that, starting from the addressed base network topologies, it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies not initially anticipated.
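
    A minimal sketch of the two-phase multi-start idea with scipy: random starting points (phase one) feed a local NLP solver (phase two) and the best local optimum is kept. The objective below is a stand-in multimodal function, not the paper's total-annual-cost model.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(42)

      def total_annual_cost(z):
          """Stand-in multimodal objective for a fixed network topology."""
          x, y = z
          return (x - 1.5)**2 * (x + 1.0)**2 + (y - 0.5)**2 + 0.3 * np.sin(5 * x)

      bounds = [(-2.0, 3.0), (-2.0, 3.0)]

      def multistart(n_starts=30):
          best = None
          for _ in range(n_starts):            # phase 1: stochastic sampling
              z0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
              res = minimize(total_annual_cost, z0,   # phase 2: local NLP solve
                             method="L-BFGS-B", bounds=bounds)
              if best is None or res.fun < best.fun:
                  best = res
          return best

      best = multistart()
      print(best.x, best.fun)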

  1. A Gaussian graphical model approach to climate networks

    International Nuclear Information System (INIS)

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-01-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
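
    A minimal numerical sketch of the distinction the abstract draws: on a synthetic chain X → Y → Z, Pearson correlation links X and Z, while the partial correlation obtained from the precision (inverse covariance) matrix removes the indirect dependency. The data are synthetic, not climate fields.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic chain X -> Y -> Z: X and Z are related only through Y.
      n = 5000
      x = rng.normal(size=n)
      y = 0.8 * x + rng.normal(scale=0.6, size=n)
      z = 0.8 * y + rng.normal(scale=0.6, size=n)
      data = np.column_stack([x, y, z])

      corr = np.corrcoef(data, rowvar=False)   # Pearson: direct + indirect links
      prec = np.linalg.inv(np.cov(data, rowvar=False))
      d = np.sqrt(np.diag(prec))
      pcorr = -prec / np.outer(d, d)           # partial correlations
      np.fill_diagonal(pcorr, 1.0)

      print(np.round(corr, 2))    # X-Z entry clearly nonzero
      print(np.round(pcorr, 2))   # X-Z entry near zero: indirect link removed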

  2. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
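
    The practical difference between the two edge definitions is easy to demonstrate: for a multivariate Gaussian, partial correlations follow directly from the inverse covariance (precision) matrix and vanish for pairs that are only indirectly coupled. A small synthetic sketch (for a real grid with more nodes than time steps, the precision matrix would need a regularized estimator such as the graphical lasso):

```python
import numpy as np

# Synthetic "climate" series: node 0 drives node 1, node 1 drives node 2,
# so the 0-2 dependence is purely indirect.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]

pearson = np.corrcoef(X, rowvar=False)

theta = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(theta))
partial = -theta / np.outer(d, d)                # partial correlations
np.fill_diagonal(partial, 1.0)

# Pearson reports a sizable 0-2 edge; the partial correlation is ~0, so a
# GGM network keeps only the direct links 0-1 and 1-2.
print(round(pearson[0, 2], 2), round(partial[0, 2], 2))
```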

  3. A reduction approach to improve the quantification of linked fault trees through binary decision diagrams

    International Nuclear Information System (INIS)

    Ibanez-Llano, Cristina; Rauzy, Antoine; Melendez, Enrique; Nieto, Francisco

    2010-01-01

    Over the last two decades binary decision diagrams have been applied successfully to improve Boolean reliability models. In contrast to the classical approach based on the computation of the minimal cutsets (MCS), the BDD approach involves no approximation in the quantification of the model and is able to handle negative logic correctly. However, when models are sufficiently large and complex, as for example the ones coming from the PSA studies of the nuclear industry, it becomes infeasible to compute the BDD within a reasonable amount of time and computer memory. Therefore, simplification or reduction of the full model has to be considered in some way to adapt the application of the BDD technology to the assessment of such models in practice. This paper proposes a reduction process that uses the information provided by the set of the most relevant minimal cutsets of the model in order to perform the reduction directly on it. This allows controlling the degree of reduction and therefore the impact of such simplification on the final quantification results. This reduction is integrated in an incremental procedure that is compatible with the dynamic generation of the event trees and therefore adaptable to the recent dynamic developments and extensions of the PSA studies. The proposed method has been applied to a real case study, and the results obtained confirm that the reduction enables the BDD computation while maintaining accuracy.

  4. A reduction approach to improve the quantification of linked fault trees through binary decision diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez-Llano, Cristina, E-mail: cristina.ibanez@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain); Rauzy, Antoine, E-mail: Antoine.RAUZY@3ds.co [Dassault Systemes, 10 rue Marcel Dassault CS 40501, 78946 Velizy Villacoublay, Cedex (France); Melendez, Enrique, E-mail: ema@csn.e [Consejo de Seguridad Nuclear (CSN), C/Justo Dorado 11, 28040 Madrid (Spain); Nieto, Francisco, E-mail: nieto@iit.upcomillas.e [Instituto de Investigacion Tecnologica (IIT), Escuela Tecnica Superior de Ingenieria ICAI, Universidad Pontificia Comillas, C/Santa Cruz de Marcenado 26, 28015 Madrid (Spain)

    2010-12-15

    Over the last two decades binary decision diagrams have been applied successfully to improve Boolean reliability models. In contrast to the classical approach based on the computation of the minimal cutsets (MCS), the BDD approach involves no approximation in the quantification of the model and is able to handle negative logic correctly. However, when models are sufficiently large and complex, as for example the ones coming from the PSA studies of the nuclear industry, it becomes infeasible to compute the BDD within a reasonable amount of time and computer memory. Therefore, simplification or reduction of the full model has to be considered in some way to adapt the application of the BDD technology to the assessment of such models in practice. This paper proposes a reduction process that uses the information provided by the set of the most relevant minimal cutsets of the model in order to perform the reduction directly on it. This allows controlling the degree of reduction and therefore the impact of such simplification on the final quantification results. This reduction is integrated in an incremental procedure that is compatible with the dynamic generation of the event trees and therefore adaptable to the recent dynamic developments and extensions of the PSA studies. The proposed method has been applied to a real case study, and the results obtained confirm that the reduction enables the BDD computation while maintaining accuracy.
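
    The reduction step itself can be pictured in a few lines: rank the minimal cutsets by probability, keep those above a relevance cutoff, and rebuild the model on the surviving basic events before constructing the BDD. A toy sketch (event names, probabilities and the cutoff are invented; a real implementation would also track the truncated probability mass):

```python
# Toy minimal cutsets of a fault tree, mapped to their point probabilities.
mcs = {
    ("PUMP_A", "PUMP_B"): 1e-6,
    ("VALVE_1",): 5e-5,
    ("DG_1", "DG_2", "BUS_F"): 2e-9,
    ("CCF_PUMPS",): 3e-7,
}

cutoff = 1e-7  # relevance threshold controlling the degree of reduction
kept = {cs: p for cs, p in mcs.items() if p >= cutoff}

# Basic events surviving the reduction; the BDD is built on this sub-model.
events = sorted({e for cs in kept for e in cs})
discarded_mass = sum(mcs.values()) - sum(kept.values())
print(events, f"discarded probability ~ {discarded_mass:.1e}")
```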

  5. Emergency planning simplification: Why ALWR designs shall support this goal

    International Nuclear Information System (INIS)

    Tripputi, I.

    2004-01-01

    Emergency Plan simplification can be achieved only if it can be proved, in a context of balanced national health protection policies, that there is reduced or no technical need for some elements of it and that public protection is assured in all considered situations regardless of protective actions outside the plant. These objectives may be technically supported if one or more of the following conditions are complied with: 1. Accidents potentially releasing large amounts of fission products can be ruled out by characteristics of the designs. 2. Plant engineered features (and the containment system in particular) are able to drastically mitigate the radioactive releases under all conceivable scenarios. 3. A realistic approach to the consequence evaluation can reduce the expected consequences to effects below any concern. Unfortunately, no single approach is either technically feasible or justified in a perspective of defense in depth, and only a mix of them may provide the necessary conditions. It appears that most or all proposed ALWR designs address the technical issues whose solutions are the bases to eliminate the need for a number of protective actions (evacuation, relocation, sheltering, iodine tablets administration, etc.) even in the case of a severe accident. Some designs are mainly oriented to prevent the need for short term protective actions; they credit simplified Emergency Plans or the capabilities of existing civil protection organizations for public relocation in the long term, if needed. Others also take into account the overall releases to exclude or minimize public relocation and land contamination. Design targets for population individual doses and for land contamination proposed in Italy are discussed in the paper. It is also shown that these limits, while challenging, appear to be within the reach of the next-generation designs currently studied in Italy. (author)

  6. Iwamoto-Harada coalescence/pickup model for cluster emission: state density approach including angular momentum variables

    Directory of Open Access Journals (Sweden)

    Běták Emil

    2014-04-01

    For low-energy nuclear reactions well above the resonance region, but still below the pion threshold, statistical pre-equilibrium models (e.g., the exciton and the hybrid ones) are a frequent tool for analysis of energy spectra and the cross sections of cluster emission. For α’s, two essentially distinct approaches are popular, namely the preformed one and the different versions of coalescence approaches, whereas only the latter group of models can be used for other types of cluster ejectiles. The original Iwamoto-Harada model of pre-equilibrium cluster emission was formulated using the overlap of the cluster and its constituent nucleons in momentum space. Transforming it into level or state densities is not a straightforward task; however, physically the same model was presented at a conference on reaction models five years earlier. At that time, only the densities without spin were used. The introduction of spin variables into the exciton model enabled detailed calculation of the γ emission and its competition with nucleon channels, and – at the same time – it stimulated further developments of the model. However – to the best of our knowledge – no spin formulation had been presented for cluster emission until recently, when the first attempts were reported, but restricted to the first emission only. We have now updated this effort and are able to handle (using the same simplifications as in our previous work) pre-equilibrium cluster emission with spin, including all nuclei in the reaction chain.

  7. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    Science.gov (United States)

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  8. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    Science.gov (United States)

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to get a suitable description of the pKa change with the mobile phase composition. In the present study this preliminary experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on their functional group. This new approach forced further simplifications regarding the retention of the totally neutral and totally ionized species. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources.
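
    The two-species picture behind the model is standard chromatography: the observed retention factor is an ionization-weighted mix of the neutral and fully ionized limits, with the pKa shifted by the organic modifier content. A hedged sketch (the group-dependent coefficients and every numeric value below are illustrative, not the paper's fitted parameters):

```python
def pKa_in_mobile_phase(pKa_water, phi_mecn, slope=3.0):
    # Assumed linear shift of pKa with acetonitrile volume fraction phi;
    # in the paper the coefficients depend on the functional group.
    return pKa_water + slope * phi_mecn

def retention_factor(pH, pKa, k_neutral, k_ionized):
    # Weighted average of the totally neutral and totally ionized species.
    f_ion = 1.0 / (1.0 + 10.0 ** (pKa - pH))   # ionized fraction (acid)
    return (1.0 - f_ion) * k_neutral + f_ion * k_ionized

pKa = pKa_in_mobile_phase(pKa_water=4.2, phi_mecn=0.30)
print(retention_factor(pH=4.0, pKa=pKa, k_neutral=8.0, k_ionized=0.5))
```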

  9. A new model for quantum games based on the Marinatto–Weber approach

    International Nuclear Information System (INIS)

    Frąckiewicz, Piotr

    2013-01-01

    The Marinatto–Weber approach to quantum games is a straightforward way to apply the power of quantum mechanics to classical game theory. In the simplest case, the quantum scheme is that players manipulate their own qubits of a two-qubit state either with the identity 1 or the Pauli operator σ_x. However, such a simplification of the scheme raises doubt as to whether it could really reflect a quantum game. In this paper we put forward examples which may constitute arguments against the present form of the Marinatto–Weber scheme. Next, we modify the scheme to eliminate the undesirable properties of the protocol by extending the players’ strategy sets. (paper)
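
    In the basic scheme, the expected payoff is a classical average over the four basis outcomes of the final state, with each player mixing the identity and σ_x. A numpy sketch of that computation (the initial amplitudes and the payoff table are arbitrary illustrations):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])        # Pauli sigma_x

# Initial entangled state a|00> + b|11>, as in the Marinatto-Weber scheme.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = a * np.kron([1.0, 0.0], [1.0, 0.0]) + b * np.kron([0.0, 1.0], [0.0, 1.0])

def outcome_probs(p, q):
    """Probabilities of |00>,|01>,|10>,|11> when player 1 applies the
    identity with probability p and sigma_x otherwise (player 2: q)."""
    rho = np.zeros((4, 4))
    for U1, w1 in ((I2, p), (sx, 1.0 - p)):
        for U2, w2 in ((I2, q), (sx, 1.0 - q)):
            phi = np.kron(U1, U2) @ psi
            rho += w1 * w2 * np.outer(phi, phi)
    return np.diag(rho)

payoff_alice = np.array([3.0, 0.0, 5.0, 1.0])  # hypothetical payoff table
print(outcome_probs(0.7, 0.4) @ payoff_alice)  # expected payoff
```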

  10. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model yielded a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to enter human insight into the problem, examine evolved models and pick the best performing programs out for further analysis.
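
    Two of the three ingredients are simple enough to sketch directly: the moving-average pre-filter applied to the streamflow series, and the selection of non-dominated (complexity, error) pairs from the evolved population. An illustrative Python sketch (window size and candidate values are invented):

```python
import numpy as np

def moving_average(q, window=3):
    # Simple MA filter used as pre-processing to damp the lagged-prediction
    # effect of stand-alone data-driven streamflow models.
    return np.convolve(q, np.ones(window) / window, mode="valid")

def pareto_front(models):
    # models: list of (complexity, error); keep the non-dominated pairs.
    return sorted((c, e) for c, e in models
                  if not any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                             for c2, e2 in models))

q = np.array([1.0, 1.4, 3.2, 2.8, 2.1, 1.7, 1.5])
print(moving_average(q))

candidates = [(3, 0.42), (5, 0.35), (7, 0.34), (9, 0.36), (12, 0.33)]
print(pareto_front(candidates))  # trade-off set to pick a parsimonious model
```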

  11. Reference Models for Multi-Layer Tissue Structures

    Science.gov (United States)

    2016-09-01

    function of multi-layer tissues (etiology and management of pressure ulcers). What was the impact on other disciplines? As part of the project, a data … simplification to develop cost-effective models of surface manipulation of multi-layer tissues. Deliverables: specimen- (or subject-) and region-specific … simplification to develop cost-effective models of surgical manipulation. Deliverables: specimen-specific surrogate models of upper legs confirmed against data

  12. Simplification of arboreal marsupial assemblages in response to increasing urbanization.

    Science.gov (United States)

    Isaac, Bronwyn; White, John; Ierodiaconou, Daniel; Cooke, Raylene

    2014-01-01

    Arboreal marsupials play an essential role in ecosystem function including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.

  13. A matricial approach for the Dirac-Kahler formalism

    International Nuclear Information System (INIS)

    Goto, M.

    1987-01-01

    A matricial approach for the Dirac-Kahler formalism is considered. It is shown that the matricial approach i) brings a great computational simplification compared to the common use of differential forms and that ii) by an appropriate choice of notation, it can be extended to the lattice, including a matrix Dirac-Kahler equation. (author)

  14. One-dimensional models for mountain-river morphology

    NARCIS (Netherlands)

    Sieben, A.

    1996-01-01

    In this report, some classical and new simplifications in mathematical and numerical models for river morphology are compared for conditions representing rivers in mountainous areas (high values of Froude numbers and relatively large values of sediment transport rates). Options for simplification

  15. Bridging analytical approaches for low-carbon transitions

    Science.gov (United States)

    Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.

    2016-06-01

    Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.

  16. Assessing the Impact of Canopy Structure Simplification in Common Multilayer Models on Irradiance Absorption Estimates of Measured and Virtually Created Fagus sylvatica (L.) Stands

    Directory of Open Access Journals (Sweden)

    Pol Coppin

    2009-11-01

    Multilayer canopy representations are the most common structural stand representations due to their simplicity. Implementation of recent advances in technology has allowed scientists to simulate geometrically explicit forest canopies. The effect of simplified representations of tree architecture (i.e., multilayer representations) of four Fagus sylvatica (L.) stands, each with different LAI, on the light absorption estimates was assessed in comparison with explicit 3D geometrical stands. The absorbed photosynthetic radiation at stand level was calculated. Subsequently, each geometrically explicit 3D stand was compared with three multilayer models representing horizontal, uniform, and planophile leaf angle distributions. The 3D stands were created either from in situ measured trees or from modelled trees generated with the AMAP plant growth software. The Physically Based Ray Tracer (PBRT) algorithm was used to simulate the irradiance absorbance of the detailed 3D architecture stands, while for the three multilayer representations, the probability of light interception was simulated by applying the Beer-Lambert law. The irradiance inside the canopies was characterized as direct, diffuse and scattered irradiance. The irradiance absorbance of the stands was computed for eight angular sun configurations ranging from 10° (near nadir) up to 80° sun zenith angles. Furthermore, a leaf stratification analysis (the number and angular distribution of leaves per LAI layer inside a canopy) between the 3D stands and the multilayer representations was performed, indicating the amount of irradiance each leaf is absorbing along with the percentage of sunny and shadow leaves inside the canopy. The results reveal that a multilayer representation of a stand greatly overestimated the absorbed irradiance in an open canopy, while it provided a better approximation in the case of a closed canopy. Moreover, the actual stratification
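
    For the multilayer representations, the per-layer interception follows directly from the Beer-Lambert law mentioned above. A minimal sketch of that calculation (the projection coefficient G and the layer LAI values are illustrative; G ≈ 0.5 corresponds to a spherical leaf angle distribution):

```python
import numpy as np

def layer_absorption(lai_layers, sun_zenith_deg, G=0.5):
    """Fraction of top-of-canopy irradiance absorbed by each LAI layer,
    via Beer-Lambert extinction along the solar beam."""
    mu = np.cos(np.radians(sun_zenith_deg))
    k = G / mu                        # extinction coefficient
    I_top, absorbed = 1.0, []
    for lai in lai_layers:
        I_bottom = I_top * np.exp(-k * lai)
        absorbed.append(I_top - I_bottom)
        I_top = I_bottom
    return absorbed

# Three layers of LAI 1.0 each, sun near nadir (30 deg) and near grazing (80 deg):
print(layer_absorption([1.0, 1.0, 1.0], 30))
print(layer_absorption([1.0, 1.0, 1.0], 80))
```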

  17. Subthalamic stimulation: toward a simplification of the electrophysiological procedure.

    Science.gov (United States)

    Fetter, Damien; Derrey, Stephane; Lefaucheur, Romain; Borden, Alaina; Wallon, David; Chastan, Nathalie; Maltete, David

    2016-06-01

    The aim of the present study was to assess the consequences of a simplification of the electrophysiological procedure on the post-operative clinical outcome after subthalamic nucleus implantation in Parkinson disease. Microelectrode recordings were performed on 5 parallel trajectories in group 1 and on fewer than 5 trajectories in group 2. Clinical evaluations were performed 1 month before and 6 months after surgery. After surgery, the UPDRS III score in the off-drug/on-stimulation and on-drug/on-stimulation conditions significantly improved by 66.9% and 82%, respectively, in group 1, and by 65.8% and 82.3% in group 2 (P<0.05). Meanwhile, the total number of words for fluency tasks significantly decreased in both groups (P<0.05). Motor disability improvement and medication reduction were similar in both groups. Our results suggest that the electrophysiological procedure should be simplified as the team's experience increases.

  18. A heuristic approach for short-term operations planning in a catering company

    DEFF Research Database (Denmark)

    Farahani, Poorya; Grunow, Martin; Günther, H.O.

    2009-01-01

    Certain types of food such as catering foods decay very rapidly. This paper investigates how the quality of such foods can be improved by shortening the time interval between production and delivery. To this end, we develop an approach which integrates short-term production and distribution planning in a novel iterative scheme. The production scheduling problem is solved through an MILP modeling approach which is based on a block planning formulation complemented by a heuristic simplification procedure. Our investigation was motivated by a catering company located in Denmark. The production configuration and the processes assumed in our numerical experiments reflect real settings from this company. First numerical results are reported which demonstrate the applicability of the proposed approach.

  19. Phonological simplifications, apraxia of speech and the interaction between phonological and phonetic processing.

    Science.gov (United States)

    Galluzzi, Claudia; Bureca, Ivana; Guariglia, Cecilia; Romani, Cristina

    2015-05-01

    Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties - such as a predominance of errors on consonants rather than vowels - but not with other measures - such as the rate of words reproduced correctly or the rate of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they are different in different patients. Instead, the different patterns: (1) provide evidence for separate impairments and the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they only apply in patients with associated articulatory difficulties.

  20. Simplification of arboreal marsupial assemblages in response to increasing urbanization.

    Directory of Open Access Journals (Sweden)

    Bronwyn Isaac

    Arboreal marsupials play an essential role in ecosystem function, including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.

  1. Using subdivision surfaces and adaptive surface simplification algorithms for modeling chemical heterogeneities in geophysical flows

    Science.gov (United States)

    Schmalzl, Jörg; Loddoch, Alexander

    2003-09-01

    We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the triangle edge lengths and the sum of the angles the triangle forms with its neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady state flow in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
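
    The refinement criterion combines triangle size with how sharply a triangle bends relative to its neighbors. A sketch of such a weight function (the exact combination below is an illustrative guess, not the paper's formula):

```python
import numpy as np

def dihedral_angle(n1, n2):
    # Angle between the unit normals of two adjacent triangles.
    return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

def refinement_weight(edge_lengths, normal, neighbor_normals):
    # Large triangles on strongly curved regions get refined first.
    bending = sum(dihedral_angle(normal, n) for n in neighbor_normals)
    return max(edge_lengths) * (1.0 + bending)

n = np.array([0.0, 0.0, 1.0])
m = np.array([0.0, 0.3, 0.95]); m /= np.linalg.norm(m)
print(refinement_weight([0.20, 0.25, 0.30], n, [m, m, m]))
```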

  2. On the role of model structure in hydrological modeling : Understanding models

    NARCIS (Netherlands)

    Gharari, S.

    2016-01-01

    Modeling is an essential part of the science of hydrology. Models enable us to formulate what we know and perceive from the real world into a neat package. Rainfall-runoff models are abstract simplifications of how a catchment works. Within the research field of scientific rainfall-runoff modeling,

  3. Adaptation of the model of tunneling in a metal/CaF{sub 2}/Si(111) system for use in industrial simulators of MIS devices

    Energy Technology Data Exchange (ETDEWEB)

    Vexler, M. I., E-mail: shulekin@mail.ioffe.ru; Illarionov, Yu. Yu.; Tyaginov, S. E. [Russian Academy of Sciences, Ioffe Physical-Technical Institute (Russian Federation); Grasser, T. [Institute for Microelectronics, TU Vienna (Austria)

    2015-02-15

    An approach toward simplification of the model of the tunneling transport of electrons through a thin layer of crystalline calcium fluoride into a silicon (111) substrate with subsequent implementation in simulators of semiconductor devices is suggested. The validity of the approach is proven by comparing the results of modeling using simplified formulas with the results of precise calculations and experimental data. The approach can be applied to calculations of tunneling currents in structures with any crystalline insulators on Si (111).

  4. Simplification of antiretroviral therapy: a necessary step in the public health response to HIV/AIDS in resource-limited settings.

    Science.gov (United States)

    Vitoria, Marco; Ford, Nathan; Doherty, Meg; Flexner, Charles

    2014-01-01

    The global scale-up of antiretroviral therapy (ART) over the past decade represents one of the great public health and human rights achievements of recent times. Moving from an individualized treatment approach to a simplified and standardized public health approach has been critical to ART scale-up, simplifying both prescribing practices and supply chain management. In terms of the latter, the risk of stock-outs can be reduced, and simplified prescribing practices support task shifting of care to nursing and other non-physician clinicians; this strategy is critical to increasing access to ART care in settings where physicians are limited in number. In order to support such simplification, successive World Health Organization guidelines for ART in resource-limited settings have aimed to reduce the number of recommended options for first-line ART in such settings. Future drug and regimen choices for resource-limited settings will likely be guided by the same principles that have led to the recommendation of a single preferred regimen and will favour drugs that have the following characteristics: minimal risk of failure, efficacy and tolerability, robustness and forgiveness, no overlapping resistance in treatment sequencing, convenience, affordability, and compatibility with anti-TB and anti-hepatitis treatments.

  5. Greatest Happiness Principle in a Complex System Approach

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    The principle of greatest happiness was the basis of ethics in Plato’s and Aristotle’s work; it served as the basis of the utility principle in economics, and happiness research has recently become a hot topic in the social sciences in Western countries, particularly in economics. Nevertheless, there is considerable scientific pessimism over whether it is even possible to effect sustainable increases in happiness. In this paper we outline an economic theory of decision based on the greatest happiness principle (GHP). Modern equilibrium economics is a simple-system simplification of the GHP, while the complex approach outlines a non-equilibrium economic theory. The comparison of the approaches reveals that some of the results – laws of modern economics – follow from the simplifications and run against economic nature. The most important consequence is that within the free market economy one cannot be sure that the path found by it leads to a beneficial economic system.

  6. CSP-based chemical kinetics mechanisms simplification strategy for non-premixed combustion: An application to hybrid rocket propulsion

    KAUST Repository

    Ciottoli, Pietro P.

    2017-08-14

    A set of simplified chemical kinetics mechanisms for hybrid rocket applications using gaseous oxygen (GOX) and hydroxyl-terminated polybutadiene (HTPB) is proposed. The starting point is a 561-species, 2538-reactions, detailed chemical kinetics mechanism for hydrocarbon combustion. This mechanism is used for predictions of the oxidation of butadiene, the primary HTPB pyrolysis product. A Computational Singular Perturbation (CSP) based simplification strategy for non-premixed combustion is proposed. The simplification algorithm is fed with the steady-state solutions of classical flamelet equations, these being representative of the non-premixed nature of the combustion processes characterizing a hybrid rocket combustion chamber. The adopted flamelet steady-state solutions are obtained employing pure butadiene and gaseous oxygen as fuel and oxidizer boundary conditions, respectively, for a range of imposed values of strain rate and background pressure. Three simplified chemical mechanisms, each comprising fewer than 20 species, are obtained for three different pressure values, 3, 17, and 36 bar, selected in accordance with an experimental test campaign of lab-scale hybrid rocket static firings. Finally, a comprehensive strategy is shown to provide simplified mechanisms capable of reproducing the main flame features in the whole pressure range considered.

  7. Towards simplification of hydrologic modeling: Identification of dominant processes

    Science.gov (United States)

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
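
    The FAST procedure itself is available in standard tooling, which makes the per-unit workflow easy to sketch: sample the parameter space, evaluate the model, and summarize first-order sensitivities. A toy illustration with the SALib package (three invented parameters stand in for the 35 PRMS ones, and the algebraic "model" stands in for a PRMS run):

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["snow_melt_rate", "soil_capacity", "baseflow_coeff"],
    "bounds": [[0.0, 1.0], [50.0, 500.0], [0.001, 0.1]],
}

X = fast_sampler.sample(problem, 1000)               # FAST sample
Y = X[:, 0] ** 2 + 0.002 * X[:, 1] + 5.0 * X[:, 2]   # stand-in model output

Si = fast.analyze(problem, Y)                        # first-order + total
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
```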

  8. A thermodynamic approach to model the caloric properties of semicrystalline polymers

    Science.gov (United States)

    Lion, Alexander; Johlitz, Michael

    2016-05-01

    It is well known that the crystallisation and melting behaviour of semicrystalline polymers depends in a pronounced manner on the temperature history. If the polymer is in the liquid state above the melting point, and the temperature is reduced to a level below the glass transition, the final degree of crystallinity, the amount of the rigid amorphous phase and the configurational state of the mobile amorphous phase strongly depend on the cooling rate. If the temperature is increased afterwards, the extents of cold crystallisation and melting are functions of the heating rate. Since crystalline and amorphous phases exhibit different densities, the specific volume depends also on the temperature history. In this article, a thermodynamically based phenomenological approach is developed which allows for the constitutive representation of these phenomena in the time domain. The degree of crystallinity and the configuration of the amorphous phase are represented by two internal state variables whose evolution equations are formulated under consideration of the second law of thermodynamics. The model for the specific Gibbs free energy takes the chemical potentials of the different phases and the mixture entropy into account. For simplification, it is assumed that the amount of the rigid amorphous phase is proportional to the degree of crystallinity. An essential outcome of the model is an equation in closed form for the equilibrium degree of crystallinity in dependence on pressure and temperature. Numerical simulations demonstrate that the process dependences of crystallisation and melting under consideration of the glass transition are represented.
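
    The abstract does not give the paper's evolution equations, but the internal-state-variable structure it describes can be illustrated with a generic relaxation law: the degree of crystallinity chases a temperature-dependent equilibrium value with a mobility that freezes near the glass transition, so the final state depends on the cooling rate. A hedged sketch (every functional form and constant below is an assumption):

```python
import numpy as np
from scipy.integrate import solve_ivp

def chi_eq(T, T_m=460.0, width=40.0, chi_max=0.6):
    # Assumed equilibrium degree of crystallinity: high well below the
    # melting point T_m, vanishing above it.
    return chi_max / (1.0 + np.exp((T - T_m) / width))

def mobility(T, T_g=360.0, scale=0.05):
    # Assumed kinetic prefactor that freezes below the glass transition T_g.
    return scale / (1.0 + np.exp(-(T - T_g) / 5.0))

def rhs(t, y, cooling_rate=1.0, T0=500.0):
    T = T0 - cooling_rate * t             # prescribed linear cooling
    return [mobility(T) * (chi_eq(T) - y[0])]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0], max_step=1.0)
print(sol.y[0, -1])   # final crystallinity; rerun with other cooling rates
```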

  9. Towards simplification of hydrologic modeling: identification of dominant processes

    Directory of Open Access Journals (Sweden)

    S. L. Markstrom

    2016-11-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.

  10. The complexities of HIPAA and administration simplification.

    Science.gov (United States)

    Mozlin, R

    2000-11-01

    The Health Insurance Portability and Accountability Act (HIPAA) was signed into law in 1996. Although focused on information technology issues, HIPAA will ultimately impact day-to-day operations at multiple levels within any clinical setting. Optometrists must begin to familiarize themselves with HIPAA in order to prepare themselves to practice in a technology-enriched environment. Title II of HIPAA, entitled "Administrative Simplification," is intended to reduce the costs and administrative burden of healthcare by standardizing the electronic transmission of administrative and financial transactions. The Department of Health and Human Services is expected to publish the final rules and regulations that will govern HIPAA's implementation this year. The rules and regulations will cover three key aspects of healthcare delivery: electronic data interchange (EDI), security and privacy. EDI will standardize the format for healthcare transactions. Health plans must accept and respond to all transactions in the EDI format. Security refers to policies and procedures that protect the accuracy and integrity of information and limit access. Privacy focuses on how the information is used and on the disclosure of identifiable health information. Security and privacy regulations apply to all information that is maintained and transmitted in a digital format and require administrative, physical, and technical safeguards. HIPAA will force the healthcare industry to adopt an e-commerce paradigm and provide opportunities to improve patient care processes. Optometrists should take advantage of the opportunity to develop more efficient and profitable practices.

  11. On the simplifications for the thermal modeling of tilting-pad journal bearings under thermoelastohydrodynamic regime

    DEFF Research Database (Denmark)

    Cerda Varela, Alejandro Javier; Fillon, Michel; Santos, Ilmar

    2012-01-01

    The relevance of calculating accurately the oil film temperature build-up when modeling tilting-pad journal bearings is well established within the literature on the subject. This work studies the feasibility of using a thermal model for the tilting-pad journal bearing which includes a simplified formulation for inclusion of the heat transfer effects between oil film and pad surface. Such a simplified approach becomes necessary when modeling the behavior of tilting-pad journal bearings operating on a controllable lubrication regime. Three different simplified heat transfer models are tested, by comparing … are strongly dependent on the Reynolds number for the oil flow in the bearing. For bearings operating in laminar regime, the decoupling of the oil film energy equation solving procedure, with no heat transfer terms included, from the pad heat conduction problem, where the oil film temperature is applied …

  12. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes an algorithm for extracting and simplifying building façade pieces, based on morphological filtering of point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm using high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering with point cloud projection that can obtain building façade features in image space. Third, this study designs an inverse transformation of the point cloud projection to convert building façade features from image space back to 3D space. A building façade feature extraction algorithm with restricted façade plane detection is implemented to reconstruct façade pieces for street view services. The results of building façade extraction experiments with large volumes of point cloud data from MLS show that the proposed approach is suitable for various types of building façade extraction. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is at the same level as the spatial resolution (0.5 m) of the point cloud.
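
    The projection step at the heart of the pipeline can be sketched as binning MLS points onto a raster whose cells count the points they receive; morphological filters then operate on that image. An illustrative sketch (the x-z projection plane and the 0.5 m cell size are assumptions matching the reported resolution):

```python
import numpy as np

def project_to_raster(points, cell=0.5):
    """Bin 3D points onto an x-z raster; each cell counts its points so
    image-space morphological filtering can be applied afterwards."""
    x, z = points[:, 0], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iz = ((z - z.min()) / cell).astype(int)
    img = np.zeros((iz.max() + 1, ix.max() + 1), dtype=np.uint32)
    np.add.at(img, (iz, ix), 1)
    return img

pts = np.random.default_rng(2).uniform(0.0, 20.0, size=(10_000, 3))
raster = project_to_raster(pts)
print(raster.shape, raster.max())
```

    Keeping the (row, column) to point-index mapping from this step is what makes the inverse transformation of extracted façade features back into 3D space possible.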

  13. Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.

    Science.gov (United States)

    Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J

    2016-08-01

    To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT), and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged from 35% to 59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussions for its measurement performance.

  14. Comparing Two Different Approaches to the Modeling of the Common Cause Failures in Fault Trees

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2002-01-01

    The potential for common cause failures in systems that perform critical functions has been recognized as a very important contributor to the risk associated with the operation of nuclear power plants. Consequently, modeling of common cause failures (CCF) in fault trees has become one of the essential elements in any probabilistic safety assessment (PSA). Detailed and realistic representation of CCF potential in the fault tree structure is sometimes a very challenging task. This is especially so in cases where a common cause group involves more than two components. During the last ten years the difficulties associated with this kind of modeling have been overcome to some degree by the development of integral PSA tools with high capabilities. Some of them allow for the definition of CCF groups and their automated expansion in the process of Boolean resolution and generation of minimal cutsets. On the other hand, in PSA models developed and run by more traditional tools, CCF potential had to be modeled in the fault trees explicitly. With explicit CCF modeling, fault trees can grow very large, especially in cases where they involve CCF groups with 3 or more members, which can become an issue for the management of fault trees and basic events with traditional non-integral PSA models. For these reasons various simplifications had to be made. Speaking in terms of an overall PSA model, there are also some other issues that need to be considered, such as maintainability and accessibility of the model. In this paper a comparison is made between the two approaches to CCF modeling. The analysis is based on a full-scope Level 1 PSA model for internal initiating events that had originally been developed with a traditional PSA tool and later transferred to a new-generation PSA tool with automated CCF modeling capabilities. Related aspects and issues mentioned above are discussed in the paper. (author)
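
    The size problem with explicit modeling is combinatorial: a CCF group of n components contributes one basic event per non-empty failure combination, i.e. 2^n - 1 events. A short sketch of that expansion (event naming is illustrative; the probability assigned to each combination would come from a parametric model such as the alpha-factor model):

```python
from itertools import combinations

def ccf_events(components):
    # One basic event per non-empty subset of the common cause group.
    return ["CCF_" + "_".join(c)
            for k in range(1, len(components) + 1)
            for c in combinations(components, k)]

group = ["PUMP_A", "PUMP_B", "PUMP_C"]
events = ccf_events(group)
print(len(events), events)   # 7 events for a 3-member group (2**3 - 1)
```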

  15. The limitations of mathematical modeling in high school physics education

    Science.gov (United States)

    Forjan, Matej

    The theme of the doctoral dissertation falls within the scope of didactics of physics. A theoretical analysis of the key constraints that occur in the transfer of mathematical modeling of dynamical systems into high school physics education is presented. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on the representation of the various stages of modeling in the solved tasks in textbooks and on the presentation of certain simplifications and idealizations which are frequently used in high school physics. We show that one of the textbooks in most cases fairly and reasonably presents the simplifications, while the other two leave half of the analyzed simplifications unexplained. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state model assumptions, from which we can conclude that high school physics does not sufficiently develop students' sense for simplifications and idealizations, which is a key part of the conceptual phase of modeling. For the introduction of modeling of dynamical systems the students' prior knowledge is also important; we therefore performed an empirical study on the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The research results show the students have a very weak understanding of the dynamics of systems in which feedbacks are present. This is independent of the year or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations which result from the students' lack of mathematical knowledge, because they do not know how to solve differential equations analytically. We show that when dealing with one-dimensional dynamical systems

  16. Methodologies for Systematic Assessment of Design Simplification. Annex II

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-12-15

    Nuclear power plants are sophisticated engineered systems. To achieve a commercial nuclear power plant, its functions, systems and components need to be elaborated from design ideas to technical solutions and to the appropriate hardware over a long period of time. On the way, several design alternatives usually compete for implementation in the final plant. Engineering teams perform assessments, comparing different proposed engineering options in order to select an appropriate solution for the specific plant aimed at specific customers. This is a common process in design evolution. During such assessments, the trade-offs associated with different options are not always as simple as seen at very early design stages. Any requirement (e.g. relevant to safety, availability or competitiveness) usually has several dimensions; therefore, a change in the design aimed at producing the targeted effect (e.g. simplification of passive safety systems) as a rule produces other effects not directly related to the original idea. It means that the assessment needs to be carried out in iterations, not to bypass any meaningful feedback. The assessment then becomes a challenge for those designers who are interested in exploring innovative approaches and simplified systems. Unlike in several developed countries, so far, nuclear energy has been only marginally used in small and medium sized developing countries. One of the important reasons for this has been the lack of competitive commercial nuclear options with small and medium sized reactors (SMRs). Then, the challenge for SMR designers has been to design simpler plants in order to counterbalance the well known penalties of economy of scale. The lack of experience with SMRs in small and medium sized developing countries could be viewed as practical proof of the lack of commercial success of such reactors. Fossil fuelled gas turbine technologies offer very competitive energy options available from tens to hundreds of MW(e), with

  17. A Modeling Approach for Evaluating the Coupled Riparian Vegetation-Geomorphic Response to Altered Flow Regimes

    Science.gov (United States)

    Manners, R.; Wilcox, A. C.; Merritt, D. M.

    2016-12-01

    The ecogeomorphic response of riparian ecosystems to a change in hydrologic properties is difficult to predict because of the interactions and feedbacks among plants, water, and sediment. Most riparian models of community dynamics assume a static channel, yet geomorphic processes strongly control the establishment and survival of riparian vegetation. Using a combination of approaches that includes empirical relationships and hydrodynamic models, we model the coupled vegetation-topographic response of three cross-sections on the Yampa and Green Rivers in Dinosaur National Monument to a shift in the flow regime. The locations represent the variable geomorphology and vegetation composition of these canyon-bound rivers. We account for the inundation and hydraulic properties of vegetation plots surveyed over three years within the International River Interface Cooperative (iRIC) Fastmech model, equipped with a vegetation module that accounts for flexible stems and plant reconfiguration. The presence of functional groupings of plants (those plants that respond similarly to environmental factors such as water availability and disturbance) is determined from flow response curves developed for the Yampa River. Using field measurements of vegetation morphology, distance from the channel centerline, and dominant particle size, together with modeled inundation properties, we develop an empirical relationship between these variables and topographic change. We evaluate vegetation and channel form changes over decadal timescales, allowing for the integration of processes over time. From our analyses, we identify thresholds in the flow regime that alter the distribution of plants and reduce geomorphic complexity, predominantly through side-channel and backwater infilling. Simplification of some processes (e.g., empirically-derived sedimentation) and detailed treatment of others (e.g., plant-flow interactions) allows us to model the coupled dynamics of riparian ecosystems and evaluate the impact of

  18. Elements of complexity in subsurface modeling, exemplified with three case studies

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of simpler models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes, and there is discussion of the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  19. Developing a Step Wise Approach to Waste Management and Decommissioning at Sellafield Ltd

    International Nuclear Information System (INIS)

    Weston, Rebecca

    2016-01-01

    Developing a Step Wise Approach to Waste Management and Decommissioning at Sellafield Ltd: • Understand the challenge; • Understand preferred direction of travel; • Characterisation - enabling waste led decommissioning; • Engaging stakeholders; • Focus on the true drivers - alternative ILW approach; • Alternative ILW approach - simplification of waste handling process; • Manage future challenges; • Fit for purpose transport package for decommissioning wastes; • Risk based management framework

  20. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    Mobile devices are an important interactive platform. Due to their limitations in computation, memory, display area and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchy selection scheme for point models and a QoS transmission control strategy are given, based on the operator's area of interest, the interest degree of objects in the virtual environment, and the rendering error; these save energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that our method is efficient.

  1. Effects of model layer simplification using composite hydraulic properties

    Science.gov (United States)

    Kuniansky, Eve L.; Sepulveda, Nicasio; Elango, Lakshmanan

    2011-01-01

    Groundwater provides much of the fresh drinking water to more than 1.5 billion people in the world (Clarke et al., 1996), and in the United States more than 50 percent of citizens rely on groundwater for drinking water (Solley et al., 1998). As aquifer systems are developed for water supply, the hydrologic system is changed. Water pumped from the aquifer system initially can come from some combination of induced recharge, water permanently removed from storage, and decreased groundwater discharge. Once a new equilibrium is achieved, all of the pumpage must come from induced recharge and decreased discharge (Alley et al., 1999). Further development of groundwater resources may result in reductions of surface water runoff and base flows. Competing demands for groundwater resources require good management. Adequate data to characterize the aquifers and confining units of the system (hydrologic boundaries, groundwater levels, streamflow, groundwater pumping, and climatic data for recharge estimation) must be collected in order to quantify the effects of groundwater withdrawals on wetlands, streams, and lakes. Once such data are collected, three-dimensional (3D) groundwater flow models can be developed, calibrated, and used as a tool for groundwater management. The main hydraulic parameters that comprise a regional or subregional model of an aquifer system are the hydraulic conductivity and storage properties of the aquifers and confining units (hydrogeologic units) that make up the system. Many 3D groundwater flow models used to help assess groundwater/surface-water interactions require calculating "effective" or composite hydraulic properties of multilayered lithologic units within a hydrogeologic unit. The calculation of composite hydraulic properties stems from the need to characterize groundwater flow using coarse model layering in order to reduce simulation times while still representing flow through the system accurately. The accuracy of flow models with
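
    The thickness-weighted relations behind such composite properties are standard: for flow parallel to layering the effective conductivity is the arithmetic mean of the layer conductivities, while for flow across the layers it is the harmonic mean. The short Python sketch below illustrates the calculation; the layer values are invented for illustration and are not from the study.

```python
def composite_conductivity(thicknesses, k_values):
    """Thickness-weighted composite K: arithmetic mean for horizontal flow
    (parallel to layering), harmonic mean for vertical flow (across it)."""
    total = sum(thicknesses)
    kh = sum(b * k for b, k in zip(thicknesses, k_values)) / total
    kv = total / sum(b / k for b, k in zip(thicknesses, k_values))
    return kh, kv

# Three lithologic layers collapsed into one model layer (values invented).
b = [10.0, 2.0, 8.0]      # thicknesses, m
k = [25.0, 0.01, 40.0]    # hydraulic conductivities, m/d
kh, kv = composite_conductivity(b, k)
print(f"composite Kh = {kh:.2f} m/d, composite Kv = {kv:.4f} m/d")
```

    Note how the thin low-conductivity layer barely affects the composite horizontal conductivity but dominates the vertical one, which is exactly why composite properties must be computed rather than simply averaged.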

  2. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    Science.gov (United States)

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
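
    The gap between deterministic fixed points and stochastic modes described here can be reproduced with a few lines of code. The sketch below, with assumed (not the authors') rate constants for a positively autoregulated gene, locates the ODE fixed points numerically and runs a Gillespie simulation of the same two reactions; the event-sampled histogram gives a rough proxy for the stationary mode.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, K, h, d = 1.0, 10.0, 50.0, 4, 0.1   # assumed rate constants

def prod(x):
    # production with positive autoregulation (Hill kinetics)
    return a0 + a1 * x**h / (K**h + x**h)

# Deterministic fixed points: sign changes of prod(x) - d*x on a fine grid.
xg = np.linspace(0.0, 300.0, 30001)
f = prod(xg) - d * xg
roots = xg[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print("ODE fixed points near:", np.round(roots, 1))

# Gillespie SSA for the same two reactions (production and degradation).
x, t, samples = 0, 0.0, []
while t < 5000.0:
    r_prod, r_deg = prod(x), d * x
    rtot = r_prod + r_deg
    t += rng.exponential(1.0 / rtot)
    x += 1 if rng.random() < r_prod / rtot else -1
    samples.append(x)
counts = np.bincount(samples)
print("stochastic mode (event-sampled):", counts.argmax())
```

    With these parameters the deterministic system is bistable (stable fixed points near 10 and 105 copies), while a finite stochastic trajectory may spend essentially all of its time near one of them, illustrating how fixed points and modes can diverge in small systems.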

  3. A new model for the simplification of particle counting data

    Directory of Open Access Journals (Sweden)

    M. F. Fadal

    2012-06-01

    Full Text Available This paper proposes a three-parameter mathematical model to describe the particle size distribution in a water sample. The proposed model offers some conceptual advantages over two other models reported on previously, and also provides a better fit to the particle counting data obtained from 321 water samples taken over three years at a large South African drinking water supplier. Using the data from raw water samples taken from a moderately turbid, large surface impoundment, as well as samples from the same water after treatment, typical ranges of the model parameters are presented for both raw and treated water. Once calibrated, the model allows the calculation and comparison of total particle number and volumes over any randomly selected size interval of interest.
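
    As a hedged illustration of how such a calibrated model is used, the sketch below fits a three-parameter form to binned particle counts and then integrates it over an arbitrary size interval. The functional form (a power law with an exponential cut-off) and the data are assumptions for illustration only, not the model or data of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def psd(dp, A, beta, dp0):
    # assumed three-parameter form: power law with exponential cut-off
    return A * dp ** (-beta) * np.exp(-dp / dp0)

dp = np.array([2, 3, 5, 7, 10, 15, 25, 50], float)               # um
counts = np.array([5200, 2100, 640, 280, 110, 38, 9, 1], float)  # per mL
params, _ = curve_fit(psd, dp, counts, p0=[1e4, 2.0, 20.0],
                      bounds=(0.0, np.inf))
A, beta, dp0 = params
print(f"A = {A:.3g}, beta = {beta:.2f}, dp0 = {dp0:.1f} um")

# Once calibrated, counts over any size interval follow by integration.
grid = np.linspace(5.0, 15.0, 2001)
total = np.sum(psd(grid, *params)) * (grid[1] - grid[0])
print(f"particles/mL between 5 and 15 um: {total:.0f}")
```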

  4. Simple Planar Truss (Linear, Nonlinear and Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Frydrýšek Karel

    2016-11-01

    Full Text Available This article deals with a simple planar and statically determinate pin-connected truss. It demonstrates the processes and methods of derivation and solution according to 1st and 2nd order theories. The article applies linear and nonlinear approaches and their simplifications via a Maclaurin series. Programming connected with the stochastic Simulation-Based Reliability Method (i.e., the direct Monte Carlo approach) is used to conduct a probabilistic reliability assessment (i.e., a calculation of the probability that plastic deformation will occur in members of the truss).
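
    A minimal sketch of the direct Monte Carlo reliability idea follows: sample the random load and yield stress, and estimate the probability that the member stress exceeds yield. The distributions and section properties are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
F = rng.normal(80e3, 12e3, n)       # axial member force, N (assumed)
A = 4.0e-4                          # cross-sectional area, m^2 (assumed)
fy = rng.normal(275e6, 20e6, n)     # yield stress, Pa (assumed)

p_plastic = np.mean(F / A > fy)     # direct Monte Carlo estimate
print(f"P(plastic deformation in the member) ~ {p_plastic:.4f}")
```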

  5. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  6. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  7. A Quantitative Reasoning Approach to Algebra Using Inquiry-Based Learning

    Directory of Open Access Journals (Sweden)

    Victor I. Piercey

    2017-07-01

    Full Text Available In this paper, I share a hybrid quantitative reasoning/algebra two-course sequence that challenges the common assumption that quantitative literacy and reasoning are less rigorous mathematics alternatives to algebra and illustrates that a quantitative reasoning framework can be used to teach traditional algebra. The presentation is made in two parts. In the first part, which is somewhat philosophical and theoretical, I explain my personal perspective of what I mean by “algebra” and “doing algebra.” I contend that algebra is a form of communication whose value is precision, which allows us to perform algebraic manipulations in the form of simplification and solving moves. A quantitative reasoning approach to traditional algebraic manipulations rests on intentional and purposeful use of simplification and solving moves within contextual situations. In part 2, I describe a 6-week instructional module intended for undergraduate business students that was delivered to students who had placed into beginning algebra. The perspective described in part 1 heavily informed the design of this module. The course materials, which involve the use of Excel in multiple authentic contexts, are built around the use of inquiry-based learning. Upon completion of this module, the percentage of students who successfully complete model problems in an assessment is in the same range as surveyed students in precalculus and calculus, approximately two “grade levels” ahead of their placement.

  8. An Analysis of Simplification Strategies in a Reading Textbook of Japanese as a Foreign Language

    Directory of Open Access Journals (Sweden)

    Kristina HMELJAK SANGAWA

    2016-06-01

    Full Text Available Reading is one of the bases of second language learning, and it can be most effective when the linguistic difficulty of the text matches the reader's level of language proficiency. The present paper reviews previous research on the readability and simplification of Japanese texts, and presents an analysis of a collection of simplified texts for learners of Japanese as a foreign language. The simplified texts are compared to their original versions to uncover different strategies used to make the texts more accessible to learners. The list of strategies thus obtained can serve as useful guidelines for assessing, selecting, and devising texts for learners of Japanese as a foreign language.

  9. Use of simplified models in the performance assessment of a high-level waste repository system in Japan

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mohanty, Sitakanta; Kanno, Takeshi; Tochigi, Yoshikatsu

    2005-01-01

    This paper explores simplifications to the H12 performance assessment model to enhance performance in Monte Carlo analyses. It is shown that reference case results similar to those of the H12 model can be derived by describing the buffer material surrounding a waste package as a planar body. Other possible simplifications to the performance assessment model, in areas related to the stratification of the host rock transmissivity domain and solubility constraints in the buffer material, are explored. (author)

  10. Reduction and technical simplification of testing protocol for walking based on repeatability analyses: An Interreg IVa pilot study

    Directory of Open Access Journals (Sweden)

    Nejc Sarabon

    2010-12-01

    Full Text Available The aim of this study was to define the most appropriate gait measurement protocols to be used in our future studies in the Mobility in Ageing project. A group of young healthy volunteers took part in the study. Each subject carried out a 10-metre walking test at five different speeds (preferred, very slow, very fast, slow, and fast). Each walking speed was repeated three times, making a total of 15 trials, which were carried out in a random order. Each trial was simultaneously analysed by three observers using three different technical approaches: a stopwatch, photocells, and an electronic kinematic dress. In analysing the repeatability of the trials, the results showed that three of the five self-selected walking speeds (preferred, very fast, and very slow) had a significantly higher repeatability of average walking velocity, step length and cadence than the other two speeds. Additionally, the data showed that one of the three technical methods for gait assessment has better metric characteristics than the other two. In conclusion, based on repeatability and on technical and organizational simplification, this study helped us to define a simple and reliable walking test to be used in the main study of the project.
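
    Repeatability comparisons of this kind are often summarized by a within-condition coefficient of variation across the repeated trials; the sketch below shows the calculation on invented velocities (not the study's data).

```python
import numpy as np

trials = {                     # average velocity (m/s), three trials each
    "very slow": [0.62, 0.65, 0.63],
    "slow":      [0.95, 1.04, 0.88],
    "preferred": [1.31, 1.33, 1.30],
    "fast":      [1.66, 1.55, 1.74],
    "very fast": [1.95, 1.97, 1.92],
}
for speed, v in trials.items():
    v = np.asarray(v)
    cv = v.std(ddof=1) / v.mean() * 100.0   # within-condition CV, %
    print(f"{speed:>9}: CV = {cv:4.1f} %")
```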

  11. Ecosystem models are by definition simplifications of the real ...

    African Journals Online (AJOL)

    spamer

    to calculate changes in total phytoplankton vegetative biomass with time ... into account when modelling phytoplankton population dynamics. ... Then, the means whereby the magnitude of ... There was increased heat input and slight stratification from mid to ... conditions must be optimal and the water should be extremely ...

  12. Study of the behaviour of trace elements in estuaries: experimental approaches and modeling

    International Nuclear Information System (INIS)

    Dange, Catherine

    2002-01-01

    the biogeochemistry of Cd, Co and Cs in the estuarine environment and the knowledge obtained in the field. Experiments performed both in the laboratory and in situ were necessary to check the validity of the model's assumptions and to evaluate model parameters that cannot be measured directly, such as the sorption properties of natural particles. Radiotracers (109Cd, 57Co, 134Cs) were used to determine the key physico-chemical processes and environmental variables that control the speciation and fate of Cd, Co and Cs. This approach, based on spiking with various radionuclides, allowed us to evaluate the affinity constants of particles from the four estuaries for the studied metals (global intrinsic complexation and exchange constants), as well as the exchangeable particulate fraction, estimated by comparing the distribution coefficients measured for the natural metals with those of their radioactive analogues. Other parameters needed to build the model (specific surface area, concentration of active surface sites, mean intrinsic acid-base constants, ...) were estimated independently by various experimental approaches applied in the laboratory to particle samples taken throughout the estuaries (electrochemical measurements, nitrogen adsorption using the BET method, ...). The validation results indicate that, in spite of its simplifications, the model satisfactorily reproduces the dissolved/particulate distributions measured for Cd, Co and Cs. For predictive purposes, this type of model must be coupled with a hydro-sedimentary transport model. (author)

  13. A stochastic approach to long term operation planning of hydrothermal systems; Uma abordagem estocastica para o planejamento a longo prazo da operacao de sistemas hidrotermicos

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Marinho G. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Inst. de Ciencias Matematicas; Soares, Secundino; Cruz Junior, Gelson da; Vinhal, Cassio D.N. [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Eletrica

    1996-01-01

    This paper is concerned with the long term operation of hydro-thermal power systems. The problem is approached by a deterministic optimization technique coupled to an inflow forecasting model in an open-loop feedback framework on a monthly basis. The paper aims to compare the solution obtained by this approach with Stochastic Dynamic Programming (SDP), which has been accepted for more than two decades as the better solution to deal with inflow uncertainty in long term planning. The comparison was carried out in systems with a single plant, simulating the operation throughout a period of five years under historical inflow conditions and evaluating the cost of the complementary thermal generation. Results show that the proposed approach can handle uncertainty as effectively as SDP. Furthermore, it does not require modeling simplifications, such as composite reservoirs, to deal with multi hydro plant systems. 10 refs., 1 tab.

  14. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
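
    The sketch below illustrates, with entirely assumed distributions, how a multinomial strain-allocation step and a consumer phase module fit together in a Monte Carlo exposure model; it is a toy, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_iter = 100_000
p_strain = np.array([0.55, 0.30, 0.15])  # prevalence of three strain groups
p_tox = np.array([0.05, 0.35, 0.80])     # P(enterotoxin-producing) per group

cells0 = rng.lognormal(mean=4.0, sigma=1.0, size=n_iter)     # cells/serving
# Multinomial draw of the dominant strain group per serving (simplified to
# a single draw per serving here).
strains = rng.multinomial(1, p_strain, size=n_iter).argmax(axis=1)
toxigenic = rng.random(n_iter) < p_tox[strains]

# Consumer phase module: uncertain home-storage time and temperature.
hours = rng.triangular(0.0, 24.0, 72.0, size=n_iter)
temp = rng.normal(7.0, 2.0, size=n_iter).clip(2.0, 15.0)
growth = 10 ** (0.02 * np.maximum(temp - 6.0, 0.0) * hours)  # toy growth law
risky = toxigenic & (cells0 * growth > 1e5)   # crude enterotoxin threshold
print(f"fraction of risky servings: {risky.mean():.4f}")
```

    Fixing the storage-time and temperature distributions at their upper percentiles, instead of sampling them, shows directly how consumer-behavior uncertainty propagates to the final risk estimate.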

  15. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  16. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  17. Physical models for high burnup fuel

    International Nuclear Information System (INIS)

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper, some models of processes in high burnup fuel developed at the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of fuel heat conductivity, the radial profiles of burnup and plutonium accumulation, restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high burnup fuel can be described rather accurately on the basis of simplified models within a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis to serve as a test for the correct choice of allowable simplifications. This approach was applied at the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in highly reliable predictions in simulations of high burnup fuel

  18. Content of nitrates in potato tubers depending on the organic matter, soil fertilizer, cultivation simplifications applied and storage

    Directory of Open Access Journals (Sweden)

    Jaroslaw Pobereżny

    2015-03-01

    Full Text Available Nitrates naturally occur in plant-based food. The nitrate content in consumable plant organs is small and should not raise concern, provided that the recommended fertilization and harvest dates of the original plants are observed. The aim was to determine the effect of applying various forms of organic matter, a soil fertilizer, and cultivation simplifications in growing potato (Solanum tuberosum L.) on the content of nitrates in tubers of the mid-early cultivar 'Satina' after harvest and after 6 months of storage. Introducing cultivation simplifications involves limiting mineral fertilization by 50% as well as limiting chemical protection. The soil fertilizer was applied at 0.6 (autumn), 0.3 (spring), and 0.3 L ha-1 (during the vegetation period). The content of nitrates was determined with the ion-selective method (multi-purpose computer device CX-721, Elmetron). The lowest amount of nitrates was recorded in tubers from the plots without organic matter, with a 50% rate of mineral fertilization and the soil fertilizer (120.5 mg kg-1 FW). The use of varied organic matter resulted in a significant increase in the nitrate content of tubers, and the smallest effect on nitrate accumulation was reported for straw. The soil fertilizer significantly decreased the nitrate content of tubers, by 15% at 100% NPK and 10.4% at 50% NPK. After 6 months of storage, irrespective of the experimental factors, the nitrate content decreased by 26% in the fertilization experiment and by 19.9% in the experiment with limited protection.

  19. Probabilistic models for reactive behaviour in heterogeneous condensed phase media

    Science.gov (United States)

    Baer, M. R.; Gartling, D. K.; DesJardin, P. E.

    2012-02-01

    This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.

  20. Injury Based on Its Study in Experimental Models

    Directory of Open Access Journals (Sweden)

    M. Mendes-Braz

    2012-01-01

    Full Text Available The present review focuses on the numerous experimental models used to study the complexity of hepatic ischemia/reperfusion (I/R) injury. Although experimental models of hepatic I/R injury represent a compromise between the clinical reality and experimental simplification, the clinical transfer of experimental results is problematic because of anatomical and physiological differences and the inevitable simplification of experimental work. In this review, the strengths and limitations of the various models of hepatic I/R are discussed. Several strategies to protect the liver from I/R injury have been developed in animal models, and some of these might find their way into clinical practice. We also attempt to highlight the fact that the mechanisms responsible for hepatic I/R injury depend on the experimental model used, and that the therapeutic strategies therefore also differ according to the model used. The choice of model must thus be adapted to the clinical question being addressed.

  1. Shock - A reappraisal: The holistic approach

    Directory of Open Access Journals (Sweden)

    Fabrizio Giuseppe Bonanno

    2012-01-01

    Full Text Available Shock, as a reaction to a life-threatening condition, needs to be reclassified in a timely and more scientific synopsis. It is no longer possible or beneficial to avoid a holistic approach in critical illness. The semantics of critical illness has often been unfriendly in the literature; a simplification that eliminates conceptual pleonasms and misnomers, under the exclusive light of physiology and pathophysiology, would be advantageous. Speaking one language to describe the same phenomenon worldwide is essential for understanding; moreover, it increases focus on the characterization and significance of the phenomena.

  2. Modelling of radionuclide transport in forests: Review and future perspectives

    International Nuclear Information System (INIS)

    Shaw, G.; Schell, W.; Linkov, I.

    1997-01-01

    Ecological modeling is a powerful tool which can be used to synthesize information on the dynamic processes which occur in ecosystems. Models of radionuclide transport in forests were first constructed in the mid-1960s, when the consequences of global fallout from nuclear weapons tests and waste disposal in the environment were of great concern. Such models were developed based on site-specific experimental data and were designed to address local needs. These models had limited applicability in evaluating distinct ecosystems and deposition scenarios. Given the scarcity of information, the same experimental data sets were often used both for model calibration and validation, an approach which clearly constitutes a methodological error. Even though the early modeling attempts were far from faultless, they established a useful conceptual approach in that they tried to capture general processes in ecosystems and thus had a holistic nature. Later, radioecological modeling attempted to reveal ecosystem properties by separating the component parts from the whole system, as an approach to simplification. This method worked well for radionuclide transport in agricultural ecosystems, in which the biogeochemistry of radionuclide cycling is relatively well understood and can be influenced by fertilization. Several models have been successfully developed and applied to human dose evaluation and emergency response to contaminating events in agricultural lands

  3. R-LODs: fast LOD-based ray tracing of massive models

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Sung-Eui; Lauterbach, Christian; Manocha, Dinesh

    2006-08-25

    We present a novel LOD (level-of-detail) algorithm to accelerate ray tracing of massive models. Our approach computes drastic simplifications of the model, and the LODs are well integrated with the kd-tree data structure. We introduce a simple and efficient LOD metric to bound the error for primary and secondary rays. The LOD representation has small runtime overhead, and our algorithm can be combined with ray coherence techniques and cache-coherent layouts to improve performance. In practice, the use of LODs can alleviate aliasing artifacts and improve memory coherence. We implement our algorithm on both 32-bit and 64-bit machines and are able to achieve up to 2.20 times improvement in frame rate when rendering models consisting of tens or hundreds of millions of triangles, with little loss in image quality.

  4. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge

    2016-01-01

    It is computed by solving a Poisson equation which includes flow turbulence cross-correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models

  5. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge

    2017-01-01

    It is computed by solving a Poisson equation which includes flow turbulence cross-correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models

  6. Use of process indices for simplification of the description of vapor deposition systems

    International Nuclear Information System (INIS)

    Kajikawa, Yuya; Noda, Suguru; Komiyama, Hiroshi

    2004-01-01

    Vapor deposition is a complex process, including gas-phase, surface, and solid-phase phenomena. Because of the complexity of chemical and physical processes occurring in vapor deposition processes, it is difficult to form a comprehensive, fundamental understanding of vapor deposition and to control such systems for obtaining desirable structures and performance. To overcome this difficulty, we present a method for simplifying the complex description of such systems. One simplification method is to separate complex systems into multiple elements, and determine which of these are important elements. We call this method abridgement. The abridgement method retains only the dominant processes in a description of the system, and discards the others. Abridgement can be achieved by using process indices to evaluate the relative importance of the elementary processes. We describe the formulation and use of these process indices through examples of the growth of continuous films, initial deposition processes, and the formation of the preferred orientation of polycrystalline films. In this paper, we propose a method for representing complex vapor deposition processes as a set of simpler processes
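
    A minimal sketch of such a process index follows: a Damköhler-like ratio of surface reaction to gas-phase diffusion decides which elementary process must be retained in the abridged description. The numerical values are assumptions for illustration, not from the paper.

```python
k_s = 0.8     # surface reaction velocity, m/s (assumed)
D = 2.0e-4    # gas-phase diffusivity, m^2/s (assumed)
L = 1.0e-2    # characteristic diffusion length, m (assumed)

Da = k_s * L / D   # Damkohler-like process index
if Da > 10:
    regime = "transport-limited: keep gas-phase diffusion, abridge surface detail"
elif Da < 0.1:
    regime = "reaction-limited: keep surface kinetics, abridge gas-phase detail"
else:
    regime = "mixed control: both processes must be retained"
print(f"Da = {Da:.0f} -> {regime}")
```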

  7. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Full text: Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and to identify sources of error in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months), compatibility of the scintillation cocktail/absorption solution (with particular regard to photoluminescence and chemiluminescence), reduction in chemical quenching (moisture reduction), understanding the counting hardware and its relevance, and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially

  8. System learning approach to assess sustainability and ...

    Science.gov (United States)

    This paper presents a methodology that combines the power of an Artificial Neural Network and Information Theory to forecast variables describing the condition of a regional system. The novelty and strength of this approach lie in the application of Fisher information, a key method in Information Theory, to preserve trends in the historical data and prevent overfitting in projections. The methodology was applied to demographic, environmental, food and energy consumption, and agricultural production in the San Luis Basin regional system in Colorado, U.S.A. These variables are important for tracking conditions in human and natural systems. However, available data are often so far out of date that they limit the ability to manage these systems. Results indicate that the approaches developed provide viable tools for forecasting outcomes with the aim of assisting management toward sustainable trends. This methodology is also applicable for modeling different scenarios in other dynamic systems. Indicators are indispensable for tracking conditions in human and natural systems; however, available data are sometimes far out of date and limit the ability to gauge system status. Techniques like regression and simulation are not sufficient because system characteristics have to be modeled in ways that risk oversimplification of complex dynamics. This work presents a methodology combining the power of an Artificial Neural Network and Information Theory to capture patterns in a real dyna

  9. Angular overlap model in actinides

    International Nuclear Information System (INIS)

    Gajek, Z.; Mulak, J.

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed

  10. Angular overlap model in actinides

    Energy Technology Data Exchange (ETDEWEB)

    Gajek, Z.; Mulak, J. (Polska Akademia Nauk, Wroclaw (PL). Inst. Niskich Temperatur i Badan Strukturalnych)

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed.

  11. Modeling assumptions influence on stress and strain state in 450 t cranes hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

    Full Text Available This work investigates FEM simulation of the stress and strain state of a selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The trolley model was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications in selected construction nodes) on the value of maximum stress and strain and its area of occurrence was analyzed. The aim of this study was to determine whether simplifications which reduce the time required to prepare the model and perform calculations (e.g., a rigid connection instead of contact) substantially change the characteristics of the model.

  12. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
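
    The sketch below shows one way such a dynamic EROI function can be written, using an experience-curve form in which EROI improves with cumulative production; the functional form and all parameters are assumptions for illustration, not the paper's calibration.

```python
b = 0.32                    # learning exponent (assumed)
eroi0, eroi_max = 8.0, 40.0 # initial and saturation EROI (assumed)
cum = 1.0                   # cumulative production to date, EJ (assumed)

def eroi(cum_production):
    # experience curve: EROI rises with cumulative production, capped
    return min(eroi_max, eroi0 * cum_production ** b)

for year in range(2025, 2061):
    gross = 0.5 * 1.07 ** (year - 2025)     # EJ/yr, assumed 7% growth
    e = eroi(cum)
    net = gross * (1.0 - 1.0 / e)           # energy left after reinvestment
    cum += gross
    if year % 10 == 0:
        print(f"{year}: EROI = {e:5.1f}, net supply = {net:5.2f} EJ/yr")
```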

  13. Advertising in the Sznajd Marketing Model

    Science.gov (United States)

    Schulze, Christian

    The traditional Sznajd model, as well as its Ochrombel simplification for opinion spreading, is applied to marketing with the help of advertising. The larger the lattice, the smaller the amount of advertising needed to convince the whole market.
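
    The model is simple enough to reproduce in a few lines. The sketch below runs the Ochrombel-simplified dynamics on a one-dimensional ring with a small advertising probability; the parameters are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def run(L, p_ad, max_steps=4_000_000):
    s = rng.choice([-1, 1], size=L)          # random initial opinions
    for step in range(max_steps):
        if rng.random() < p_ad:
            s[rng.integers(L)] = 1           # advertising pushes opinion +1
        i = rng.integers(L)
        j = (i + rng.choice([-1, 1])) % L    # random neighbour on the ring
        s[j] = s[i]                          # Ochrombel rule: i convinces j
        if abs(int(s.sum())) == L:           # full consensus reached
            return step, int(s[0])
    return max_steps, 0

for L in (101, 1001):
    steps, opinion = run(L, p_ad=0.01)
    print(f"L={L}: consensus on {opinion:+d} after {steps} steps")
```

    Sweeping p_ad against L is the natural experiment here: it probes the claim that larger lattices need proportionally less advertising to reach consensus on the advertised opinion.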

  14. Advertising effects in Sznajd marketing model

    OpenAIRE

    Christian Schulze

    2002-01-01

    The traditional Sznajd model, as well as its Ochrombel simplification for opinion spreading, is applied to marketing with the help of advertising. The larger the lattice, the smaller the amount of advertising needed to convince the whole market.

  15. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  16. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation to fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is based on an active approach, i.e., an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.
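
    A toy version of the idea: excite the system with an auxiliary input, simulate each pre-described model, and select the one with the smallest residual against the measured output. The first-order candidate models below are assumptions for illustration, not the paper's wind turbine models.

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.1
t = np.arange(0.0, 20.0, dt)
u = np.sign(np.sin(0.5 * t))                 # auxiliary excitation input

def simulate(gain, tau):
    # forward-Euler simulation of a first-order lag: tau*dy/dt = gain*u - y
    y = np.zeros_like(t)
    for i in range(1, len(t)):
        y[i] = y[i-1] + dt * (gain * u[i-1] - y[i-1]) / tau
    return y

candidates = {"nominal": (2.0, 1.5),
              "degraded gain": (1.2, 1.5),
              "slow actuator": (2.0, 4.0)}
y_meas = simulate(1.2, 1.5) + rng.normal(0.0, 0.05, t.size)  # truth: degraded

scores = {name: float(np.mean((y_meas - simulate(*p)) ** 2))
          for name, p in candidates.items()}
best = min(scores, key=scores.get)
print({k: round(v, 4) for k, v in scores.items()}, "-> best model:", best)
```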

  17. Criticism of technology in a state of antagonism. Against simplification, prejudice and ideologies. Technikkritik im Widerstreit. Gegen Vereinfachungen, Vorurteile und Ideologien

    Energy Technology Data Exchange (ETDEWEB)

    Detzer, K A

    1987-01-01

    The book presents a compilation of public lectures, review articles, and statements of opinion from public debates, all referring to topical socio-political problems connected with technology and industry. It is intended to reveal structural interdependencies in order to counter the frequently observed simplifications, prejudices and ideologies, and to point out sound arguments that can be used in a fair discussion, based on pluralistic principles, of the decisions to be taken. Technology and its impacts on industry, politics, education and ethics. (HSCH).

  18. Gaussian-Based Smooth Dielectric Function: A Surface-Free Approach for Modeling Macromolecular Binding in Solvents

    Directory of Open Access Journals (Sweden)

    Arghya Chakravorty

    2018-03-01

    Full Text Available Conventional modeling techniques for macromolecular solvation and its effect on binding, in the framework of Poisson-Boltzmann based implicit solvent models, make use of a geometrically defined surface to depict the separation of the macromolecular interior (low dielectric constant) from the solvent phase (high dielectric constant). Though this simplification saves time and computational resources without significantly compromising the accuracy of free energy calculations, it bypasses some of the key physico-chemical properties of the solute-solvent interface, e.g., the altered flexibility of water molecules and of side chains at the interface, which results in dielectric properties different from both bulk water and the macromolecular interior. Here we present a Gaussian-based smooth dielectric model, an inhomogeneous dielectric distribution model that mimics the effect of macromolecular flexibility and captures the altered properties of surface-bound water molecules. The model thus delivers a smooth transition of dielectric properties from the macromolecular interior to the solvent phase, eliminating any unphysical surface separating the two phases. Using various examples of macromolecular binding, we demonstrate its utility and illustrate the comparison with the conventional two-dielectric model. We also showcase some additional abilities of this model, viz. to account for the effect of electrolytes in the solution and to render the distribution profile of water across a lipid membrane.
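
    The core construction can be sketched in a few lines: each atom contributes a Gaussian density, the overlapped density is mapped to a dielectric value, and the dielectric decays smoothly to that of water with no sharp surface. The widths and dielectric values below are illustrative, not the published parameterization.

```python
import numpy as np

eps_in, eps_out, sigma = 2.0, 80.0, 1.2  # interior/solvent dielectrics; width, A

atoms = np.array([[0.0, 0.0, 0.0],       # toy three-atom "molecule", A
                  [1.5, 0.0, 0.0],
                  [0.8, 1.2, 0.0]])

def epsilon(r):
    d2 = np.sum((atoms - r) ** 2, axis=1)
    # overlapped Gaussian density in [0, 1]: 1 deep inside, 0 in bulk water
    rho = 1.0 - np.prod(1.0 - np.exp(-d2 / (2.0 * sigma**2)))
    return eps_in * rho + eps_out * (1.0 - rho)

for x in (0.0, 1.0, 2.5, 5.0, 8.0):
    print(f"eps at ({x:3.1f}, 0, 0) A: {epsilon(np.array([x, 0.0, 0.0])):6.1f}")
```

    Evaluating along the x-axis shows the point of the construction: the dielectric rises continuously from ~2 inside the molecule toward ~80 in bulk solvent, with no discontinuity at any "surface".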

  19. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    Science.gov (United States)

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    The original crankshaft model is simplified appropriately to balance calculation precision against calculation speed, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration for the crankshaft of an air compressor, this paper compares the computed and experimental modal frequencies of the air compressor crankshaft before and after simplification; the vibration response at a reference point under the constraint conditions is calculated using the simplified model, and the stress distribution of the original model is calculated. The results show that the error between computed and experimental modal frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated carefully in the manufacturing process.

  20. Horizontal bioreactor for ethanol production by immobilized cells. Pt. 3. Reactor modeling and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Woehrer, W

    1989-04-05

    A mathematical model which describes ethanol formation in a horizontal tank reactor containing Saccharomyces cerevisiae immobilized in small beads of calcium alginate has been developed. The design equations combine the flow dynamics of the reactor with the product formation kinetics. The model was verified against 11 continuous experiments in which the dilution rate, feed glucose concentration and bead volume fraction were varied. The model predicts effluent ethanol concentration and CO₂ production rate within the experimental error. A simplification of the model is possible when the feed glucose concentration does not exceed 150 kg/m³. The simplification results in an analytical solution of the design equation and hence can easily be applied for design purposes as well as for optimization studies.
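
    As a hedged illustration of why a simplification can yield an analytical solution, the sketch below assumes the kinetics reduce to first order in substrate, so the plug-flow design equation integrates in closed form; the rate constant and yield are invented, not the paper's calibration.

```python
import math

S0 = 100.0      # feed glucose, kg/m^3 (assumed)
k1 = 0.9        # lumped first-order rate constant, 1/h (assumed)
Y = 0.48        # ethanol yield, kg ethanol per kg glucose (assumed)

for tau in (1.0, 2.0, 4.0):                 # residence times, h
    S = S0 * math.exp(-k1 * tau)            # closed-form plug-flow solution
    ethanol = Y * (S0 - S)
    print(f"tau = {tau:.0f} h: glucose = {S:6.2f}, ethanol = {ethanol:6.2f} kg/m^3")
```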

  1. Maraviroc/raltegravir simplification strategy following 6 months of quadruple therapy with tenofovir/emtricitabine/maraviroc/raltegravir in treatment-naive HIV patients.

    Science.gov (United States)

    Pradat, Pierre; Durant, Jacques; Brochier, Corinne; Trabaud, Mary-Anne; Cottalorda-Dufayard, Jacqueline; Izopet, Jacques; Raffi, François; Lucht, Frédéric; Gagnieu, Marie-Claude; Gatey, Caroline; Jacomet, Christine; Vassallo, Matteo; Dellamonica, Pierre; Cotte, Laurent

    2016-11-01

    We assessed the virological efficacy of a 6 month maraviroc/raltegravir simplification strategy following 6 months of quadruple therapy combining tenofovir disoproxil fumarate/emtricitabine with maraviroc/raltegravir. HIV-1-infected naive patients were enrolled in an open label, single-arm, Phase 2 trial. All patients received maraviroc 300 mg twice daily, raltegravir 400 mg twice daily and tenofovir/emtricitabine for 24 weeks. Patients with stable HIV-RNA <50 copies/mL then continued on maraviroc/raltegravir alone. Median baseline HIV-RNA was 4.3 log copies/mL. All patients had CCR5-tropic viruses by genotropism and phenotropism assays. All but one patient had an HIV-RNA < 50 copies/mL at W24 and entered the simplification phase. Virological success was maintained at W48 in 88% (90% CI 79%-97%) of patients. The N155H mutation was detected at failure in one patient. No tropism switch was observed. Raltegravir and maraviroc plasma exposures were satisfactory in 92% and 79%, respectively, of 41 samples from 21 patients. Five severe adverse events (SAEs) were observed up to W48; none was related to the study drugs. Four patients presented grade 3 AEs; none was related to the study. No grade 4 AE was observed. No patient died. Maraviroc/raltegravir maintenance therapy following a 6 month induction phase with maraviroc/raltegravir/tenofovir/emtricitabine was well tolerated and maintained virological efficacy in these carefully selected patients. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. String model of black hole microstates

    International Nuclear Information System (INIS)

    Larsen, F.

    1997-01-01

    The statistical mechanics of black holes arbitrarily far from extremality is modeled by a gas of weakly interacting strings. As an effective low-energy description of black holes the string model provides several highly nontrivial consistency checks and predictions. Speculations on a fundamental origin of the model suggest surprising simplifications in nonperturbative string theory, even in the absence of supersymmetry. copyright 1997 The American Physical Society

  3. Inferring catchment precipitation by doing hydrology backward : A test in 24 small and mesoscale catchments in Luxembourg

    NARCIS (Netherlands)

    Krier, R.; Matgen, P.; Goergen, K.; Pfister, L.; Hoffmann, L.; Kirchner, J.W.; Uhlenbrook, S.; Savenije, H.H.G.

    2012-01-01

    The complexity of hydrological systems and the necessary simplification of models describing these systems remain major challenges in hydrological modeling. Kirchner's (2009) approach of inferring rainfall and evaporation from discharge fluctuations by “doing hydrology backward” is based on the
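
    A synthetic-data sketch of the backward approach: generate discharge from a toy storage-discharge law, estimate the sensitivity function g(Q) ≈ -(dQ/dt)/Q from rainless recessions, and invert P ≈ Q + (dQ/dt)/g(Q) (evaporation neglected for brevity). Everything below is synthetic and illustrative, not the Luxembourg data.

```python
import numpy as np

dt = 1.0                                   # h
k, n = 0.002, 1.5                          # toy storage-discharge law Q = k*S**n
P = np.zeros(2000)
P[100:130] = 0.4                           # synthetic 30 h storm, mm/h

S, Q = 100.0, []                           # generate synthetic discharge
for p in P:
    q = k * S**n
    S += dt * (p - q)
    Q.append(q)
Q = np.asarray(Q)

dQdt = np.gradient(Q, dt)
# Estimate g(Q) = -(dQ/dt)/Q on rainless steps away from storm edges.
rainless = (P == 0) & (np.roll(P, 1) == 0) & (np.roll(P, -1) == 0)
g = -dQdt[rainless] / Q[rainless]
ok = g > 0
coef = np.polyfit(np.log(Q[rainless][ok]), np.log(g[ok]), 1)
g_of_Q = lambda q: np.exp(np.polyval(coef, np.log(q)))

# Infer precipitation backward from discharge fluctuations (E neglected).
P_inferred = np.clip(Q + dQdt / g_of_Q(Q), 0.0, None)
print(f"true storm depth: {P.sum():.1f} mm, inferred: {P_inferred.sum():.1f} mm")
```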

  4. Mutual Trust and Cross-Border Enforcement of Judgments in Civil Matters in the EU: Does the Step-by-Step Approach Work?

    NARCIS (Netherlands)

    Zilinsky, M.

    2017-01-01

    Mutual trust is one of the cornerstones of cooperation in the field of European Union private international law. Based on this principle the rules on the cross-border recognition and enforcement of judgments in the European Union are still subject to simplification. The step-by-step approach of the

  5. Two-fluid model stability, simulation and chaos

    CERN Document Server

    Bertodano, Martín López de; Clausse, Alejandro; Ransom, Victor H

    2017-01-01

    This book addresses the linear and nonlinear two-phase stability of the one-dimensional Two-Fluid Model (TFM) material waves and the numerical methods used to solve it. The TFM fluid dynamic stability is a problem that remains open since its inception more than forty years ago. The difficulty is formidable because it involves the combined challenges of two-phase topological structure and turbulence, both nonlinear phenomena. The one dimensional approach permits the separation of the former from the latter. The authors first analyze the kinematic and Kelvin-Helmholtz instabilities with the simplified one-dimensional Fixed-Flux Model (FFM). They then analyze the density wave instability with the well-known Drift-Flux Model. They demonstrate that the Fixed-Flux and Drift-Flux assumptions are two complementary TFM simplifications that address two-phase local and global linear instabilities separately. Furthermore, they demonstrate with a well-posed FFM and a DFM two cases of nonlinear two-phase behavior that are ...

  6. Numerical Coupling of the Particulate Phase to the Plasma Phase in Modeling of Multi-Arc Plasma Spraying

    International Nuclear Information System (INIS)

    Bobzin, K.; Öte, M.

    2017-01-01

    Inherent to the Euler-Lagrange formulation, which can be used to describe particle behavior in plasma spraying, particle in-flight characteristics are determined by calculating the momentum, heat and mass transfer between the plasma jet and individual powder particles. Based on the assumption that the influence of the particulate phase on the fluid phase is insignificant, momentum, heat and mass transfer from the particles to the plasma jet can be neglected using the so-called “one-way coupling” numerical approach. In contrast, so-called “two-way coupling” considers the two-sided transfer between both phases. The former is a common simplification used in the literature to describe the plasma-particle interaction in thermal spraying. This study focuses on the significance of this simplification for the calculated results and shows that its use leads to significant errors in the calculated plasma and particle in-flight characteristics in the three-cathode plasma spraying process. (paper)
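
    To make the difference between the two coupling strategies concrete, the sketch below contrasts them in a minimal 1-D setting: in two-way coupling, the momentum each particle gains through drag is removed from the gas cell it occupies. The drag law, grid and all parameter values are illustrative assumptions, not the model used in the study.

```python
import numpy as np

# Minimal 1-D illustration of one-way vs. two-way momentum coupling.
# All parameters are illustrative, not taken from the cited study.
n_cells, n_particles, dt = 50, 200, 1e-4
u_gas = np.full(n_cells, 100.0)             # gas velocity per cell [m/s]
x_p = np.random.uniform(0, 1, n_particles)  # particle positions [m]
u_p = np.zeros(n_particles)                 # particle velocities [m/s]
m_p = 1e-9                                  # particle mass [kg]
gas_mass_cell = 1e-6                        # gas mass per cell [kg] (toy value)
tau = 1e-3                                  # particle response time [s]

def step(u_gas, x_p, u_p, two_way):
    cells = np.minimum((x_p * n_cells).astype(int), n_cells - 1)
    # Stokes-type drag accelerates each particle toward the local gas velocity.
    du_p = (u_gas[cells] - u_p) / tau * dt
    u_p = u_p + du_p
    if two_way:
        # Momentum conservation: what the particles gain, the gas loses.
        for c, dmom in zip(cells, m_p * du_p):
            u_gas[c] -= dmom / gas_mass_cell
    x_p = (x_p + u_p * dt) % 1.0            # periodic domain for simplicity
    return u_gas, x_p, u_p

for _ in range(1000):
    u_gas, x_p, u_p = step(u_gas, x_p, u_p, two_way=True)
print("mean gas velocity after particle loading:", u_gas.mean())
```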

  7. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  8. Evaluating the effects of modeling errors for isolated finite three-dimensional targets

    Science.gov (United States)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui

    2017-10-01

    Optical three-dimensional (3-D) nanostructure metrology utilizes a model-based metrology approach to determine critical dimensions (CDs) that are well below the inspection wavelength. Our project at the National Institute of Standards and Technology is evaluating how to attain key CD and shape parameters from engineered in-die capable metrology targets. More specifically, the quantities of interest are determined by varying the input parameters for a physical model until the simulations agree with the actual measurements within acceptable error bounds. As in most applications, establishing a reasonable balance between model accuracy and time efficiency is a complicated task. A well-established simplification is to model the intrinsically finite 3-D nanostructures as either periodic or infinite in one direction, reducing the computationally expensive 3-D simulations to usually less complex two-dimensional (2-D) problems. Systematic errors caused by this simplified model can directly influence the fitting of the model to the measurement data and are expected to become more apparent with decreasing lengths of the structures. We identify these effects using selected simulation results and present experimental setups, e.g., illumination numerical apertures and focal ranges, that can increase the validity of the 2-D approach.

  9. Simplified modeling of liquid-liquid heat exchangers for use in control systems

    International Nuclear Information System (INIS)

    Laszczyk, Piotr

    2017-01-01

    For the last decades, various models of heat exchange processes have been developed to capture their specific dynamic nature. These models have different degrees of complexity depending on the modeling assumptions and simplifications. The complexity of a mathematical model can be critical when the model is to be the basis for deriving a control law, because it directly affects the complexity of the mathematical transformations and of the final control algorithm. In this paper, a simplified cross-convection model for a wide class of heat exchangers is suggested. Apart from very few reports so far, the properties of this modeling approach have never been investigated in detail. The concept for the model is derived from the fundamental principle of energy conservation, combined with a simple dynamical approximation in the form of ordinary differential equations. Within this framework, a simplified tuning procedure for the proposed model is suggested and verified for plate and spiral-tube heat exchangers based on experimental data. The dynamical properties and stability of the suggested model are addressed and a sensitivity analysis is also presented. It is shown that such a modeling approach preserves high modeling accuracy at very low numerical complexity. The validation results show that the suggested modeling and tuning method is useful for practical applications.
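
    As a rough illustration of the kind of low-order model advocated here (not the author's exact cross-convection formulation), a liquid-liquid exchanger can be lumped into two energy-balance ODEs, one per stream, coupled through an overall heat-transfer term; all parameter values below are hypothetical.

```python
from scipy.integrate import solve_ivp

# Lumped two-ODE energy balance for a liquid-liquid heat exchanger.
# Illustrative sketch only; parameters are hypothetical, not the paper's.
V_h, V_c = 2e-3, 2e-3        # stream hold-up volumes [m^3]
q_h, q_c = 1e-4, 1.5e-4      # volumetric flows [m^3/s]
rho_cp = 4.18e6              # water: rho*cp [J/(m^3 K)]
UA = 800.0                   # overall heat transfer coefficient * area [W/K]
T_h_in, T_c_in = 80.0, 20.0  # inlet temperatures [degC]

def rhs(t, T):
    T_h, T_c = T             # lumped outlet temperatures of each stream
    Q = UA * (T_h - T_c)     # heat duty from hot to cold stream [W]
    dT_h = (q_h * rho_cp * (T_h_in - T_h) - Q) / (V_h * rho_cp)
    dT_c = (q_c * rho_cp * (T_c_in - T_c) + Q) / (V_c * rho_cp)
    return [dT_h, dT_c]

sol = solve_ivp(rhs, (0.0, 600.0), [40.0, 30.0], max_step=1.0)
print("steady outlet temperatures:", sol.y[0, -1], sol.y[1, -1])
```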

  10. A model to predict element redistribution in unsaturated soil: Its simplification and validation

    International Nuclear Information System (INIS)

    Sheppard, M.I.; Stephens, M.E.; Davis, P.A.; Wojciechowski, L.

    1991-01-01

    A research model has been developed to predict the long-term fate of contaminants entering unsaturated soil at the surface through irrigation or atmospheric deposition, and/or at the water table through groundwater. The model, called SCEMR1 (Soil Chemical Exchange and Migration of Radionuclides, Version 1), uses Darcy's law to model water movement, and the soil solid/liquid partition coefficient, K_d, to model chemical exchange. SCEMR1 has been validated extensively on controlled field experiments with several soils, aeration statuses and the effects of plants. These validation results show that the model is robust and performs well. Sensitivity analyses identified soil K_d, annual effective precipitation, soil type and soil depth as the four most important model parameters. SCEMR1 consumes too much computer time for incorporation into a probabilistic assessment code. Therefore, we have used SCEMR1 output to derive a simple assessment model. The assessment model reflects the complexity of its parent code and provides a more realistic description of contaminant transport in soils than would a compartment model. Comparison of the performance of the SCEMR1 research model, the simple SCEMR1 assessment model and the TERRA compartment model on a four-year soil-core experiment shows that the SCEMR1 assessment model generally provides conservative soil concentrations. (15 refs., 3 figs.)
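
    The role of K_d in such models can be made explicit through the standard retardation-factor relation: under linear, reversible sorption the contaminant front moves slower than the pore water by a factor R = 1 + ρ_b K_d / θ. The sketch below combines this with a Darcy-type water flux to estimate a migration depth; the soil values are hypothetical and this is not SCEMR1 code.

```python
# Illustration of Kd-controlled retardation in unsaturated soil.
# Hypothetical values; this is not the SCEMR1 implementation.
rho_b = 1.4e3      # soil bulk density [kg/m^3]
theta = 0.30       # volumetric water content [-]
Kd = 0.05          # solid/liquid partition coefficient [m^3/kg]
q = 0.25           # annual effective water flux (Darcy flux) [m/yr]

R = 1.0 + rho_b * Kd / theta    # retardation factor [-]
v_water = q / theta             # pore-water velocity [m/yr]
v_contaminant = v_water / R     # retarded contaminant velocity [m/yr]

print(f"R = {R:.1f}; contaminant front ~{100 * v_contaminant:.2f} m in 100 yr")
```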

  11. PEM fuel cell model suitable for energy optimization purposes

    International Nuclear Information System (INIS)

    Caux, S.; Hankache, W.; Fadel, M.; Hissel, D.

    2010-01-01

    Many fuel cell stack models and fuel cell system models exist. A model must be built with a main objective in mind, sometimes accurate electro-chemical behavior description, sometimes an optimization procedure at the system level. In this paper, based on the fundamental reactions present in a fuel cell stack, an accurate model and identification procedure are presented for future energy management in a Hybrid Electrical Vehicle (HEV). The proposed approach extracts all the important state variables in such a system and, based on the control of the fuel cell's gas flows and temperature, simplification yields a simple electrical model. Assumptions justified by the control of the stack allow the relationships to be simplified while keeping accuracy in the description of the global fuel cell stack behavior from current demand to voltage. Modeled voltage and current dynamic behaviors are compared with actual measurements. The obtained accuracy is sufficient and the model is less time-consuming (versus other previously published system-oriented models), making it suitable for iterative off-line optimization algorithms.

  12. PEM fuel cell model suitable for energy optimization purposes

    Energy Technology Data Exchange (ETDEWEB)

    Caux, S.; Hankache, W.; Fadel, M. [LAPLACE/CODIASE: UMR CNRS 5213, Universite de Toulouse - INPT, UPS, - ENSEEIHT: 2 rue Camichel BP7122, 31071 Toulouse (France); CNRS, LAPLACE, F-31071 Toulouse (France); Hissel, D. [FEMTO-ST ENISYS/FCLAB, UMR CNRS 6174, University of Franche-Comte, Rue Thierry Mieg, 90010 Belfort (France)

    2010-02-15

    Many fuel cell stack models and fuel cell system models exist. A model must be built with a main objective in mind, sometimes accurate electro-chemical behavior description, sometimes an optimization procedure at the system level. In this paper, based on the fundamental reactions present in a fuel cell stack, an accurate model and identification procedure are presented for future energy management in a Hybrid Electrical Vehicle (HEV). The proposed approach extracts all the important state variables in such a system and, based on the control of the fuel cell's gas flows and temperature, simplification yields a simple electrical model. Assumptions justified by the control of the stack allow the relationships to be simplified while keeping accuracy in the description of the global fuel cell stack behavior from current demand to voltage. Modeled voltage and current dynamic behaviors are compared with actual measurements. The obtained accuracy is sufficient and the model is less time-consuming (versus other previously published system-oriented models), making it suitable for iterative off-line optimization algorithms. (author)

  13. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is thereby proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it realizes a combination of statistics and dynamics to a certain extent.

  14. Integrating Tax Preparation with FAFSA Completion: Three Case Models

    Science.gov (United States)

    Daun-Barnett, Nathan; Mabry, Beth

    2012-01-01

    This research compares three different models implemented in four cities. The models integrated free tax-preparation services to assist low-income families with their completion of the Free Application for Federal Student Aid (FAFSA). There has been an increased focus on simplifying the FAFSA process. However, simplification is not the only…

  15. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in the previously developed two-diode models. Furthermore the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular the model matches very accurately for all important points of the I-V curves, i.e. the peak power, short-circuit current and open circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require simple, fast yet accurate model for the PV module. (author)
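
    The flavour of the approach can be sketched as follows: the two-diode output-current equation is implicit in I and is therefore solved iteratively; in simplified formulations of this kind the two saturation currents are often tied together and the ideality factors fixed, leaving roughly four parameters. The values below are placeholders, not those of any particular module.

```python
import numpy as np

# Two-diode PV model solved by damped fixed-point iteration (illustrative
# sketch; parameter values are placeholders, not fitted to any module).
k, q = 1.380649e-23, 1.602176634e-19

def module_current(V, Ipv, I0, Rs, Rp, T=298.15, Ns=60, a1=1.0, a2=1.2):
    Vt = Ns * k * T / q                      # module thermal voltage [V]
    V = np.asarray(V, dtype=float)
    I = np.zeros_like(V)
    for _ in range(300):                     # damped iteration on implicit I
        Vd = V + I * Rs                      # voltage across diode branches
        I_new = (Ipv
                 - I0 * (np.exp(Vd / (a1 * Vt)) - 1.0)  # diffusion diode
                 - I0 * (np.exp(Vd / (a2 * Vt)) - 1.0)  # recombination diode
                 - Vd / Rp)                  # shunt-resistance leakage
        I = 0.5 * I + 0.5 * I_new            # damping aids convergence
    return I

V = np.linspace(0.0, 36.0, 200)
I = module_current(V, Ipv=8.2, I0=1e-10, Rs=0.3, Rp=300.0)
print("estimated peak power [W]:", (V * I).max())
```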

  16. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered

  17. Characterizing and modeling the pressure- and rate-dependent elastic-plastic-damage behaviors of polypropylene-based polymers

    KAUST Repository

    Pulungan, Ditho Ardiansyah; Yudhanto, Arief; Goutham, Shiva; Lubineau, Gilles; Yaldiz, Recep; Schijve, Warden

    2018-01-01

    Polymers in general exhibit pressure- and rate-dependent behavior. Modeling such behavior requires extensive, costly and time-consuming experimental work. Common simplifications may lead to severe inaccuracy when using the model for predicting

  18. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, in which the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, in which both variables and tracks are estimated stochastically using historical records. SynHurG falls into the second category, adopting a purely stochastic approach.
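
    A purely stochastic track generator of the kind described can be sketched in a few lines: sample a genesis location, then evolve heading and translation speed as autoregressive perturbations around climatological means. Everything below (distributions, coefficients) is a toy assumption for illustration and is not SynHurG itself.

```python
import numpy as np

# Toy stochastic tropical-cyclone track generator (illustrative only;
# distributions and AR(1) coefficients are invented, not SynHurG's).
rng = np.random.default_rng(0)

def synthetic_track(n_steps=60, dt_hours=6.0):
    lat, lon = rng.uniform(10, 25), rng.uniform(-80, -40)  # genesis point
    heading = np.deg2rad(rng.normal(300, 20))  # initial bearing [rad]
    speed = rng.normal(5.5, 1.0)               # translation speed [m/s]
    track = [(lat, lon)]
    for _ in range(n_steps):
        # AR(1)-style perturbations around persistence (toy coefficients).
        heading += rng.normal(0.0, np.deg2rad(5))
        speed = 0.9 * speed + 0.1 * rng.normal(5.5, 1.0)
        dist_deg = speed * dt_hours * 3600 / 111e3   # ~degrees of latitude
        lat += dist_deg * np.cos(heading)
        lon += dist_deg * np.sin(heading) / np.cos(np.deg2rad(lat))
        track.append((lat, lon))
    return track

tracks = [synthetic_track() for _ in range(1000)]  # ensemble for risk studies
print("first track start/end:", tracks[0][0], tracks[0][-1])
```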

  19. Computer modelling for better diagnosis and therapy of patients by cardiac resynchronisation therapy

    NARCIS (Netherlands)

    Pluijmert, Marieke; Lumens, Joost; Potse, Mark; Delhaas, Tammo; Auricchio, Angelo; Prinzen, Frits W

    2015-01-01

    Mathematical or computer models have become increasingly popular in biomedical science. Although they are a simplification of reality, computer models are able to link a multitude of processes to each other. In the fields of cardiac physiology and cardiology, models can be used to describe the

  20. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.
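
    For orientation, under power-law hardening σ̄ = Kε̄ⁿ and a linear strain path β = ε₂/ε₁, two of the cited criteria reduce to well-known closed forms (quoted here as textbook results, not the BBC2003-specific expressions evaluated in the paper):

```latex
% Hill (1952) localized necking, valid on the left-hand side (\beta \le 0):
\varepsilon_1^{*} = \frac{n}{1+\beta}, \qquad \beta = \frac{\varepsilon_2}{\varepsilon_1}
% Swift (1952) diffuse necking under proportional loading:
\varepsilon_1^{*} = \frac{2n\left(1+\beta+\beta^{2}\right)}{(1+\beta)\left(2\beta^{2}-\beta+2\right)}
```

    Both recover the Considère-type limit ε₁* = n, Swift's at uniaxial tension (β = -1/2) and Hill's at plane strain (β = 0).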

  1. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  2. Thermophysical modeling for high-resolution digital terrain models

    Science.gov (United States)

    Pelivan, I.

    2018-04-01

    A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions required to reach model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavorable illumination conditions, such as little to no sunlight at all for a period of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environment and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disk-integrated and disk-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
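
    The core of such a solver is a 1-D heat-conduction column beneath each facet, driven by absorbed insolation at the surface and iterated until the temperature profile repeats from one rotation to the next. The sketch below is a generic explicit scheme with invented material constants and a toy flux model, not the optimized facet-specific code described in the paper.

```python
import numpy as np

# Generic 1-D subsurface temperature solver for one facet (illustrative;
# material constants and the flux model are assumptions, not the paper's).
sigma = 5.670374419e-8                      # Stefan-Boltzmann [W m^-2 K^-4]
k, rho, cp, eps = 0.01, 500.0, 600.0, 0.95  # conductivity, density, heat cap.
kappa = k / (rho * cp)                      # thermal diffusivity [m^2/s]
period = 12.4 * 3600.0                      # rotation period [s], comet-like
skin = np.sqrt(kappa * period / np.pi)      # diurnal skin depth [m]
dz = skin / 10.0
z = np.arange(0.0, 10 * skin, dz)           # grid down to ~10 skin depths
dt = 0.2 * dz**2 / kappa                    # explicit stability limit
T = np.full(z.size, 150.0)                  # initial temperature guess [K]

def absorbed_flux(t):
    # Toy insolation: day/night cycle on a flat, unshadowed facet at ~3 au.
    mu = max(0.0, np.sin(2 * np.pi * t / period))
    return 0.95 * 1361.0 / 9.0 * mu         # albedo 0.05, inverse-square law

t = 0.0
for _ in range(int(5 * period / dt)):       # iterate over a few rotations
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    T += kappa * dt * lap
    # Surface energy balance: insolation in, emission out, conduction down.
    T[0] = T[1] + dz / k * (absorbed_flux(t) - eps * sigma * T[1] ** 4)
    T[-1] = T[-2]                           # insulated lower boundary
    t += dt
print("surface temperature at end [K]:", T[0])
```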

  3. Microscopic modelling of doped manganites

    International Nuclear Information System (INIS)

    Weisse, Alexander; Fehske, Holger

    2004-01-01

    Colossal magneto-resistance manganites are characterized by a complex interplay of charge, spin, orbital and lattice degrees of freedom. Formulating microscopic models for these compounds aims at meeting two conflicting objectives: sufficient simplification without excessive restrictions on the phase space. We give a detailed introduction to the electronic structure of manganites and derive a microscopic model for their low-energy physics. Focusing on short-range electron-lattice and spin-orbital correlations we supplement the modelling with numerical simulations

  4. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either on CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed the best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches and researchers are encouraged to implement it into their CH4 emission models.
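
    The three alternatives can be phrased as interchangeable trigger rules acting on the state of a peat layer; the sketch below (threshold values invented for illustration) shows the shared structure: once the trigger fires, the excess over the threshold is released as ebullition.

```python
# The three ebullition triggers compared in the study, phrased as
# interchangeable rules (threshold values are invented for illustration).

def ebullition_ECT(ch4_conc, threshold=0.5):
    """Concentration threshold: release CH4 above a pore-water limit."""
    return max(0.0, ch4_conc - threshold)

def ebullition_EPT(gas_pressure, hydrostatic, threshold=1.15):
    """Pressure threshold: release once the gas pressure exceeds a
    multiple of the local hydrostatic (plus atmospheric) pressure."""
    return max(0.0, gas_pressure - threshold * hydrostatic)

def ebullition_EBG(gas_volume_frac, threshold=0.1):
    """Free-phase gas volume threshold: bubbles detach once the gas
    volume fraction of the pore space exceeds a critical value."""
    return max(0.0, gas_volume_frac - threshold)

# A layer near saturation: the rules fire at different times, which is
# why annual totals can agree while temporal variability differs widely.
print(ebullition_ECT(0.62), ebullition_EPT(1.30, 1.05), ebullition_EBG(0.13))
```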

  5. PETROBRAS P-55: a new approach for the topsides design

    Energy Technology Data Exchange (ETDEWEB)

    Bronneberg, Jos; Maas, Hans [SBM Offshore, Schiedam (Netherlands); GustoMSC, Schiedam (Netherlands); Cyranka, Carlos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)

    2008-07-01

    In July 2007 PETROBRAS awarded Gusto BV the Front End Engineering and Design for the re-design of the topsides for the PETROBRAS-55 Production Platform ('P-55'), seeking simplification and reduction of costs and schedule. The PETROBRAS-55 will add an impressive 180 000 barrels of oil processing capacity to the Roncador Field in the Campos Basin. Despite its large capacity, the functions to be fulfilled on board can be performed by methods well known in the oil industry: oil dehydration by gravity and electrostatic separators, oil stabilization through heating and depressurization, carbon dioxide removal from gas through adsorption by amine, gas dehydration through absorption by glycol, produced water treatment through hydrocyclones and gas flotation, etc. Offshore operations in general, due to their self-reliance, remote location, limited manning and expertise, etc., need to be as rugged, robust, lean and simple in their set-up and operation as possible. The (front end) engineering plays an essential role in the endeavour to obtain these characteristics on an offshore unit. The initial configuration is perceived as one of the key elements that determine, to a great extent, the feasibility of the much sought simplicity. Since the basic design does not cover the complete project, the approach used to meet the objective of delivering a safe, operable and lean design focused on simplifications. The mainly technical approach is described below. (author)

  6. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    The spiral model (Figure 1) was chosen for researching and structuring this thesis. This approach allowed multiple iterations of the source material, applying and refining through iteration. The scope of the research is limited to a literature review.

  7. A Real Options Approach to Nuclear Waste Disposal in Sweden

    International Nuclear Information System (INIS)

    Soederkvist, Jonas; Joensson, Kristian

    2004-04-01

    This report is concerned with an investigation of how the real options approach can be useful for managerial decisions regarding the phase-out of nuclear power generation in Sweden. The problem of interest is the optimal time-schedule for phase-out activities, where the optimal time-schedule is defined in purely economical terms. The approach taken is the actual construction and application of three real options models, which capture different aspects of managerial decisions. The first model concerns when investments in deep disposal facilities should optimally be made. Although the model is a rough simplification of reality, the result is clear: it is economically advantageous to postpone deep disposal forever. The second model focuses on how the uncertainty of future costs relates to managerial investment decisions. Construction of this model required some creativity, as the nuclear phase-out turns out to be quite a special project. The result from the second model is that there can be a value associated with deferral of investments due to the uncertainty of future costs, but the result is less clear-cut compared to the first model. In the third model, we extend an approach suggested by Louberge, Villeneuve and Chesney. The risk of a nuclear accident is introduced through this model and we develop its application to investigate the Swedish phase-out in particular, which implies that waste is continuously disposed of. In the third model, focus is shifted from investment timing to implementation timing. The results from the third model are merely qualitative, as it is considered beyond the scope of this work to quantitatively determine all relevant inputs. It is concluded that the phase-out of nuclear power generation in Sweden is not just another area of application for standard real options techniques. A main reason is that although there are a lot of uncertain issues regarding the phase-out, those uncertainties do not leave a lot of room for managerial flexibility if

  8. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for a lack of theoretical grounding, methodological rigor and empirical validations, and for ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, the thesis proposes methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  9. Modelling and analysis of solar cell efficiency distributions

    Science.gov (United States)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiencies of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by only requiring two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
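
    The chain described (metamodel plus Monte Carlo) has a compact generic form: fit a cheap surrogate to a small number of detailed simulations, then push the input-parameter distributions through it. The sketch below uses a quadratic surrogate and invented input distributions purely to show the mechanics.

```python
import numpy as np

# Metamodel + Monte Carlo sketch for a cell-efficiency distribution.
# The surrogate form and input distributions are invented for illustration.
rng = np.random.default_rng(1)

def detailed_simulation(x):
    # Stand-in for an expensive device simulation (x: normalized inputs).
    return 18.0 + 0.6 * x[0] - 0.4 * x[1] ** 2 + 0.2 * x[0] * x[1]

def features(X):
    # Quadratic basis for the surrogate model.
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

# 1) Fit the metamodel to a small design of experiments.
X = rng.uniform(-1, 1, size=(50, 2))
y = np.array([detailed_simulation(x) for x in X])
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 2) Monte Carlo: propagate production-line scatter through the metamodel.
P = rng.normal(0.0, 0.3, size=(100_000, 2))   # process parameter variations
eta = features(P) @ coef
print(f"efficiency distribution: {eta.mean():.2f}% +/- {eta.std():.2f}%")
```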

  10. MODELS AND METHODS FOR LOGISTICS HUB LOCATION: A REVIEW TOWARDS TRANSPORTATION NETWORKS DESIGN

    Directory of Open Access Journals (Sweden)

    Carolina Luisa dos Santos Vieira

    Full Text Available ABSTRACT Logistics hubs affect the distribution patterns in transportation networks since they are flow-concentrating structures. Indeed, the efficient moving of goods throughout supply chains depends on the design of such networks. This paper presents a literature review on the logistics hub location problem, providing an outline of modeling approaches, solving techniques, and their applicability to such a context. Two categories of models were identified. While multi-criteria models may seem best suited to find optimal locations, they do not allow an assessment of the impact of new hubs on goods flow and on the transportation network. On the other hand, single-criterion models, which provide location and flow allocation information, adopt network simplifications that hinder an accurate representation of the relationship between origins, destinations, and hubs. In view of these limitations we propose future research directions for addressing the real challenges of logistics hub location in transportation network design.

  11. Modeling of a new 2D Acceleration Sensor Array using SystemC-AMS

    International Nuclear Information System (INIS)

    Markert, Erik; Dienel, Marco; Herrmann, Goeran; Mueller, Dietmar; Heinkel, Ulrich

    2006-01-01

    This paper presents an approach for modeling and simulation of a new 2D acceleration sensor array using SystemC-AMS. The sensor array consists of six single acceleration sensors with different detection axes. These single sensors comprise four capacitive segments and one mass segment, aligned in a semicircle. The redundant sensor information is used for offset correction. Modeling of the single sensors is achieved by simplifying the sensor structure into 11 points and using analytic equations for capacity changes, currents and torques. This model was extended with a PWM feedback circuit to keep the sensor displacement in a linear region. In this paper the single sensor model is duplicated, considering different positions of the seismic mass and thus different detection axes for the single sensors. The measured accelerations of the sensors are merged with different weights depending on the orientation. This also reduces the calculation effort.

  12. Recomputing Causality Assignments on Lumped Process Models When Adding New Simplification Assumptions

    Directory of Open Access Journals (Sweden)

    Antonio Belmonte

    2018-04-01

    Full Text Available This paper presents a new algorithm for the resolution of over-constrained lumped process systems, in which the partial differential equations of a continuous time and space model of the system are reduced to ordinary differential equations with a finite number of parameters, and the model equations outnumber the unknown model variables. Our proposal aims at the study and improvement of the algorithm proposed by Hangos-Szerkenyi-Tuza. The new algorithm improves the computational cost and solves some of the internal problems of the aforementioned algorithm in its original formulation. The proposed algorithm is based on parameter relaxation and can be modified easily. It retains the necessary information about the lumped process system to reduce the time cost after changes are introduced during system formulation. It also allows adjustment of system formulations whose differential index changes between simulations.

  13. Simplifications of Einstein supergravity

    International Nuclear Information System (INIS)

    Ferrara, S.; van Nieuwenhuizen, P.

    1979-01-01

    Using a new symmetry of the Einstein supergravity action and defining a new spin connection, the axial-vector auxiliary field cancels in the gauge action and in the gauge algebra. This explains why in some models a first-order formalism with minimal coupling of the spin connection and tensor calculus agree, while in other models only the tensor calculus gives the correct result but torsion does not

  14. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  15. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  16. Multiscale modelling for tokamak pedestals

    Science.gov (United States)

    Abel, I. G.

    2018-04-01

    Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics, or an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius and the low plasma β (the ratio of thermal to magnetic pressures) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.

  17. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) formulating the fundamental governing equations of the process based on energy and material balances and thermodynamic principles; (2) selecting input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) representing those variables that exist in the original equations but are not measurable as simple functions of the selected I/Os or constants; (4) obtaining a single equation which correlates system inputs and outputs; and (5) identifying unknown parameters by linear or nonlinear least-squares methods, as sketched below. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which significantly reduces the computational burden and increases the prediction accuracy. The model is verified against experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
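
    Step (5) of this recipe is ordinary parameter fitting: once a single correlating equation links the measured inputs to the outputs, the unknown lumped parameters are found by least squares. A generic sketch follows; the correlating form and the data are hypothetical, not the paper's equation.

```python
import numpy as np
from scipy.optimize import least_squares

# Step (5): identify unknown lumped parameters of a single I/O equation
# from plant data. The correlating form below is hypothetical.
rng = np.random.default_rng(2)
m_dot = rng.uniform(0.1, 1.0, 100)        # measured refrigerant flow
dT = rng.uniform(5.0, 20.0, 100)          # measured temperature difference
Q_meas = 3.5 * m_dot**0.8 * dT + rng.normal(0, 0.5, 100)  # measured duty

def residuals(theta):
    a, b = theta                          # unknown lumped parameters
    return a * m_dot**b * dT - Q_meas     # model prediction minus data

fit = least_squares(residuals, x0=[1.0, 1.0])
print("identified parameters a, b:", fit.x)
```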

  18. A novel approach to the experimental study on methane/steam reforming kinetics using the Orthogonal Least Squares method

    Science.gov (United States)

    Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.

    2014-09-01

    For a mathematical model based on the results of physical measurements, it becomes possible to determine the influence of those measurements on the final solution and its accuracy. In classical approaches, however, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. Previously published kinetics of the reforming process are divergent among themselves. To obtain the most probable values of the kinetic parameters and to enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method incorporates all the experimental results into the mathematical model, which becomes internally contradictory (overdetermined), as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all the variables in the system. In this paper, the reaction rate was evaluated after its pre-determination by a preliminary calculation based on the experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
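
    The essential GLS step can be shown for the linear case: with more equations than unknowns and a weight matrix built from the measurement uncertainties, the estimator and its covariance follow in closed form. The sketch below is the textbook construction, not the authors' full nonlinear procedure.

```python
import numpy as np

# Textbook linear GLS: overdetermined system y = X theta + e, Cov(e) = C.
# Weighting by W = C^{-1} yields the most probable parameters and their
# uncertainty. (Sketch of the principle, not the paper's full procedure.)
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(20, 2))        # 20 equations, 2 unknowns
sigma = rng.uniform(0.05, 0.2, 20)         # per-measurement std. deviation
y = X @ np.array([2.0, -1.0]) + rng.normal(0, sigma)

W = np.diag(1.0 / sigma**2)                # inverse-covariance weights
A = X.T @ W @ X
theta = np.linalg.solve(A, X.T @ W @ y)    # GLS estimate
cov_theta = np.linalg.inv(A)               # parameter covariance matrix

print("theta  :", theta)
print("1-sigma:", np.sqrt(np.diag(cov_theta)))
```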

  19. Mathematical modeling in biology: A critical assessment

    Energy Technology Data Exchange (ETDEWEB)

    Buiatti, M. [Florence, Univ. (Italy). Dipt. di Biologia Animale e Genetica

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematising biological objects and processes, derived from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning is given against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.). Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  20. Mathematical modeling in biology: A critical assessment

    International Nuclear Information System (INIS)

    Buiatti, M.

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematising biological objects and processes, derived from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning is given against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.). Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples

  1. Computerized models : tools for assessing the future of complex systems?

    NARCIS (Netherlands)

    Ittersum, van M.K.; Sterk, B.

    2015-01-01

    Models are commonly used to make decisions. At some point all of us will have employed a mental model, that is, a simplification of reality, in an everyday situation. For instance, when we want to make the best decision for the environment and consider whether to buy our vegetables in a large

  2. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    Science.gov (United States)

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with the methods for the evaluation of the hydroelastic interactions that appear during the violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, choice was made of semi-analytical modelling for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  3. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  4. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  5. Assimilation of stratospheric ozone in the chemical transport model STRATAQ

    Directory of Open Access Journals (Sweden)

    B. Grassi

    2004-09-01

    Full Text Available We describe a sequential assimilation approach useful for assimilating tracer measurements into a three-dimensional chemical transport model (CTM) of the stratosphere. The numerical code, developed largely according to Kha00, uses parameterizations and simplifications allowing assimilation of sparse observations and the simultaneous evaluation of analysis errors, with reasonable computational requirements. Assimilation parameters are set by using χ² and OmF (Observation minus Forecast) statistics. The CTM used here is a high-resolution three-dimensional model. It includes a detailed chemical package and is driven by UKMO (United Kingdom Meteorological Office) analyses. We illustrate the method using assimilation of Upper Atmosphere Research Satellite/Microwave Limb Sounder (UARS/MLS) ozone observations for three weeks during the 1996 antarctic spring. The comparison of results from the simulations with TOMS (Total Ozone Mapping Spectrometer) measurements shows improved total ozone fields due to assimilation of MLS observations. Moreover, the assimilation gives indications of a possible model weakness in reproducing polar ozone values during springtime.

  7. Karst Aquifer Recharge: A Case History of over Simplification from the Uley South Basin, South Australia

    Directory of Open Access Journals (Sweden)

    Nara Somaratne

    2015-02-01

    Full Text Available The article “Karst aquifer recharge: Comments on ‘Characteristics of Point Recharge in Karst Aquifers’, by Adrian D. Werner, 2014, Water 6, doi:10.3390/w6123727” misrepresents some parts of Somaratne [1]. The description of the Uley South Quaternary Limestone (QL) as unconsolidated or poorly consolidated aeolianite sediments, with the presence of well-mixed groundwater in Uley South [2], appears unsubstantiated. Examination of 98 lithological descriptions with corresponding drillers’ logs shows only two wells containing bands of unconsolidated sediments. In the Uley South basin, about 70% of the salinity profiles obtained by electrical conductivity (EC) logging of monitoring wells show stratification. The central and north-central areas of the basin receive leakage from the Tertiary Sand (TS) aquifer, thereby influencing QL groundwater characteristics such as chemistry, age and isotope composition. The presence of conduit pathways is evident in salinity profiles taken away from TS-water-affected areas. Aquifer parameters derived from pumping tests show strong heterogeneity, a typical characteristic of karst aquifers. Uley South QL aquifer recharge is derived from three sources: diffuse recharge, point recharge from sinkholes and continuous leakage of TS water. This limits the application of recharge estimation methods such as the conventional chloride mass balance (CMB), as the basic premise of the CMB is violated. The conventional CMB is not suitable for accounting for the chloride mass balance in groundwater systems displaying an extreme range of chloride concentrations and complex mixing [3]. Oversimplification of karst aquifer systems to suit the application of the conventional CMB or 1-D unsaturated modelling, as described in Werner [2], is not a suitable use of these recharge estimation methods.
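
    For reference, the conventional chloride mass balance referred to here estimates recharge from the ratio of the chloride flux delivered by precipitation to the chloride concentration in groundwater, which is why a single, well-mixed groundwater chloride concentration is a precondition:

```latex
R = \frac{P \,\mathrm{Cl}_{P}}{\mathrm{Cl}_{gw}}
```

    where R is the recharge rate, P the precipitation rate, Cl_P the chloride concentration in (effective) precipitation and Cl_gw the chloride concentration in groundwater.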

  8. Modeling as a Decision-Making Process

    Science.gov (United States)

    Bleiler-Baxter, Sarah K.; Stephens, D. Christopher; Baxter, Wesley A.; Barlow, Angela T.

    2017-01-01

    The goal in this article is to support teachers in better understanding what it means to model with mathematics by focusing on three key decision-making processes: Simplification, Relationship Mapping, and Situation Analysis. The authors use the Theme Park task to help teachers develop a vision of how students engage in these three decision-making…

  9. A systematic iterative approach to the equations of low type

    International Nuclear Information System (INIS)

    Znojil, M.

    1987-01-01

    Nonlinear singular integral equations of the Low type appear in the description of the π-N scattering amplitude at relativistic energies. The standard iterative solution diverges and does not give sufficiently exact results even when the Padé approximation is used. A new approach is proposed. Its essence lies in a repeated formal simplification of the equation, accompanied by a representation of the simplified amplitude in a generalized continued-fraction form. A simple example demonstrates that the new method improves the convergence of the previous approach and essentially expands its region of convergence. On the other hand, its non-equivalence to the more complicated Newton-Kantorovich method is shown. In future, more realistic applications of the method, one can expect increased reliability of the results

  10. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
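
    The discrete end of the modelling spectrum surveyed here can be made concrete with a minimal synchronous Boolean network, where each gene is on or off and is updated by a logic rule. The three-gene rules below are invented for illustration; this is not the Finite State Linear Model itself.

```python
from itertools import product

# Minimal synchronous Boolean network with three genes (rules invented
# for illustration; not the Finite State Linear Model of the review).
def update(state):
    a, b, c = state
    return (
        b and not c,   # A is activated by B and repressed by C
        a,             # B follows A
        a or b,        # C is activated by either A or B
    )

# Exhaustively map the state space to find the attractors of the dynamics.
for s0 in product([False, True], repeat=3):
    seen, s = [], s0
    while s not in seen:
        seen.append(s)
        s = update(s)
    print(s0, "-> cycle entered at", s)
```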

  11. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    Science.gov (United States)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-02-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri, η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, but is less severe in unstable than in stable cases.
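
    The collapse of Ri and η_θ into z/L rests on the Monin-Obukhov definition of the stability parameter, quoted here for reference (the Obukhov length is written L_O to avoid a clash with the turbulence length scale L above):

```latex
\zeta = \frac{z}{L_{O}}, \qquad
L_{O} = -\frac{u_{*}^{3}\,\overline{\theta}}{\kappa\, g\, \overline{w'\theta'}}
```

    where u_* is the friction velocity, θ̄ a reference potential temperature, κ the von Kármán constant, g the gravitational acceleration and w'θ' the kinematic surface heat flux.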

  12. Modelling of Spherical Gas Bubble Oscillations and Sonoluminescence

    Science.gov (United States)

    Prosperetti, A.; Hao, Y.

    1999-01-01

    The discovery of single-bubble sonoluminescence has led to a renewed interest in the forced radial oscillations of gas bubbles. Many of the more recent studies devoted to this topic have used several simplifications in the modelling, in particular in accounting for liquid compressibility and thermal processes in the bubble. In this paper the significance of these simplifications is explored by contrasting the results of Lohse and co-workers with those of a more detailed model. It is found that, even though there may be little apparent difference between the radius-versus-time behaviour of the bubble as predicted by the two models, quantities such as the spherical stability boundary and the threshold for rectified diffusion are affected in a quantitatively significant way. These effects are a manifestation of the subtle dependence upon dissipative processes of the phase of the radial motion with respect to the driving sound field. The parameter space region where, according to the theory of Lohse and co-workers, sonoluminescence should be observable is recalculated with the new model and is found to be enlarged with respect to the earlier estimate. The dependence of this parameter region on sound frequency is also illustrated.

  13. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    International Nuclear Information System (INIS)

    Davis, K.L.

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and the fission products released during a severe accident transient, as well as large- and small-break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so that an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  14. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Davis, K.L. [ed.; Allison, C.M.; Berna, G.A. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States)] [and others

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and the fission products released during a severe accident transient, as well as large- and small-break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so that an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  15. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt differential-equation or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
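
    A minimal sketch of the hybrid idea, assuming a single chemoattractant species: cells are discrete agents with a simple behavioral rule, while the molecule field is a quantity updated by a finite-difference diffusion equation. The grid size and all parameter values are invented for illustration.

```python
import numpy as np

# Hybrid agent-based sketch: discrete cells + continuous molecule field.
# All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, steps, D, dt = 64, 200, 0.2, 1.0
field = np.zeros((n, n))                  # chemoattractant (quantity)
field[n // 2, n // 2] = 100.0             # point source at grid centre
cells = rng.integers(0, n, size=(50, 2))  # 50 cell agents (positions)

for _ in range(steps):
    # Continuous part: explicit finite-difference diffusion of the field.
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
           + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    field += dt * D * lap
    field[n // 2, n // 2] += 10.0  # source keeps secreting

    # Discrete part: each agent takes a biased step up the local gradient.
    for c in cells:
        x, y = c
        gx = field[(x + 1) % n, y] - field[(x - 1) % n, y]
        gy = field[x, (y + 1) % n] - field[x, (y - 1) % n]
        c[0] = (x + int(np.sign(gx))) % n
        c[1] = (y + int(np.sign(gy))) % n

print("mean distance to source:",
      np.mean(np.hypot(cells[:, 0] - n // 2, cells[:, 1] - n // 2)))
```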

  16. Simplified model for determining local heat flux boundary conditions for slagging wall

    Energy Technology Data Exchange (ETDEWEB)

    Bingzhi Li; Anders Brink; Mikko Hupa [Aabo Akademi University, Turku (Finland). Process Chemistry Centre

    2009-07-15

    In this work, two models for calculating heat transfer through a cooled vertical wall covered with a running slag layer are investigated. The first relies on a discretization of the velocity equation, and the second on an analytical solution. The aim is to find a model that can be used for calculating local heat flux boundary conditions in computational fluid dynamics (CFD) analysis of such processes. Two different cases where molten deposits exist are investigated: the black liquor recovery boiler and the coal gasifier. The results show that the model relying on discretization of the velocity equation is more flexible in handling different temperature-viscosity relations. However, only the model relying on an analytical solution is fast enough for potential use as a CFD submodel. Furthermore, the influence of simplifications to the heat balance in the model is investigated. It is found that simplification of the heat balance can be applied when the radiation heat flux is dominant in the balance. 9 refs., 7 figs., 10 tabs.
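
    To make the kind of heat balance involved concrete, the sketch below evaluates a generic series-resistance estimate of the local heat flux through a slag-covered cooled wall; it is an illustrative simplification, not the paper's model, and every property value is invented.

```python
# Illustrative series-resistance estimate of heat flux through a cooled
# wall covered by a slag layer (a generic sketch, not the paper's model;
# all property values are invented for demonstration).
T_gas, T_cool = 1600.0, 400.0   # K, gas and coolant temperatures
h_gas, h_cool = 300.0, 2000.0   # W/m^2K, convective coefficients
k_slag, d_slag = 1.5, 0.004     # W/mK, m: slag conductivity, thickness
k_wall, d_wall = 40.0, 0.01     # W/mK, m: wall conductivity, thickness

# Series thermal resistances per unit area (m^2 K / W).
R = 1/h_gas + d_slag/k_slag + d_wall/k_wall + 1/h_cool
q = (T_gas - T_cool) / R             # local heat flux, W/m^2
T_slag_surface = T_gas - q / h_gas   # temperature at the slag surface
print(f"q = {q:.0f} W/m^2, slag surface at {T_slag_surface:.0f} K")
```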

  17. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed services.

  18. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly popular and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
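
    As a hedged, self-contained illustration of the two families of approaches compared in such reviews, the sketch below cross-validates a logistic regression (statistical) against a small multilayer perceptron (artificial neural network) on synthetic data; it is a generic example, not a reconstruction of any reviewed study.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a clinical risk data set (illustration only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "statistical (logistic regression)": LogisticRegression(max_iter=1000),
    "ANN (multilayer perceptron)": MLPClassifier(hidden_layer_sizes=(16,),
                                                 max_iter=2000,
                                                 random_state=0),
}
for name, model in models.items():
    # Cross-validated AUC as a simple validation metric.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```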

  19. A model for steady-state HNF combustion

    Energy Technology Data Exchange (ETDEWEB)

    Louwers, J.; Gadiot, G.M.H.J.L. [TNO Prins Maurits Lab., Rijswijk (Netherlands); Brewster, M.Q. [Univ. of Illinois, Urbana, IL (United States); Son, S.F. [Los Alamos National Lab., NM (United States)

    1997-09-01

    A simple model for the combustion of solid monopropellants is presented. The condensed phase is treated by high-activation-energy asymptotics. The gas phase is treated in two limiting cases: high activation energy and low activation energy. This results in a simplification of the gas-phase energy equation, making an (approximate) analytical solution possible. The results of the model are compared with experimental results for Hydrazinium Nitroformate (HNF) combustion.

  20. Heat and water transport in soils and across the soil-atmosphere interface: 1. Theory and different model concepts

    DEFF Research Database (Denmark)

    Vanderborght, Jan; Fetzer, Thomas; Mosthaf, Klaus

    2017-01-01

    Evaporation is an important component of the soil water balance. It is composed of water flow and transport processes in a porous medium that are coupled with heat fluxes and free air flow. This work provides a comprehensive review of model concepts used in different research fields to describe evaporation, proceeding on a theoretical level by identifying the underlying simplifications that are made for the different compartments of the system: porous medium, free flow and their interface, and by discussing how processes not explicitly considered are parameterized. Simplifications can be grouped into three sets depending...

  1. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    Science.gov (United States)

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form, as in system dynamics, thereby removing both the need for mundane derivations for new systems and the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
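
    For context, the governing equation that the Lagrangian kernels embed is the Euler-Lagrange equation; the standard form below, with generalized coordinates q and generalized forces τ, is given for reference and is not quoted from the paper.

```latex
% Euler-Lagrange equation for generalized coordinates q and
% generalized (external) forces \tau; \mathcal{L} = T - V is the Lagrangian.
\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}}
  - \frac{\partial \mathcal{L}}{\partial q} = \tau
```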

  2. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.

  3. Review of design approaches of advanced pressurized LWRs. Report of a technical committee meeting and workshop

    International Nuclear Information System (INIS)

    1996-01-01

    The Technical Committee Meeting and Workshop was devoted to reviewing and discussing differences and commonalities in the various design approaches, with the aim of increasing understanding of the design decisions taken, and a number of general conclusions were drawn. Though many differences in design approaches were found in the presentations, a number of common features could also be identified. These included design approaches to achieve further improvements with respect to safety, design simplification, reduction in cost, incorporation of feedback from operating experience, and control room improvements regarding human factors and digitization. Design approaches to achieve further improvements in safety included consideration of severe accidents in the design process, increased thermal margins and water inventories, longer grace periods, and double containments. Refs, figs and tabs

  4. Review of design approaches of advanced pressurized LWRs. Report of a technical committee meeting and workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-01-01

    The Technical Committee Meeting and Workshop was devoted to reviewing and discussing differences and commonalities in the various design approaches, with the aim of increasing understanding of the design decisions taken, and a number of general conclusions were drawn. Though many differences in design approaches were found in the presentations, a number of common features could also be identified. These included design approaches to achieve further improvements with respect to safety, design simplification, reduction in cost, incorporation of feedback from operating experience, and control room improvements regarding human factors and digitization. Design approaches to achieve further improvements in safety included consideration of severe accidents in the design process, increased thermal margins and water inventories, longer grace periods, and double containments. Refs, figs and tabs.

  5. Cluster model of s- and p-shell ΛΛ hypernuclei

    Indian Academy of Sciences (India)

    simplifications the use of the cluster model for S = −2 systems has given ... constructed from the Nijmegen soft-core NSC97e potential and denoted V^{e1}_{ΛΛ} ... This convergence of results reinforces the confidence in the methodology of all the...

  6. Deep geological isolation of nuclear waste: numerical modeling of repository scale hydrology

    International Nuclear Information System (INIS)

    Dettinger, M.D.

    1980-04-01

    The Scope of Work undertaken covers three main tasks, described as follows: (Task 1) CDM provided consulting services to the University on modeling aspects of the study having to do with transport processes involving the local groundwater system near the repository and the flow of fluids and vapors through the various porous media making up the repository system. (Task 2) CDM reviewed literature related to repository design, concentrating on the effects of the repository geometry, location and other design factors on the flow of fluids within the repository boundaries, drainage from the repository structure, and the eventual transport of radionuclides away from the repository site. (Task 3) CDM, in a joint effort with LLL personnel, identified generic boundary and initial conditions, identified processes to be modeled, and recommended a modeling approach, with suggestions for appropriate simplifications and approximations to the problem and identification of the important parameters necessary to model the processes. This report consists of two chapters and an appendix. The first chapter (Chapter III of the LLL report) presents a detailed description and discussion of the modeling approach developed in this project, its merits and weaknesses, and a brief review of the difficulties anticipated in implementing the approach. The second chapter (Chapter IV of the LLL report) presents a summary of a survey of researchers in the field of repository performance analysis and a discussion of that survey in light of the proposed modeling approach. The appendix is a review of the important physical processes involved in the potential hydrologic transport of radionuclides through, around and away from deep geologic nuclear waste repositories.

  7. Modeling urban fire growth

    International Nuclear Information System (INIS)

    Waterman, T.E.; Takata, A.N.

    1983-01-01

    The IITRI Urban Fire Spread Model, as well as others of similar vintage, was constrained by computer size and running costs, such that many approximations and generalizations were introduced to reduce program complexity and data storage requirements. Simplifications were introduced both in the input data and in the fire growth and spread calculations. Modern computational capabilities offer the means to introduce greater detail and to examine its practical significance for urban fire predictions. Selected portions of the model are described as presently configured, and potential modifications are discussed. A single-tract model is hypothesized which permits the importance of various model details to be assessed, and other model applications are identified.

  8. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    Full Text Available This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using the Tomo-GENESIS software developed at DLR.
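
    A minimal sketch of the segmentation step, assuming scikit-image is available and substituting a Gaussian filter for the BM3D denoiser; the DSM here is synthetic and every parameter is illustrative, so this shows the gradient-plus-watershed idea rather than the paper's exact pipeline.

```python
import numpy as np
from skimage.filters import sobel, gaussian
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic DSM with two "roof" height levels (illustration only).
dsm = np.zeros((128, 128))
dsm[20:60, 20:100] = 10.0     # lower roof segment
dsm[60:110, 40:90] = 16.0     # higher roof segment

smoothed = gaussian(dsm, sigma=2)   # stand-in for BM3D denoising
gradient = sobel(smoothed)          # height-jump (gradient) map

# Markers from local height maxima, then watershed segmentation.
peaks = peak_local_max(smoothed, min_distance=15)
markers = np.zeros_like(dsm, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(gradient, markers)

print("roof segments found:", len(np.unique(labels)))
```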

  9. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    Science.gov (United States)

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

    Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
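
    A hedged sketch of the matrix-algebra idea for a two-pool model reduced to longitudinal magnetization and exchange: the inhomogeneous Bloch-McConnell system dM/dt = A M + b is made homogeneous by augmenting the state with a constant 1, so propagation over any constant-saturation segment is a single matrix exponential. Pool parameters below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.linalg import expm

# Two-pool (free water f, macromolecular m) longitudinal Bloch-McConnell
# system during continuous-wave saturation of the m pool; illustrative
# parameters, not the paper's values.
R1f, R1m = 1.0, 1.0    # 1/s, longitudinal relaxation rates
kfm = 2.0              # 1/s, exchange rate f -> m
F = 0.15               # relative macromolecular pool size
kmf = kfm / F          # detailed balance: back-exchange rate
W = 5.0                # 1/s, saturation rate of the m pool
M0f, M0m = 1.0, F      # equilibrium magnetizations

# dM/dt = A M + b with M = (Mzf, Mzm); augmenting to (Mzf, Mzm, 1)
# makes the affine system linear, so expm propagates it exactly.
A = np.array([[-R1f - kfm,  kmf,            R1f * M0f],
              [ kfm,       -R1m - kmf - W,  R1m * M0m],
              [ 0.0,        0.0,            0.0]])
M = np.array([M0f, M0m, 1.0])   # start at thermal equilibrium
M_sat = expm(A * 2.0) @ M       # state after 2 s of saturation
print(f"Mzf = {M_sat[0]:.3f}, Mzm = {M_sat[1]:.3f}")
```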

  10. The effects of modeling simplifications on craniofacial finite element models: the alveoli (tooth sockets) and periodontal ligaments.

    Science.gov (United States)

    Wood, Sarah A; Strait, David S; Dumont, Elizabeth R; Ross, Callum F; Grosse, Ian R

    2011-07-07

    Several finite element models of a primate cranium were used to investigate the biomechanical effects of the tooth sockets and the material behavior of the periodontal ligament (PDL) on stress and strain patterns associated with feeding. For examining the effect of tooth sockets, the unloaded sockets were modeled as devoid of teeth and PDL, filled with teeth and PDLs, or simply filled with cortical bone. The third premolar on the left side of the cranium was loaded and the PDL was treated as an isotropic, linear elastic material using published values for Young's modulus and Poisson's ratio. The remaining models, along with one of the socket models, were used to determine the effect of the PDL's material behavior on stress and strain distributions under static premolar biting and dynamic tooth loading conditions. Two models (one static and the other dynamic) treated the PDL as cortical bone. The other two models treated it as a ligament with isotropic, linear elastic material properties. Two models treated the PDL as a ligament with hyperelastic properties, and the other two as a ligament with viscoelastic properties. Both behaviors were defined using published stress-strain data obtained from in vitro experiments on porcine ligament specimens. Von Mises stress and strain contour plots indicate that the effects of the sockets and PDL material behavior are local. Results from this study suggest that modeling the sockets and the PDL in finite element analyses of skulls is project dependent and can be ignored if values of stress and strain within the alveolar region are not required. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)

  12. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.

  13. Automatic simplification of systems of reaction-diffusion equations by a posteriori analysis.

    Science.gov (United States)

    Maybank, Philip J; Whiteley, Jonathan P

    2014-02-01

    Many mathematical models in biology and physiology are represented by systems of nonlinear differential equations. In recent years these models have become increasingly complex in order to explain the enormous volume of data now available. A key role of modellers is to determine which components of the model have the greatest effect on a given observed behaviour. An approach for automatically fulfilling this role, based on a posteriori analysis, has recently been developed for nonlinear initial value ordinary differential equations [J.P. Whiteley, Model reduction using a posteriori analysis, Math. Biosci. 225 (2010) 44-52]. In this paper we extend this model reduction technique for application to both steady-state and time-dependent nonlinear reaction-diffusion systems. Exemplar problems drawn from biology are used to demonstrate the applicability of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  15. CFD prediction of mixing in a steam generator mock-up: Comparison between full geometry and porous medium approaches

    International Nuclear Information System (INIS)

    Dehbi, A.; Badreddine, H.

    2013-01-01

    Highlights: • CFD is used to simulate single-phase mixing in a model steam generator. • The motive of the work is to compare the porous media approach with a full-geometry representation of the tubes. • The porous media approach is found to compare favorably with the full representation in steady states. Abstract: In CFD simulations of single-phase flow mixing in a steam generator (SG) during a station blackout severe accident, one is faced with the problem of representing the thousands of SG U-tubes. Typically, simplifications are made to render the problem computationally tractable. In particular, one or a number of tubes are lumped into one volume that is treated as a single porous medium which replicates the pressure loss and heat transfer characteristics of the real tubes. This approach significantly reduces the computational size of the problem and hence the simulation time. In this work, we investigate the adequacy of this approach by performing a series of simulations. We first validate the porous medium approach against results of the 1/7th-scale Westinghouse SG-S3 test. In a second step, we make two separate simulations of the flow in the PSI SG mock-up: one in which the porous medium model is used for the tube bundle, and another in which the full geometry is represented. In all simulations, the Reynolds Stress Model (RSM) of turbulence is used. We show that in steady-state conditions, the porous medium treatment yields results comparable to those of the full geometry representation (temperature distribution, recirculation ratio, hot plume spread, etc.). Hence, the porous medium approach can be extended with a good degree of confidence to model single-phase mixing in the full-scale SG.

  16. Development of a simplified fuel-cladding gap conductance model for nuclear feedback calculation in 16x16 FA

    International Nuclear Information System (INIS)

    Yoo, Jong Sung; Park, Chan Oh; Park, Yong Soo

    1995-01-01

    The accurate determination of the fuel-cladding gap conductance as a function of rod burnup and power level may be a key to the design and safety analysis of a reactor. The incorporation of a sophisticated gap conductance model into a nuclear design code for computing the thermal-hydraulic feedback effect has not been implemented, mainly because of computational inefficiency due to the complicated behavior of the gap conductance. To avoid a time-consuming iteration scheme, the current design model is simplified. The simplified model considers only the heat conduction contribution to the gap conductance. The simplification is made possible by directly relating the gap conductivity to the composition of the constituent gases in the gap and to the fuel-cladding gap size, as obtained from computer simulations of representative power histories. The simplified gap conductance model is applied to various fuel power histories, and the predicted gap conductances are found to agree well with the results of the design model.
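
    For context, a commonly used decomposition of the gap conductance, of which a conduction-only simplification retains just the first term, is sketched below; this generic form is supplied for illustration and is not quoted from the paper.

```latex
% Generic gap-conductance decomposition (illustrative, not the paper's):
% conduction through the gas gap, solid-solid contact, and radiation.
% k_{gas}(T, x_i) depends on temperature and gas composition x_i,
% d_{gap} is the gap width, g_1, g_2 are temperature-jump distances.
h_{gap} =
  \underbrace{\frac{k_{gas}(T, x_i)}{d_{gap} + g_1 + g_2}}_{\text{gas conduction}}
  + h_{contact} + h_{rad}
```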

  17. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic agents are assumed to behave in a boundedly rational way.

  18. A Nonlinear Ship Manoeuvering Model: Identification and adaptive control with experiments for a model ship

    Directory of Open Access Journals (Sweden)

    Roger Skjetne

    2004-01-01

    Full Text Available Complete nonlinear dynamic manoeuvering models of ships, with numerical values, are hard to find in the literature. This paper presents a modeling, identification, and control design where the objective is to manoeuver a ship along desired paths at different velocities. Material from a variety of references has been used to describe the ship model, its difficulties, limitations, and possible simplifications for the purpose of automatic control design. The numerical values of the parameters in the model are identified in towing tests and adaptive manoeuvering experiments for a small ship in a marine control laboratory.

  19. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  20. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
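
    A minimal sketch of the re-initialization rule, with a scalar toy model standing in for the shallow-water model: at the end of each subinterval the reference run is reset to the analysis, and the perturbed run to the analysis plus the current perturbed-minus-reference difference, so drift is removed while the sensitivity signal is kept. The toy dynamics, noise level, and subinterval length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, forcing):
    # Toy surrogate for the model: relax toward a forcing value;
    # the constant 0.02 term plays the role of model error.
    return x + 0.1 * (forcing - x) + 0.02

truth, forcing, dforcing = 10.0, 10.0, 1.0  # perturbed run sees forcing+1
ref, pert = truth, truth
subinterval, total = 20, 200

for t in range(total):
    ref = step(ref, forcing)
    pert = step(pert, forcing + dforcing)
    if (t + 1) % subinterval == 0:
        analysis = truth + rng.normal(0, 0.05)  # noisy analysis data
        pert = analysis + (pert - ref)          # keep sensitivity signal
        ref = analysis                          # reset reference state

print("modeled sensitivity (pert - ref):", round(pert - ref, 3))
```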

  1. A quick, simplified approach to the evaluation of combustion rate from an internal combustion engine indicator diagram

    Directory of Open Access Journals (Sweden)

    Tomić Miroljub V.

    2008-01-01

    Full Text Available In this paper a simplified procedure for the analysis of an internal combustion engine in-cylinder pressure record is presented. The method is very easy to program and provides a quick evaluation of the gas temperature and the rate of combustion. It is based on the approach proposed by Hohenberg and Killman, but enhances it by including the rate of heat transferred to the walls, which was omitted in the original approach. This enables evaluation of the complete rate of heat released by combustion (often designated the “gross heat release rate” or “fuel chemical energy release rate”), not only the rate of heat transferred to the gas (often designated the “net heat release rate”). The accuracy of the method is also analyzed, and it is shown that the errors caused by the simplifications in the model are very small, particularly if the crank angle step is also small. Several practical applications to recorded pressure diagrams taken from both spark-ignition and compression-ignition engines are presented as well.
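
    The single-zone first-law relation underlying such heat-release analyses can be written as below; this standard form is given for reference and is an assumption about, not a quotation of, the paper's exact equations.

```latex
% Standard single-zone first-law heat-release analysis: p is cylinder
% pressure, V cylinder volume, \gamma the ratio of specific heats,
% \theta crank angle, and dQ_w/d\theta the wall heat-transfer rate
% (the term whose omission distinguishes net from gross heat release).
\frac{dQ_{gross}}{d\theta}
  = \frac{\gamma}{\gamma - 1}\, p \frac{dV}{d\theta}
  + \frac{1}{\gamma - 1}\, V \frac{dp}{d\theta}
  + \frac{dQ_w}{d\theta}
```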

  2. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.

  3. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as the Solow model, the Ramsey model, and the Tobin model. But we deal with behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.

  4. Statistical pairwise interaction model of stock market

    Science.gov (United States)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or as agent-based models with rules designed to recover certain empirical behaviors. Here we show that the pairwise model is actually a model statistically consistent with the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
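
    A hedged sketch of the data-driven construction: binarize daily returns to spins, estimate the first two moments, and recover approximate pairwise couplings with a naive mean-field inversion, a standard inverse-Ising shortcut used here purely for illustration (the paper's exact estimation procedure may differ). The returns below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated daily returns for 6 "stocks" (illustration only).
T, N = 2000, 6
common = rng.normal(size=(T, 1))
returns = 0.6 * common + rng.normal(size=(T, N))

s = np.sign(returns)          # binary orientations s_i = +/-1
m = s.mean(axis=0)            # first moments <s_i>
C = np.cov(s, rowvar=False)   # connected pairwise correlations

# Naive mean-field inverse Ising: J ~ -(C^-1) off-diagonal, and fields
# h_i from the self-consistency m_i = tanh(h_i + sum_j J_ij m_j).
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)
h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m

print("mean coupling strength:", J[np.triu_indices(N, 1)].mean().round(3))
```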

  5. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle, and on a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one)

  6. Modeling of hydrogenation reactor of soya oil

    International Nuclear Information System (INIS)

    Sotudeh-Gharebagh, R.; Niknam, L.; Mostoufi, N.

    2008-01-01

    In this paper, the performance of a batch hydrogenation reactor was modeled using hydrodynamic and reaction sub-models. The reaction expressions were obtained from information reported in the literature. Experimental studies were conducted in order to generate the data needed to validate the model. The comparison between the experimental data and the model predictions is quite satisfactory, considering the hydrodynamic limitations and the simplifications made to the reaction scheme. The results of this study could be considered a framework for developing new process equipment and also for soya oil product design for new applications.

  7. Fatigue crack growth spectrum simplification: Facilitation of on-board damage prognosis systems

    Science.gov (United States)

    Adler, Matthew Adam

    2009-12-01

    monitoring and management of aircraft. A spectrum reduction method was proposed and experimentally validated that reduces a variable-amplitude spectrum to a constant-amplitude equivalent. The reduction from a variable-amplitude (VA) spectrum to a constant-amplitude equivalent (CAE) was proposed as a two-part process. Preliminary spectrum reduction is first performed by eliminating those loading events shown to be too small to contribute significantly to fatigue crack growth; this is accomplished by rainflow counting. The next step is to calculate the appropriate equivalent maximum and minimum loads by means of a root-mean-square average. This reduced spectrum defines the CAE and replaces the original spectrum. The simplified model was experimentally shown to produce approximately the same fatigue crack growth as the original spectrum. Fatigue crack growth experiments for two dissimilar aircraft spectra across a wide range of stress-intensity levels validated the proposed spectrum reduction procedure. Irrespective of the initial K-level, the constant-amplitude equivalent spectra were always conservative in crack growth rate, by an average of 50% over the full range tested. This corresponds to a maximum 15% overestimation of the driving force ΔK. Given other typical sources of scatter during fatigue crack growth, a consistently 50% conservative prediction of crack growth rate is very satisfying. This is especially attractive given the reduction in cost gained by the simplification. We now have a seamless system that gives an acceptably good approximation of the damage occurring in the aircraft. This contribution is significant because, in a very simple way, we have provided a path to bypass the current infrastructure and ground-support requirements. The decision-making is now much simpler. In managing an entire fleet we now have a workable system whose strength is that it has no need for a massive, isolated computational center. The fidelity of the model
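
    A sketch of the two-step reduction under stated assumptions: the third-party `rainflow` package is assumed available, and its `extract_cycles` generator is assumed to yield range, mean, and count per cycle; small cycles below a threshold are discarded, and root-mean-square equivalent maximum and minimum loads define the constant-amplitude spectrum. The threshold and input history are invented, so this illustrates the idea rather than the dissertation's exact procedure.

```python
import numpy as np
import rainflow  # third-party package, assumed available (pip install rainflow)

# Invented variable-amplitude load history (stress, MPa).
rng = np.random.default_rng(2)
spectrum = 100 + 40 * np.sin(np.linspace(0, 60, 600)) + rng.normal(0, 8, 600)

# Step 1: rainflow count, then drop negligible cycles below a range
# threshold (10 MPa here is purely illustrative).
cycles = [(r, m, c) for r, m, c, _, _ in rainflow.extract_cycles(spectrum)
          if r >= 10.0]

# Step 2: RMS-average the cycle maxima and minima to get the CAE loads.
counts = np.array([c for _, _, c in cycles])
smax = np.array([m + r / 2 for r, m, _ in cycles])
smin = np.array([m - r / 2 for r, m, _ in cycles])
smax_eq = np.sqrt(np.sum(counts * smax**2) / counts.sum())
smin_eq = np.sqrt(np.sum(counts * smin**2) / counts.sum())

print(f"CAE spectrum: S_max = {smax_eq:.1f} MPa, S_min = {smin_eq:.1f} MPa")
```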

  8. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. Common to these approaches is the assumption that a tree can be regarded as a fractal object, i.e., a collection of semi-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. In the paper the different stages of the above-mentioned approaches are described. Experimental data for spruce, a description of the computer system for modeling, and a variant of the computer model are presented. (author). 9 refs, 4 figs

  9. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  10. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  11. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model that can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications...

  12. Exploring the spatial distribution of light interception and photosynthesis of canopies by means of a functional-structural plant model

    NARCIS (Netherlands)

    Sarlikioti, V.; Visser, de P.H.B.; Marcelis, L.F.M.

    2011-01-01

    Background and Aims - At present most process-based models and the majority of three-dimensional models include simplifications of plant architecture that can compromise the accuracy of light interception simulations and, accordingly, canopy photosynthesis. The aim of this paper is to analyse canopy

  13. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
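
    A sketch of the first (screening) step of such a two-step approach, assuming the SALib package is available; the toy model, parameter bounds, and sample size are illustrative, and the second, gPCE-based quantitative step is omitted here.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Toy model with 5 parameters, of which only x0 and x2 matter
# (purely illustrative, not the vascular-access model of the paper).
problem = {
    "num_vars": 5,
    "names": [f"x{i}" for i in range(5)],
    "bounds": [[0.0, 1.0]] * 5,
}

def model(x):
    return 4.0 * x[0] + x[2] ** 2 + 0.01 * x[1]

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(model, 1, X)
res = morris_analyze.analyze(problem, X, Y, num_levels=4)

# Parameters whose elementary-effect mean mu* stands out would enter
# the second, variance-based (gPCE) step; the rest are screened out.
for name, mu_star in zip(problem["names"], res["mu_star"]):
    print(f"{name}: mu* = {mu_star:.3f}")
```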

  14. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  15. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems, and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes.

  16. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would be also too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  17. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as decision support for managing nonpoint source emissions. A common characteristic of this type of model is that it demands many input and state variables, which makes calibration more difficult and increases the effort and cost of implementing any simulation scenario. In this study the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% on both the training and the evaluation sets, allowing a significant simplification in conducting scenario analysis.
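
    A minimal sketch of the meta-modeling idea, with a cheap analytic function standing in for SWAT runs: an MLP regressor is trained on simulator inputs and outputs and then queried for scenario analysis at negligible cost. The layer sizes and the surrogate target are illustrative, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Stand-in for SWAT runs: inputs are scenario settings (e.g. fertilizer
# rate, land-use fractions), output a nutrient load (all illustrative).
X = rng.uniform(0, 1, size=(400, 4))
y = 3 * X[:, 0] + np.sin(3 * X[:, 1]) + X[:, 2] * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
meta = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                    random_state=0).fit(X_tr, y_tr)

print("meta-model R^2 on held-out runs:", round(meta.score(X_te, y_te), 3))
# Scenario analysis now costs one prediction instead of a full SWAT run:
print("scenario output:", meta.predict([[0.2, 0.5, 0.1, 0.9]]))
```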

  18. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  19. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the topic of the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the differential equation to be approximated to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
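
    To illustrate the general idea of condensing a lumped-parameter thermal model (not the paper's specific Matrix Reduction or summation algorithms), the sketch below aggregates the nodes of a linear thermal network with a Boolean grouping matrix and a Galerkin projection:

    ```python
    # Node-aggregation sketch for a linear thermal network C*dT/dt = -K*T + q:
    # group nodes with a Boolean matrix P and project to a smaller model.
    # Generic illustration under assumed values, not the paper's algorithms.
    import numpy as np

    n = 6
    K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # conductive chain, ends grounded
    C = np.eye(n)                                       # heat capacities
    q = np.zeros(n); q[0] = 1.0                         # heat load on node 0

    groups = np.array([0, 0, 0, 1, 1, 1])               # condense 6 nodes into 2
    P = np.zeros((n, 2)); P[np.arange(n), groups] = 1.0

    K_r, C_r, q_r = P.T @ K @ P, P.T @ C @ P, P.T @ q   # reduced 2-node model
    print("reduced steady state:", np.linalg.solve(K_r, q_r))
    T = np.linalg.solve(K, q)                           # full steady state, for comparison
    print("group means of full solution:", T[:3].mean(), T[3:].mean())
    ```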

  20. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes the research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical) system and social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide the interrelations of the systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  1. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
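
    A toy version of the "model as a sequence of linear solves" abstraction, assuming a generic setting rather than the authors' library: the forward sweep records the operators, and the reverse sweep solves with their transposes to recover gradients.

    ```python
    # Toy 'tape' of linear solves: forward model x_i = A_i^{-1}(B_i x_{i-1} + f_i),
    # functional J = g.x_N. Replaying the tape in reverse with transposed
    # operators yields dJ/df_i. Generic sketch, not the authors' library.
    import numpy as np

    rng = np.random.default_rng(1)
    n, steps = 4, 3
    A = [np.eye(n) + 0.1*rng.normal(size=(n, n)) for _ in range(steps)]
    B = [0.5*np.eye(n) for _ in range(steps)]
    f = [rng.normal(size=n) for _ in range(steps)]
    g = rng.normal(size=n)

    x = np.zeros(n)
    for Ai, Bi, fi in zip(A, B, f):        # forward sweep ('taping' A, B)
        x = np.linalg.solve(Ai, Bi @ x + fi)
    J = g @ x

    lam = np.linalg.solve(A[-1].T, g)      # adjoint sweep, in reverse
    dJ_df = [None]*steps
    dJ_df[-1] = lam
    for i in range(steps - 2, -1, -1):
        lam = np.linalg.solve(A[i].T, B[i + 1].T @ lam)
        dJ_df[i] = lam

    eps = 1e-6                             # finite-difference check on f_0[0]
    f0 = [fi.copy() for fi in f]; f0[0][0] += eps
    x = np.zeros(n)
    for Ai, Bi, fi in zip(A, B, f0):
        x = np.linalg.solve(Ai, Bi @ x + fi)
    print(dJ_df[0][0], (g @ x - J)/eps)    # the two numbers should agree
    ```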

  2. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) in their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On the one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies

  3. Advanced and secure architectural EHR approaches.

    Science.gov (United States)

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record advance towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context

  4. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models makes it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model takes its validity from the 93 previous models and the systematic quantitative approach.

  5. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied to the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
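
    A minimal sketch of the adjustment factor idea, under the assumption of a Gaussian likelihood for how well each model matches a single experiment (all numbers are illustrative):

    ```python
    # Adjustment-factor sketch: weight each model's prediction by its posterior
    # probability, then characterise an additive adjustment about the best
    # model's prediction. Gaussian error model and values are assumptions.
    import numpy as np

    y_exp, sigma = 10.0, 0.8                    # experiment and its uncertainty
    y_model = np.array([9.2, 10.5, 11.4])       # predictions of three competing models
    prior = np.full(3, 1/3)

    like = np.exp(-0.5*((y_model - y_exp)/sigma)**2)
    post = prior*like/np.sum(prior*like)        # posterior model probabilities

    best = y_model[np.argmax(post)]
    mean_adj = np.sum(post*(y_model - best))    # mean of additive adjustment factor
    var_adj = np.sum(post*(y_model - (best + mean_adj))**2)
    print(post, best + mean_adj, np.sqrt(var_adj))  # adjusted prediction and spread
    ```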

  6. A linguistic rule-based approach to extract drug-drug interactions from pharmacological documents.

    Science.gov (United States)

    Segura-Bedmar, Isabel; Martínez, Paloma; de Pablo-Sánchez, César

    2011-03-29

    A drug-drug interaction (DDI) occurs when one drug influences the level or activity of another drug. The increasing volume of the scientific literature overwhelms health care professionals trying to keep up to date with all published studies on DDIs. This paper describes a hybrid linguistic approach to DDI extraction that combines shallow parsing and syntactic simplification with pattern matching. Appositions and coordinate structures are interpreted based on shallow syntactic parsing provided by the UMLS MetaMap tool (MMTx). Subsequently, complex and compound sentences are broken down into clauses from which simple sentences are generated by a set of simplification rules. A pharmacist defined a set of domain-specific lexical patterns to capture the most common expressions of DDI in texts. These lexical patterns are matched with the generated sentences in order to extract DDIs. We have performed different experiments to analyze the performance of the different processes. The lexical patterns achieve a reasonable precision (67.30%), but very low recall (14.07%). The inclusion of appositions and coordinate structures helps to improve the recall (25.70%); however, precision is lower (48.69%). The detection of clauses does not improve the performance. Information Extraction (IE) techniques can provide an interesting way of reducing the time spent by health care professionals on reviewing the literature. Nevertheless, no previous approach had been proposed to extract DDIs from texts. To the best of our knowledge, this work proposes the first integral solution for the automatic extraction of DDIs from biomedical texts.
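
    A toy version of the pattern-matching stage, with two hypothetical lexical patterns applied to already-simplified sentences (the real system used pharmacist-defined patterns over MMTx-parsed text):

    ```python
    # Two illustrative DDI lexical patterns matched against simplified sentences.
    import re

    patterns = [
        re.compile(r"(?P<d1>\w+) increases the (?:level|activity) of (?P<d2>\w+)", re.I),
        re.compile(r"(?P<d1>\w+) should not be (?:used|taken) with (?P<d2>\w+)", re.I),
    ]

    sentences = [
        "Ketoconazole increases the level of midazolam.",
        "Aspirin should not be taken with warfarin.",
    ]
    for s in sentences:
        for p in patterns:
            m = p.search(s)
            if m:
                print("DDI:", m.group("d1"), "<->", m.group("d2"))
    ```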

  7. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system is addressed. It is shown how simplifications from the technical domain, when applied to the human components of a system, may give non-conservative estimates of system reliability.
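
    The arithmetic behind this point can be illustrated with textbook constant-failure-rate formulas, assuming exponential lifetimes and an imperfect takeover probability p for the standby unit:

    ```python
    # Standby vs active redundancy for constant failure rate lam over time t:
    # single unit      R = exp(-lam*t)
    # active pair      R = 1 - (1 - exp(-lam*t))**2
    # cold standby     R = exp(-lam*t) * (1 + p*lam*t), p = takeover probability
    # Values are illustrative, not from the paper.
    import math

    lam, t = 0.001, 1000.0
    r1 = math.exp(-lam*t)
    active = 1 - (1 - r1)**2
    for p in (1.0, 0.8, 0.5):
        standby = r1*(1 + p*lam*t)
        print(f"p={p}: standby={standby:.3f}, active pair={active:.3f}, single={r1:.3f}")
    # For low takeover probabilities the standby pair falls below the active
    # pair, echoing the warning that a 'back-up' is not automatically a gain.
    ```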

  8. The NASA Ames Hypersonic Combustor-Model Inlet CFD Simulations and Experimental Comparisons

    Science.gov (United States)

    Venkatapathy, E.; Tokarcik-Polsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    Computations have been performed on a three-dimensional inlet associated with the NASA Ames combustor model for the hypersonic propulsion experiment in the 16-inch shock tunnel. The three-dimensional inlet was designed to deliver nearly two-dimensional combustor inlet flow with the mass flow necessary for combustion. The 16-inch shock tunnel experiment is a short-duration test, with test times of the order of milliseconds. The flow through the inlet is in chemical non-equilibrium. Two test entries have been completed and limited experimental results for the inlet region of the combustor model are available. A number of CFD simulations, with various levels of simplification such as 2-D simulations, 3-D simulations with and without chemical reactions, simulations with and without turbulent conditions, etc., have been performed. These simulations have helped determine the model inlet flow characteristics, the important factors that affect the combustor inlet flow, and the sensitivity of the flow field to these simplifications. In this paper, the CFD modeling of the hypersonic inlet, results from the simulations, and comparisons with available experimental results are presented.

  9. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  10. Sustainability of prevention practices at the workplace: safety, simplification, productivity and effectiveness.

    Science.gov (United States)

    Messineo, A; Cattaruzza, M S; Prestigiacomo, C; Giordano, F; Marsella, L T

    2017-01-01

    Traditional full-time employment has evolved into various types of occupational situations, and, nowadays, new work organization strategies have been developed. Previously overlooked risk factors have emerged, such as traffic accidents while commuting or during work hours, poor work organization, and detrimental lifestyles (like alcohol and substance abuse, although recent statistics seem to show a declining trend for the latter). The global scenario shows greater attention to occupational risks but also a reduced degree of protection. Moreover, the elevated costs, the unacceptably high fatal accident rates in some sectors, the complexity of the prevention systems, the lack of prevention training, the inadequate controls (despite the numerous independent supervisory bodies) and the obsolescence of certain precepts call for a prompt review of the regulatory system. This is especially needed for general simplification, streamlining certification bodies and minimizing references to other provisions in the legislation that make it difficult for Italian and foreign workers to read and understand the rules "without legal interpreters". "New" occupational diseases and occupational risk factors have also been reported in addition to pollution. There are concerns about continued economic and social destabilization, unemployment, commuting, and temporary and precarious contracts. All of these contribute to a lack of wellbeing in the working population. Thus, the timing, duration, and types of prevention training should be carefully assessed, making prevention more appealing by evaluating costs and benefits, with a widespread use of indicators that make appropriate actions for health promotion "visible", thus encouraging awareness. Although reducing prevention is never justified, it should still be "sustainable" economically in order to avoid waste of resources. It is also essential to have laws which are easily and consistently interpreted and to work on the ethics of

  11. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping and bias modify the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces ...

  12. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
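
    A minimal recursive least squares update of the kind used to fit each sub-model, shown on synthetic data (the regressor layout for the EMG-driven handwriting model would differ):

    ```python
    # Recursive Least Squares (RLS): theta tracks the parameters of a linear
    # sub-model as samples stream in. Synthetic regressors and parameters.
    import numpy as np

    rng = np.random.default_rng(2)
    theta_true = np.array([1.5, -0.7])
    P = np.eye(2)*1000.0        # inverse information matrix (large = uninformed)
    theta = np.zeros(2)
    for _ in range(200):
        phi = rng.normal(size=2)                  # regressor vector
        y = phi @ theta_true + 0.05*rng.normal()  # measured output
        k = P @ phi / (1.0 + phi @ P @ phi)       # gain
        theta = theta + k*(y - phi @ theta)       # parameter update
        P = P - np.outer(k, phi @ P)              # covariance update
    print(theta)   # converges towards theta_true
    ```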

  13. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. A traditional linear modeling approach can hardly estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy ...
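
    A sketch of the NNARX structure, assuming scikit-learn and a synthetic first-order plant as a stand-in for the PEMFC: the MLP maps lagged outputs and inputs to the next output.

    ```python
    # NNARX sketch: predict y(t) from past outputs and inputs with an MLP.
    # The 'plant' below is a synthetic stand-in, not a PEMFC model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    N = 1000
    u = rng.uniform(-1, 1, N)                       # input sequence
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = 0.8*y[t-1] + 0.4*np.tanh(u[t-1])     # 'unknown' nonlinear plant

    # second-order NNARX regressors: [y(t-1), y(t-2), u(t-1), u(t-2)] -> y(t)
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    target = y[2:]
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    model.fit(X[:800], target[:800])
    print("one-step-ahead R^2 on held-out data:", model.score(X[800:], target[800:]))
    ```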

  14. High-throughput migration modelling for estimating exposure to chemicals in food packaging in screening and prioritization tools.

    Science.gov (United States)

    Ernstoff, Alexi S; Fantke, Peter; Huang, Lei; Jolliet, Olivier

    2017-11-01

    Specialty software and simplified models are often used to estimate migration of potentially toxic chemicals from packaging into food. Current models, however, are not suitable for emerging applications in decision-support tools, e.g. in Life Cycle Assessment and risk-based screening and prioritization, which require rapid computation of accurate estimates for diverse scenarios. To fulfil this need, we develop an accurate and rapid (high-throughput) model that estimates the fraction of organic chemicals migrating from polymeric packaging materials into foods. Several hundred step-wise simulations optimised the model coefficients to cover a range of user-defined scenarios (e.g. temperature). The developed model, operationalised in a spreadsheet for future dissemination, nearly instantaneously estimates chemical migration, and has improved performance over commonly used model simplifications. When using measured diffusion coefficients the model accurately predicted (R² = 0.9, standard error Se = 0.5) hundreds of empirical data points for various scenarios. Diffusion coefficient modelling, which determines the speed of chemical transfer from package to food, was a major contributor to uncertainty and dramatically decreased model performance (R² = 0.4, Se = 1). In all, this study provides a rapid migration modelling approach to estimate exposure to chemicals in food packaging for emerging screening and prioritization approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
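
    For orientation, a hedged sketch of the underlying physics: the classical Fickian series solution for the fraction migrated from a polymer film into a perfectly mixed food (perfect sink, no partitioning). The paper's model additionally handles partitioning, finite food volume and fitted coefficients.

    ```python
    # Fickian one-sided release from a slab of thickness L into a perfect sink:
    # M_t/M_inf = 1 - sum_n 8/((2n+1)^2 pi^2) * exp(-D (2n+1)^2 pi^2 t / (4 L^2)).
    # D and L below are illustrative, not from the paper.
    import math

    def migrated_fraction(D, L, t, terms=50):
        """Fraction of the initial chemical content released after time t."""
        s = 0.0
        for n in range(terms):
            k = (2*n + 1)*math.pi
            s += 8.0/k**2 * math.exp(-D*k**2*t/(4*L**2))
        return 1.0 - s

    D = 1e-13      # diffusion coefficient, m^2/s
    L = 100e-6     # film thickness, m
    for days in (1, 10, 100):
        print(days, "d:", round(migrated_fraction(D, L, days*86400), 3))
    ```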

  15. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find ... [milestone-table residue: develop computer/mathematical paradigms for evaluation of treatment strategies, months 12-30, 50%; develop pilot clinical trials on basis of animal studies, months 24-36, 60%] ... the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a

  16. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.

  17. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
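
    A compact sketch of the PCE idea for one of the models named above: a geometric Brownian motion at a fixed time expands exactly in probabilists' Hermite polynomials of a standard normal ξ, so PCE moments can be checked against Monte Carlo (parameter values are illustrative):

    ```python
    # PCE of S = S0*exp((mu - sig^2/2)T + sig*sqrt(T)*xi), xi ~ N(0,1):
    # coefficients c_k = S0*exp(a + b^2/2) * b^k / k! in probabilists' Hermite
    # polynomials He_k, so mean = c_0 and var = sum_{k>=1} k! c_k^2.
    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0
    a, b = (mu - 0.5*sig**2)*T, sig*math.sqrt(T)

    K = 8                                          # truncation order
    coef = np.array([S0*math.exp(a + 0.5*b**2)*b**k/math.factorial(k) for k in range(K)])
    pce_mean = coef[0]
    pce_var = sum(math.factorial(k)*coef[k]**2 for k in range(1, K))

    rng = np.random.default_rng(4)
    xi = rng.standard_normal(200_000)
    S_mc = S0*np.exp(a + b*xi)
    print("mean:", pce_mean, S_mc.mean())          # both ~ S0*exp(mu*T)
    print("var :", pce_var, S_mc.var())
    print("surrogate vs exact at xi=1:", hermeval(1.0, coef), S0*math.exp(a + b))
    ```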

  18. A comparative study of finite element methodologies for the prediction of torsional response of bladed rotors

    International Nuclear Information System (INIS)

    Scheepers, R.; Heyns, P. S.

    2016-01-01

    The prevention of torsional vibration-induced fatigue damage to turbo-generators requires determining natural frequencies by either field testing or mathematical modelling. Torsional excitation methods, measurement techniques and mathematical modelling are active fields of research. However, these aspects are mostly considered in isolation and often without experimental verification. The objective of this work is to compare one-dimensional (1D), full three-dimensional (3D) and 3D cyclic-symmetric (3DCS) finite element (FE) methodologies for torsional vibration response. Results are compared to experimental results for a small-scale test rotor. It is concluded that 3D approaches are feasible given current computing technology and require less simplification, with potentially increased accuracy. The accuracy of 1D models may be reduced due to simplifications, but faster solution times are obtained. For high levels of accuracy, model updating using field test results is recommended.

  19. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-03

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
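
    A stripped-down version of an 8760-style CV estimate, assuming synthetic hourly profiles: credit the VG fleet with its average output over the top load hours (ReEDS' actual implementation is more detailed).

    ```python
    # '8760' capacity-value sketch: CV = mean VG output during the top load
    # hours, as a fraction of nameplate. Profiles below are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    hours = np.arange(8760)
    load = 50 + 15*np.sin(2*np.pi*(hours % 24)/24 - 2) + rng.normal(0, 2, 8760)  # GW
    nameplate = 20.0
    solar = nameplate*np.clip(np.sin(2*np.pi*((hours % 24) - 6)/24), 0, None)    # GW

    top = np.argsort(load)[-100:]           # top-100 load hours
    cv = solar[top].mean()/nameplate        # capacity value as fraction of nameplate
    print(f"solar capacity value over top-100 load hours: {cv:.2f}")
    ```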

  20. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation-independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model the mandatory rules for attaining ISO/IEC 27001 conformance, a platform-independent model is derived. Finally, a platform-specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  1. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several model verification tests indicated good model efficiency: a Nash-Sutcliffe coefficient close to 1 (0.98), a high data correlation coefficient (0.98), and low standard errors (0.96). The results revealed that there were significant differences in the concentrations of the state variables in constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
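
    For reference, the Nash-Sutcliffe efficiency used in the verification is simply:

    ```python
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim)**2)/np.sum((obs - obs.mean())**2)

    print(nse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]))  # close to 1
    ```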

  2. Revisited global drift fluid model for linear devices

    International Nuclear Information System (INIS)

    Reiser, Dirk

    2012-01-01

    The problem of energy-conserving global drift fluid simulations is revisited. It is found that for the case of cylindrical plasmas in a homogeneous magnetic field, a straightforward reformulation is possible, avoiding simplifications that lead to energetic inconsistencies. The particular new feature is the rigorous treatment of the polarisation drift by a generalization of the vorticity equation. The resulting set of model equations contains previous formulations as limiting cases and is suitable for efficient numerical techniques. Examples of applications to studies of plasma blobs and their impact on plasma-target interaction are presented. The numerical studies focus on the appearance of plasma blobs and intermittent transport and its consequences for the release of sputtered target materials into the plasma. Intermittent expulsion of particles in the radial direction can be observed, and it is found that although the neutrals released from the target show strong fluctuations in their propagation into the plasma column, the overall effect on time-averaged profiles is negligible for the conditions considered. In addition, the numerical simulations are utilised to perform an a-posteriori assessment of the magnitude of energetic inconsistencies in previously used simplified models. It is found that certain popular approximations, in particular the use of simplified vorticity equations, do not significantly affect energetics. However, popular model simplifications with respect to parallel advection are found to significantly deteriorate model consistency.

  3. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  4. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  5. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1985-01-01

    The authors have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for their flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. They model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, they develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to take into account the fracture effects. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  6. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1984-10-01

    We have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for our flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. We model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, we develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to take fracture effects into account. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  7. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were compared with experimental observations, capturing the governing corrosion-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement.

  8. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
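
    The two decompositions can be contrasted on synthetic data: given a complex optical conductivity σ(ω), the conventional extended Drude analysis extracts 1/τ(ω) and m*(ω)/m at fixed ωp, while the alternative extracts ωp²(ω) and 1/τ(ω) at fixed mass. A sketch in Gaussian-style units, with illustrative numbers:

    ```python
    # Extended Drude:  1/tau(w) = (wp^2/4pi) Re[1/sigma],  m*(w)/m = -(wp^2/4pi w) Im[1/sigma]
    # Alternative:     wp^2(w)  = -4pi w / Im[1/sigma],    1/tau(w) = -w Re[1/sigma]/Im[1/sigma]
    # Applied to a synthetic simple-Drude 'measurement', both recover the inputs.
    import numpy as np

    wp, gamma = 2.0, 0.3
    w = np.linspace(0.05, 3.0, 5)
    sigma = (wp**2/(4*np.pi))/(gamma - 1j*w)          # synthetic conductivity data

    inv = 1.0/sigma
    tau_inv_ext = (wp**2/(4*np.pi))*inv.real          # extended Drude 1/tau(w)
    mass_ratio  = -(wp**2/(4*np.pi*w))*inv.imag       # extended Drude m*(w)/m

    wp2_alt     = -4*np.pi*w/inv.imag                 # alternative: wp^2(w)
    tau_inv_alt = -w*inv.real/inv.imag                # alternative: 1/tau(w)

    print(tau_inv_ext, mass_ratio)                    # recovers gamma and m*/m = 1
    print(np.sqrt(wp2_alt), tau_inv_alt)              # recovers wp and gamma
    ```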

  9. Development of a global aerosol model using a two-dimensional sectional method: 1. Model design

    Science.gov (United States)

    Matsui, H.

    2017-08-01

    This study develops an aerosol module, the Aerosol Two-dimensional bin module for foRmation and Aging Simulation version 2 (ATRAS2), and implements the module into a global climate model, Community Atmosphere Model. The ATRAS2 module uses a two-dimensional (2-D) sectional representation with 12 size bins for particles from 1 nm to 10 μm in dry diameter and 8 black carbon (BC) mixing state bins. The module can explicitly calculate the enhancement of absorption and cloud condensation nuclei activity of BC-containing particles by aging processes. The ATRAS2 module is an extension of a 2-D sectional aerosol module ATRAS used in our previous studies within a framework of a regional three-dimensional model. Compared with ATRAS, the computational cost of the aerosol module is reduced by more than a factor of 10 by simplifying the treatment of aerosol processes and 2-D sectional representation, while maintaining good accuracy of aerosol parameters in the simulations. Aerosol processes are simplified for condensation of sulfate, ammonium, and nitrate, organic aerosol formation, coagulation, and new particle formation processes, and box model simulations show that these simplifications do not substantially change the predicted aerosol number and mass concentrations and their mixing states. The 2-D sectional representation is simplified (the number of advected species is reduced) primarily by the treatment of chemical compositions using two interactive bin representations. The simplifications do not change the accuracy of global aerosol simulations. In part 2, comparisons with measurements and the results focused on aerosol processes such as BC aging processes are shown.

  10. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss their experience in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to its programming in C++, a standard object-oriented programming language, is detailed. The authors then discuss the main differences between writing the object-oriented program and a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on their experience in building energy-economic models with procedure-oriented approaches and languages.

  11. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...

  12. Integration of a modeling task in water policy design - Example of a prospective scenarios approach on an agricultural catchment

    Science.gov (United States)

    Moreau, P.; Raimbault, T.; Durand, P.; Gascuel-Odoux, C.; Salmon-Monviola, J.; Masson, V.; Cordier, M. O.

    2010-05-01

    To meet the objectives of the Water Framework Directive in terms of nitrate pollution of surface water, numerous mitigation options have been proposed. To support stakeholders' decisions prior to the implementation of regulations, scenario analysis by models can be used as a prospective approach. The work developed an extensive virtual experiment design from an initial, basic requirement expressed by catchment managers. Specific objectives were (1) to test the ability of a distributed model (TNT2) to simulate hydrology and hydrochemistry on a watershed with a high diversity of production systems, (2) to analyse a large set of scenarios and their effects on water quality and (3) to propose an effective mode of communication between research scientists and catchment managers. In accordance with the catchment managers' requirement, the scenarios focus on winter catch crops (CC). Five conditions for establishing CC in rotations, three CC durations and two CC harvest modes were tested. CC are favoured by managers because they are simple to implement in fields and have a relatively low influence on farm strategy. Calibration and validation periods were run from 1998 to 2007 and the scenario simulation period from 2007 to 2020. Results have been provided, for each scenario, by compartment (soil, atmosphere, plant uptake, water) but especially in the form of a nitrogen mass balance at the catchment scale. The scenarios were ranked by integrating the positive and negative effects of each measure. This three-step process - translation of a simple stakeholder question into an extensive set of scenarios (complexification), modeling and data analysis, and restitution to catchment managers in a simple integrative form (simplification) - gives an operational tool for decision support. In terms of water quality, the best improvements in nitrate concentration at the outlet reached a decrease of 0.8 mg L-1 compared to a "business as usual" scenario and were achieved by exporting the CC residue, by extending CC

  13. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

    Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and the virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model-based controllers. A dynamic analysis has then been performed in order to point out possible model simplifications that could lead to a more efficient run-time implementation.

  14. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  15. Quantitative modeling of clinical, cellular, and extracellular matrix variables suggest prognostic indicators in cancer: a model in neuroblastoma.

    Science.gov (United States)

    Tadeo, Irene; Piqueras, Marta; Montaner, David; Villamón, Eva; Berbegall, Ana P; Cañete, Adela; Navarro, Samuel; Noguera, Rosa

    2014-02-01

    Risk classification and treatment stratification for cancer patients is restricted by our incomplete picture of the complex and unknown interactions between the patient's organism and tumor tissues (transformed cells supported by tumor stroma). Moreover, all clinical factors and laboratory studies used to indicate treatment effectiveness and outcomes are by their nature a simplification of the biological system of cancer, and cannot yet incorporate all possible prognostic indicators. A multiparametric analysis on 184 tumor cylinders was performed. To highlight the benefit of integrating digitized medical imaging into this field, we present the results of computational studies carried out on quantitative measurements, taken from stromal and cancer cells and various extracellular matrix fibers interpenetrated by glycosaminoglycans, and eight current approaches to risk stratification systems in patients with primary and nonprimary neuroblastoma. New tumor tissue indicators from both fields, the cellular and the extracellular elements, emerge as reliable prognostic markers for risk stratification and could be used as molecular targets of specific therapies. The key to dealing with personalized therapy lies in the mathematical modeling. The use of bioinformatics in patient-tumor-microenvironment data management allows a predictive model in neuroblastoma.

  16. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    The objective is to obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  17. Analytic nuclear scattering theories

    International Nuclear Information System (INIS)

    Di Marzio, F.; University of Melbourne, Parkville, VIC

    1999-01-01

    A wide range of nuclear reactions is examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large-basis models of nuclear structure to be performed.

  18. A sensitivity study of the thermomechanical far-field model of Yucca Mountain

    International Nuclear Information System (INIS)

    Brandshaug, T.

    1991-04-01

    A sensitivity study has been conducted investigating the predicted thermal and mechanical behavior of the far-field model of a proposed nuclear waste repository at Yucca Mountain. The model input parameters and phenomena that have been investigated include areal power density, thermal conductivity, specific heat capacity, material density, pore water boiling, stratigraphic and topographic simplifications, Young's modulus, Poisson's ratio, coefficient of thermal expansion, in situ stress, rock matrix cohesion, rock matrix angle of internal friction, rock joint cohesion, and rock joint angle of internal friction. Using the range in values currently associated with these parameters, predictions were obtained for rock temperatures, stresses, matrix failure, and joint activity throughout the far-field model. Results show that the range considered for the areal power density has the most significant effect on the predicted rock temperatures. The range considered for the in situ stress has the most significant effect on the prediction of rock stresses and factors-of-safety for the matrix and joints. Predictions of matrix and joint factors-of-safety are also influenced significantly by the use of stratigraphic and topographic simplifications. 16 refs., 75 figs., 13 tabs

  19. Modeling strategy of the source and sink terms in the two-group interfacial area transport equation

    International Nuclear Information System (INIS)

    Ishii, Mamoru; Sun Xiaodong; Kim, Seungjin

    2003-01-01

    This paper presents the general strategy for modeling the source and sink terms in the two-group interfacial area transport equation. The two-group transport equation is applicable in bubbly, cap bubbly, slug, and churn-turbulent flow regimes to predict the change of the interfacial area concentration. This dynamic approach has the advantage of flow-regime independence over the conventional empirical correlation approach for the interfacial area concentration in applications with the two-fluid model. In the two-group interfacial area transport equation, bubbles are categorized into two groups: spherical/distorted bubbles as Group 1 and cap/slug/churn-turbulent bubbles as Group 2. Thus, two sets of equations are used to describe the generation and destruction rates of bubble number density, void fraction, and interfacial area concentration for the two groups of bubbles due to bubble expansion and compression, coalescence and disintegration, and phase change. Based upon a detailed literature review of the research on bubble interactions, five major bubble interaction mechanisms are identified for the gas-liquid two-phase flow of interest. A systematic integral approach, in which the significant variations of bubble volume and shape are accounted for, is suggested for the modeling of the two-group bubble interactions. To obtain analytical forms for the various bubble interactions, a simplification is made for the bubble number density distribution function.

  20. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes

  1. A simplified lumped model for the optimization of post-buckled beam architecture wideband generator

    Science.gov (United States)

    Liu, Weiqun; Formosa, Fabien; Badel, Adrien; Hu, Guangdi

    2017-11-01

    Buckled-beam structures are a classical kind of bistable energy harvester that has attracted growing interest because of its capability to scavenge energy over a large frequency band in comparison with linear generators. The usual modeling approach uses the Galerkin mode-discretization method, which is relatively complex, while the simplification to a single-mode solution lacks accuracy. The design rests on optimizing the features of the potential energy function to finally define the physical and geometrical parameters. Therefore, in this paper, a simple lumped model is proposed with an explicit relationship between the potential shape and the parameters, to allow efficient design of bistable-beam-based generators. The accuracy of the approximate model is studied and the effectiveness of its application analyzed. Moreover, an important fact is found: the bending stiffness has little influence on the potential shape at low buckling levels once the sectional area is determined. This feature extends the applicable range of the model by permitting designs with a high moment of inertia. Numerical investigations demonstrate that the proposed model is a simple and reliable design tool. An optimization example using the proposed model is demonstrated with satisfactory performance.
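
    The heart of any such lumped bistable model is an explicit double-well potential whose shape parameters serve as design variables. A minimal sketch follows, assuming a standard quartic form U(x) = (k/4)x^4 - (k/2)x0^2 x^2 rather than the paper's specific expressions; k and x0 are hypothetical lumped values.

        import numpy as np

        # Lumped double-well potential for a post-buckled beam harvester:
        # U(x) = (k/4) x^4 - (k/2) x0^2 x^2 has stable wells at x = +/- x0.
        # k and x0 are hypothetical lumped parameters, not values from the paper.
        k, x0 = 2.0e3, 5.0e-3  # stiffness coefficient [N/m^3], well position [m]

        def potential(x):
            return 0.25 * k * x**4 - 0.5 * k * x0**2 * x**2

        # The barrier height U(0) - U(x0) controls interwell (snap-through)
        # motion, the feature a potential-shape optimization would target.
        barrier = potential(0.0) - potential(x0)
        x = np.linspace(-2 * x0, 2 * x0, 5)
        print("barrier height [J]:", barrier)
        print("U(x) samples:", potential(x))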

  2. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder based additive manufacturing technique suitable for producing high precision metal parts. However, distortions and residual stresses within products arise during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of SLM process parameters, which requires a reliable thermal modelling of the SLM process. Consequently, a key question arises, namely how to describe the laser source appropriately. Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history prediction of the thermal model. The present work provides guidelines on appropriate representation of the laser source in the thermal modelling of the SLM process.
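
    The point source is the simplest of the three representations compared here; a classical Rosenthal-type steady solution for a point source moving over a semi-infinite solid can be written in closed form. The sketch below evaluates it with rough stainless-steel-like property values, which are assumptions rather than the paper's data; note how the point-source temperature diverges near the origin, one motivation for surface and volumetric alternatives.

        import math

        # Rosenthal-type steady solution for a moving point heat source on a
        # semi-infinite solid. All material and process values are illustrative.
        Q = 100.0        # absorbed laser power [W]
        v = 0.5          # scan speed [m/s]
        k = 20.0         # thermal conductivity [W/(m K)]
        alpha = 5e-6     # thermal diffusivity [m^2/s]
        T0 = 293.0       # ambient/preheat temperature [K]

        def temperature(xi, y, z):
            """Temperature at (xi, y, z); xi is measured along the scan
            direction in the frame moving with the source."""
            R = math.sqrt(xi * xi + y * y + z * z)
            return T0 + Q / (2.0 * math.pi * k * R) * \
                math.exp(-v * (xi + R) / (2.0 * alpha))

        # 200 um behind the source on the surface; the 1/R singularity makes
        # near-field values unphysically hot, a known point-source limitation.
        print(f"T = {temperature(-200e-6, 0.0, 0.0):.0f} K")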

  3. Extended Hubbard models for ultracold atoms in optical lattices

    International Nuclear Information System (INIS)

    Juergensen, Ole

    2015-01-01

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.

  4. Extended Hubbard models for ultracold atoms in optical lattices

    Energy Technology Data Exchange (ETDEWEB)

    Juergensen, Ole

    2015-06-05

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.
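
    To make the offsite-interaction term concrete, the following sketch exactly diagonalizes a two-site extended Bose-Hubbard Hamiltonian H = -J(b1+b2 + h.c.) + (U/2) sum n(n-1) + V n1 n2 at fixed particle number. It is a toy instance of the exact-diagonalization method mentioned in the abstract; the couplings and system size are illustrative and far below anything studied in the thesis.

        import numpy as np

        # Two-site extended Bose-Hubbard model at fixed total particle number N.
        # V is the nearest-neighbour (offsite) interaction discussed above.
        J, U, V, N = 1.0, 4.0, 1.0, 4

        basis = [(n1, N - n1) for n1 in range(N + 1)]   # Fock states |n1, n2>
        index = {state: i for i, state in enumerate(basis)}
        H = np.zeros((len(basis), len(basis)))

        for (n1, n2), i in index.items():
            # Diagonal: onsite interaction U and offsite interaction V.
            H[i, i] = 0.5 * U * (n1 * (n1 - 1) + n2 * (n2 - 1)) + V * n1 * n2
            # Off-diagonal: hopping b1+ b2 and its Hermitian conjugate.
            if n2 > 0:
                j = index[(n1 + 1, n2 - 1)]
                t = -J * np.sqrt((n1 + 1) * n2)
                H[j, i] += t
                H[i, j] += t

        energies = np.linalg.eigvalsh(H)
        print("ground-state energy:", energies[0])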

  5. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
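
    The calibration itself is typically a latent-variable regression from spectra to composition. The sketch below fits a partial least squares model on synthetic spectra standing in for the compressed small-scale mixtures; the band shape, noise level, and component count are illustrative assumptions only.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic NIR-like spectra: one Gaussian analyte band whose height
        # scales with concentration, plus measurement noise.
        rng = np.random.default_rng(0)
        n_samples, n_wavelengths = 40, 200
        concentration = rng.uniform(0.05, 0.30, n_samples)      # w/w fraction
        peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 10.0) ** 2)
        spectra = (concentration[:, None] * peak
                   + 0.02 * rng.standard_normal((n_samples, n_wavelengths)))

        # Cross-validated PLS calibration from spectra to concentration.
        pls = PLSRegression(n_components=3)
        scores = cross_val_score(pls, spectra, concentration,
                                 scoring="neg_root_mean_squared_error", cv=5)
        print(f"cross-validated RMSE: {-scores.mean():.4f}")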

  6. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  7. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANG Bin; SHI Wenchang; SUN Yufang; SUN Bo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model within the role-based access control (RBAC) model is proposed. A highly feasible enforcement construction is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, the corresponding relations among users, transformation procedures, and constrained data items are strengthened, and the concepts of task and subtask are introduced to enhance the support for least privilege. The proposed approach widens the applicability of RBAC. A theoretical foundation is offered for adopting the Clark-Wilson model in an RBAC system at small cost, to meet the requirements of multi-policy support and policy flexibility.

  8. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  9. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    International Nuclear Information System (INIS)

    Klos, Richard

    2008-03-01

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
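
    For a fixed landscape configuration, the compartment structure described above reduces to a linear system dN/dt = K N + s, with first-order transfer coefficients between compartments plus radioactive decay. A minimal sketch follows; the compartments, rates, and source term are hypothetical placeholders rather than GEMA's actual modules.

        import numpy as np

        # GEMA-style compartment sketch: radionuclide inventories evolve by
        # first-order transfers between surface compartments plus decay.
        compartments = ["soil", "stream", "lake_sediment"]
        k = {("soil", "stream"): 0.05,            # transfer coefficients [1/yr]
             ("stream", "lake_sediment"): 0.20}
        decay = 0.01                               # decay constant [1/yr]
        source = np.array([1.0, 0.0, 0.0])         # release into soil [Bq/yr]

        n = len(compartments)
        K = -decay * np.eye(n)
        for (src, dst), rate in k.items():
            i, j = compartments.index(src), compartments.index(dst)
            K[i, i] -= rate   # loss from the donor compartment
            K[j, i] += rate   # gain in the receiving compartment

        # Explicit Euler integration over 100 years.
        N, dt = np.zeros(n), 0.1
        for _ in range(1000):
            N += dt * (K @ N + source)
        print(dict(zip(compartments, N.round(2))))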

  10. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches then represents an important challenge in advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated

  11. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
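
    The likelihood computation underlying any such HMM-based service is the forward recursion. The sketch below evaluates the log-likelihood of a sequence of discretized popularity observations under a two-state chain; the states, emission alphabet, and matrices are illustrative, not the PHMM's heterogeneous observation model.

        import numpy as np

        # Scaled forward algorithm for an HMM over discretized popularity
        # observations (e.g., daily chart-rank bands). All values illustrative.
        A = np.array([[0.8, 0.2],       # state transitions: "rising" / "fading"
                      [0.3, 0.7]])
        B = np.array([[0.6, 0.3, 0.1],  # emission: P(rank band | state)
                      [0.1, 0.3, 0.6]])
        pi = np.array([0.5, 0.5])       # initial state distribution

        def log_likelihood(obs):
            """log P(observation sequence) via the scaled forward recursion."""
            alpha = pi * B[:, obs[0]]
            ll = np.log(alpha.sum()); alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                ll += np.log(alpha.sum()); alpha /= alpha.sum()
            return ll

        print("log-likelihood:", log_likelihood([0, 0, 1, 2, 2]))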

  12. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  13. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body

  14. 3D City Models with Different Temporal Characteristica

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    traditional static city models and those models that are built for real-time applications. The difference between the city models applies both to the spatial modelling and also when using the phenomenon time in the models. If the city models are used in visualizations without any variation in time or when...-built dynamic or a model suitable for visualization in real time, it is required that modelling is done with level-of-detail and simplification of both the aesthetics and the geometry. If a temporal characteristic is combined with a visual characteristic, the situation can easily be seen as a t/v matrix where t... is the temporal characteristic or representation and v is the visual characteristic or representation...

  15. A systemic approach to modelling of radiobiological effects

    International Nuclear Information System (INIS)

    Obaturov, G.M.

    1988-01-01

    Basic principles of the systemic approach to modelling of the radiobiological effects at different levels of cell organization have been formulated. The methodology is proposed for theoretical modelling of the effects at these levels

  16. Queueing networks a fundamental approach

    CERN Document Server

    Dijk, Nico

    2011-01-01

    This handbook aims to highlight fundamental, methodological and computational aspects of networks of queues to provide insights and to unify results that can be applied in a more general manner. The handbook is organized into five parts: Part 1 considers exact analytical results such as of product form type. Topics include characterization of product forms by physical balance concepts and simple traffic flow equations, classes of service and queue disciplines that allow a product form, a unified description of product forms for discrete time queueing networks, insights for insensitivity, and aggregation and decomposition results that allow subnetworks to be aggregated into single nodes to reduce computational burden. Part 2 looks at monotonicity and comparison results such as for computational simplification by either of two approaches: stochastic monotonicity and ordering results based on the ordering of the process generators, and comparison results and explicit error bounds based on an underlying Markov r...

  17. Development of ITER 3D neutronics model and nuclear analyses

    International Nuclear Information System (INIS)

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model used for nuclear analyses be updated. Nevertheless, modeling a complex geometry with MCNP by hand is a very time-consuming task. An efficient alternative is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): The first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model following both approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully converted to an MCNP neutronics model a full ITER CAD model which was simplified and issued by the ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses were performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction to an advanced version of MCAM. (authors)

  18. Implementation of Push Recovery Strategy Using Triple Linear Inverted Pendulum Model in “T-FloW” Humanoid Robot

    Science.gov (United States)

    Dimas Pristovani, R.; Raden Sanggar, D.; Dadet, Pramadihanto.

    2018-04-01

    Push recovery is a human behavior used to defend the body against an external force in any environment. This paper describes a push recovery strategy that uses a MIMO decoupled control system method. The dynamics are represented by a quasi-dynamic system based on the triple linear inverted pendulum model (TLIPM). The analysis of the TLIPM uses the zero moment point (ZMP) calculation from the ZMP simplification in previous research. By using this simplification of the dynamics, the control design can be reduced to three serial SISO loops with known and uncertain disturbance models for each inverted pendulum. Each pendulum has a different plan to damp the effect of the external force. In this experiment, a PID controller (closed loop) is used to shape the damping characteristic. The experimental results show that the success rate when using the push recovery control strategy (closed-loop control) is about 85.71%, while without the push recovery control strategy (open-loop control) it is about 28.57%.
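
    The ZMP bookkeeping the strategy is built on can be seen in a single linear inverted pendulum, the unit the TLIPM stacks three times. A minimal sketch follows, with an illustrative pendulum height and push acceleration rather than T-FloW robot parameters.

        # ZMP of a single linear inverted pendulum: for a point mass at constant
        # height z_c with horizontal position x, x_zmp = x - (z_c / g) * x_ddot.
        G = 9.81      # gravity [m/s^2]
        Z_C = 0.30    # constant CoM height [m] (illustrative value)

        def zmp(x, x_ddot):
            return x - (Z_C / G) * x_ddot

        # A push that accelerates the CoM forward shifts the ZMP backward; a
        # recovery controller tries to keep the ZMP inside the support polygon.
        print(f"ZMP during push: {zmp(0.00, 2.0):.3f} m")
        print(f"ZMP at rest:     {zmp(0.00, 0.0):.3f} m")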

  19. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  20. Dynamics and control of quadcopter using linear model predictive control approach

    Science.gov (United States)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular ones to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
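
    In the unconstrained linear case, each receding-horizon step reduces to a least-squares problem over the stacked predictions. The sketch below runs such a condensed MPC on a discrete double integrator standing in for one translational axis of a quadcopter; the horizons, weights, and model are illustrative assumptions, and the actuator constraints that motivate MPC in the paper are omitted for brevity.

        import numpy as np

        # Discrete double integrator (position, velocity; acceleration input).
        dt, N = 0.1, 20
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt**2], [dt]])
        Q, R = np.diag([10.0, 1.0]), 0.1 * np.eye(1)

        # Stack predictions over the horizon: X = F x0 + G U.
        F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
        G = np.zeros((2 * N, N))
        for i in range(N):
            for j in range(i + 1):
                G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
        Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)

        def mpc_step(x0, x_ref):
            """First move of the optimal sequence (receding horizon)."""
            Xref = np.tile(x_ref, N)
            H = G.T @ Qbar @ G + Rbar
            f = G.T @ Qbar @ (F @ x0 - Xref)
            return np.linalg.solve(H, -f)[0]

        x = np.array([0.0, 0.0])
        for _ in range(50):                       # track a 1 m position reference
            u = mpc_step(x, np.array([1.0, 0.0]))
            x = A @ x + (B * u).ravel()
        print("state after 5 s:", x.round(3))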

  1. Model-centric approaches for the development of health information systems.

    Science.gov (United States)

    Tuomainen, Mika; Mykkänen, Juha; Luostarinen, Heli; Pöyhölä, Assi; Paakkanen, Esa

    2007-01-01

    Modeling is used increasingly in healthcare to increase shared knowledge, to improve the processes, and to document the requirements of the solutions related to health information systems (HIS). There are numerous modeling approaches which aim to support these aims, but a careful assessment of their strengths, weaknesses and deficiencies is needed. In this paper, we compare three model-centric approaches in the context of HIS development: the Model-Driven Architecture, Business Process Modeling with BPMN and BPEL and the HL7 Development Framework. The comparison reveals that all these approaches are viable candidates for the development of HIS. However, they have distinct strengths and abstraction levels, they require local and project-specific adaptation and offer varying levels of automation. In addition, illustration of the solutions to the end users must be improved.

  2. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Full Text Available Methods of multivariate statistics, stochastic processes, and simulation methods are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.

  3. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  4. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach to handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  5. Parametric Approach to Assessing Performance of High-Lift Device Active Flow Control Architectures

    Directory of Open Access Journals (Sweden)

    Yu Cai

    2017-02-01

    Full Text Available Active Flow Control is at present an area of considerable research, with multiple potential aircraft applications. While the majority of research has focused on the performance of the actuators themselves, a system-level perspective is necessary to assess the viability of proposed solutions. This paper demonstrates such an approach, in which major system components are sized based on system flow and redundancy considerations, with the impacts linked directly to the mission performance of the aircraft. Considering the case of a large twin-aisle aircraft, four distinct active flow control architectures that facilitate the simplification of the high-lift mechanism are investigated using the demonstrated approach. The analysis indicates a very strong influence of system total mass flow requirement on architecture performance, both for a typical mission and also over the entire payload-range envelope of the aircraft.

  6. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to help prevent such crises from occurring. It is in this same vein that our study takes its inspiration. In particular, we have developed a warning model of banking crises based on a Bayesian approach. The results of this approach have allowed us to identify the involvement of the decline in bank profitability, the deterioration of the competitiveness of traditional intermediation, banking concentration, and higher real interest rates in triggering banking crises.

  7. Virtual reality platform for the study of accessibility and lamp extraction on an automotive virtual prototype

    OpenAIRE

    Chamaret, Damien

    2010-01-01

    In recent years, most car manufacturers have innovated by using virtual reality (VR) techniques. This approach has great potential in terms of saving time and reducing costs. It also allows assessing new approaches related to the design process itself. However, a number of technological and methodological locks remain. They relate in particular to (i) the simplification and physicalization of digital models from CAD software, and (ii) the development of visual-haptic configurations suited to different task...

  8. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools, are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves to individual behavior of enterprises, emergence of interaction patterns between firms and management environments. Agent-based models are at the leading approach of this attempt.

  9. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems

    National Research Council Canada - National Science Library

    Qureshi, Zahid H

    2008-01-01

    This report provides a review of key traditional accident modelling approaches and their limitations, and describes new system-theoretic approaches to the modelling and analysis of accidents in safety-critical systems...

  10. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
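
    Step (1) can be made concrete in a few lines: Morris-style elementary effects rank parameters by the mean absolute one-at-a-time response (mu*) and flag interactions via its spread. The sketch below uses a radial one-at-a-time variant on a hypothetical four-parameter response, not the Xin'anjiang model itself.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(p):
            # Hypothetical response: two influential parameters, one interaction.
            return 3.0 * p[0] + 0.1 * p[1] + 2.0 * p[2] * p[0] + 0.01 * p[3]

        n_params, n_traj, delta = 4, 50, 0.25
        effects = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            x = rng.uniform(0.0, 1.0 - delta, n_params)   # random base point
            base = model(x)
            for i in rng.permutation(n_params):           # one-at-a-time steps
                x_step = x.copy(); x_step[i] += delta
                effects[i].append((model(x_step) - base) / delta)

        # mu* ranks influence; a large sigma signals interactions/nonlinearity.
        for i, ee in enumerate(effects):
            ee = np.abs(ee)
            print(f"param {i}: mu* = {np.mean(ee):.3f} (sigma = {np.std(ee):.3f})")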

  11. Model-based analysis of digital radio frequency control systems for a heavy-ion synchrotron

    International Nuclear Information System (INIS)

    Spies, Christopher

    2013-12-01

    In this thesis, we investigate the behavior of different radio frequency control systems in a heavy-ion synchrotron, which act on the electrical fields used to accelerate charged particles, along with the longitudinal dynamics of the particles in the beam. Due to the large physical dimensions of the system, the required precision can only be achieved by a distributed control system. Since the plant is highly nonlinear and the overall system is very complex, a purely analytical treatment is not possible without introducing unacceptable simplifications. Instead, we use numerical simulation to investigate the system behavior. This thesis arises from a cooperation between the Institute of Microelectronic Systems at Technische Universitaet Darmstadt and the GSI Helmholtz Center for Heavy-Ion Research. A new heavy-ion synchrotron, the SIS100, is currently being built at GSI; its completion is scheduled for 2016. The starting point for the present thesis was the question whether a control concept previously devised at GSI is feasible - not only in the ideal case, but in the presence of parameter deviations, noise, and other disturbances - and how it can be optimized. In this thesis, we present a system model of a heavy-ion synchrotron. This model comprises the beam dynamics, the relevant components of the accelerator, and the relevant controllers as well as the communication between those controllers. We discuss the simulation techniques as well as several simplifications we applied in order to be able to simulate the model in an acceptable amount of time and show that these simplifications are justified. Using the model, we conducted several case studies in order to demonstrate the practical feasibility of the control concept, analyze the system's sensitivity towards disturbances and explore opportunities for future extensions. We derive specific suggestions for improvements from our results. Finally, we demonstrate that the model represents the physical reality

  12. Mathematical modeling of a biogenous filter cake and identification of oilseed material parameters

    Directory of Open Access Journals (Sweden)

    Očenášek J.

    2009-12-01

    Full Text Available Mathematical modeling of the filtration and extrusion process inside a linear compression chamber has gained a lot of attention over the past several decades. The subject was originally related to the mechanical and hydraulic properties of soils (in particular the work of Terzaghi), and the approach was later adopted for the modeling of various technological processes in the chemical industry (the work of Shirato). The resulting mathematical models of the continuum mechanics of porous materials with interstitial fluid were then also applied to the problem of oilseed expression. In this case, various simplifications and partial linearizations are introduced into the models for the sake of analytical or numerical solvability; otherwise it is not possible to generalize the model formulation to the fully 3D problem of oil expression in a geometry as complex as that of a screw press extruder. We propose a modified model of the oilseed expression process in a linear compression chamber. The model accounts for the rheological properties of the deformable solid matrix of the compressed seed, and the permeability of the porous solid is described by Darcy's law. A methodology for the experimental work necessary for material parameter identification is presented, together with numerical simulation examples.

  13. A new approach for the prediction of thermal efficiency in solar receivers

    International Nuclear Information System (INIS)

    Barbero, Rubén; Rovira, Antonio; Montes, María José; Martínez Val, José María

    2016-01-01

    Highlights:
    • A new model for the calculation of the thermal efficiency of solar collectors is developed.
    • It is derived from the complete differential equation and applies to any technology.
    • It accurately captures the results of numerical models while avoiding iteration.
    • Two new critical parameters are defined to be considered in design.
    • Some relevant design aspects arise from its application to parabolic trough collectors (PTC).
    Abstract: Optimization of solar concentration receiver designs requires models that characterize the thermal balance at the receiver wall. This problem depends on external heat transfer coefficients that are a function of the third power of the absorber wall temperature. This nonlinearity makes analytical solutions of the balance differential equations difficult to obtain, so existing approximations treat these heat transfer coefficients as constant or assume a linear dependence. These hypotheses are an important limitation on their applicability. This paper describes a new approach that yields an analytical expression obtained from the heat balance differential equation. Two simplifications based on this model lead to much simpler equations that adequately characterize collector performance for the majority of solar technologies. These new equations allow the explicit calculation of the efficiency as a function of some characteristic parameters of the receiver. This explicit calculation is advantageous in the receiver optimization process because iterative calculations are avoided. The proposed models were validated against the experimental measurements reported by Sandia National Laboratories (SNL) for the LS-2 trough collector design.
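
    The nonlinearity in question is easy to exhibit: the wall balance contains a radiative term in the fourth power of the wall temperature, so solving for the wall temperature normally requires iteration, which is exactly what the analytical model avoids. A minimal Newton-iteration sketch of such a balance follows, with placeholder coefficients rather than the paper's receiver data.

        # Absorber-wall energy balance: absorbed flux = convective loss +
        # radiative loss + useful gain to the fluid. Values are illustrative.
        SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W/(m^2 K^4)]
        q_abs = 20000.0    # absorbed solar flux [W/m^2]
        h_ext = 10.0       # external convection coefficient [W/(m^2 K)]
        eps = 0.9          # absorber emissivity
        h_int = 800.0      # fluid-side coefficient [W/(m^2 K)]
        T_amb, T_fluid = 300.0, 500.0

        def residual(Tw):
            losses = h_ext * (Tw - T_amb) + eps * SIGMA * (Tw**4 - T_amb**4)
            gain = h_int * (Tw - T_fluid)
            return q_abs - losses - gain

        def d_residual(Tw):
            return -(h_ext + 4.0 * eps * SIGMA * Tw**3 + h_int)

        Tw = 600.0                       # initial guess [K]
        for _ in range(20):              # Newton iteration on the T^4 balance
            Tw -= residual(Tw) / d_residual(Tw)
        print(f"wall temperature: {Tw:.1f} K")
        print(f"thermal efficiency: {h_int * (Tw - T_fluid) / q_abs:.3f}")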

  14. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
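
    The core linear-theory idea can be sketched directly: with a Jacobian J of observation sensitivities and a prediction-sensitivity vector y, the component of y orthogonal to the row space of J is unconstrained by calibration. The following sketch uses random matrices as stand-ins for a real model's sensitivities.

        import numpy as np

        # Few observations, many parameters: the typical underdetermined setting.
        rng = np.random.default_rng(42)
        n_obs, n_par = 20, 100
        J = rng.standard_normal((n_obs, n_par))   # observation sensitivities
        y = rng.standard_normal(n_par)            # prediction sensitivities

        # Orthonormal basis of the calibration solution space (row space of J).
        _, s, Vt = np.linalg.svd(J, full_matrices=False)
        V = Vt[s > 1e-10 * s[0]].T                # retained right singular vectors

        y_seen = V @ (V.T @ y)                    # informed by calibration
        y_null = y - y_seen                       # invisible to calibration
        print(f"fraction of prediction sensitivity calibration cannot see: "
              f"{np.linalg.norm(y_null)**2 / np.linalg.norm(y)**2:.2f}")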

  15. A revised multi-Fickian moisture transport model to describe non-Fickian effects in wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund; Damkilde, Lars; Svensson, Staffan

    2007-01-01

    This paper presents a study and a refinement of the sorption rate model in a so-called multi-Fickian or multi-phase model. This type of model describes the complex moisture transport system in wood, which consists of separate water vapor and bound-water diffusion interacting through sorption...... sorption allow a simplification of the system to be modeled by a single Fickian diffusion equation. To determine the response of the system, the sorption rate model is essential. Here the function modeling the moisture-dependent adsorption rate is investigated based on existing experiments on thin wood...

  16. Numeric, Agent-based or System dynamics model? Which modeling approach is the best for vast population simulation?

    Science.gov (United States)

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-02-01

    Alzheimer's disease is one of the most common mental illnesses. It is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned. Therefore, it is important to quantify the potential economic impact. Agent-based, system dynamics and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from different modeling tools are compared. The results of these approaches are compared with EU population growth predictions from the statistical office of the EU by Eurostat. The methodology of a creation of the models is described and all three modeling approaches are compared. The suitability of each modeling approach for the population modeling is discussed. In this case study, all three approaches gave us the results corresponding with the EU population prediction. Moreover, we were able to predict the number of patients with AD and, based on the modeling method, we were also able to monitor different characteristics of the population. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  17. Simple Heuristic Approach to Introduction of the Black-Scholes Model

    Science.gov (United States)

    Yalamova, Rossitsa

    2010-01-01

    A heuristic approach to explaining of the Black-Scholes option pricing model in undergraduate classes is described. The approach draws upon the method of protocol analysis to encourage students to "think aloud" so that their mental models can be surfaced. It also relies upon extensive visualizations to communicate relationships that are…
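
    For reference, the closed-form result the heuristic builds toward is short enough to state in code (the standard Black-Scholes call formula; the inputs in the example are illustrative).

        from math import erf, exp, log, sqrt

        def norm_cdf(x):
            """Standard normal CDF via the error function."""
            return 0.5 * (1.0 + erf(x / sqrt(2.0)))

        def bs_call(S, K, T, r, sigma):
            """European call: S spot, K strike, T years, r rate, sigma volatility."""
            d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

        # Example: at-the-money one-year call (illustrative inputs).
        print(f"call price: {bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2):.2f}")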

  18. Fast and Safe Concrete Code Execution for Reinforcing Static Analysis and Verification

    Directory of Open Access Journals (Sweden)

    M. Belyaev

    2015-01-01

    Full Text Available The problem of improving the precision of static analysis and verification techniques for C is hard due to the simplifying assumptions these techniques make about the code model. We present a novel approach to improving precision by executing the code model in a controlled environment that captures program errors and contract violations in a memory- and time-efficient way. We implemented this approach as an executor module, Tassadar, as part of the bounded model checker Borealis. We tested Tassadar on two test sets, showing that its impact on the performance of Borealis is minimal. The article is published in the authors' wording.

  19. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem and is currently studied via three approaches: the cryptographic approach, data publishing, and model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect the privacy of learned knowledge models and may have performance and scalability issues. Data publishing, although popular, may suffer from too much utility loss for certain types of data mining applications. The m...

  20. Simulation of the space debris environment in LEO using a simplified approach

    Science.gov (United States)

    Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico

    2017-01-01

    Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach. So in many different scenarios different objects are fragmented and contribute to a different version of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
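
    The source/sink structure with Euler integration can be illustrated in a few lines: fragments per altitude shell gain from launches and fragmentations and drain downward through atmospheric decay. The shell count, rates, and source vector below are illustrative stand-ins, and the collision coupling between shells is omitted for brevity.

        import numpy as np

        # Shell-based source/sink sketch: fragment counts per altitude shell,
        # ordered low -> high. All rates are hypothetical, not IRAS parameters.
        n_shells = 5
        N = np.zeros(n_shells)                              # fragments per shell
        source = np.array([0.0, 5.0, 20.0, 10.0, 2.0])      # added per year
        decay_rate = np.array([2.0, 0.5, 0.1, 0.02, 0.005]) # 1/yr, faster when lower

        dt, years = 0.05, 200.0
        for _ in range(int(years / dt)):                    # Euler integration
            flow_down = decay_rate * N                      # objects leaving each shell
            dN = source - flow_down
            dN[:-1] += flow_down[1:]                        # decayed objects enter shell below
            N += dt * dN

        print("quasi-equilibrium population per shell:", N.round(1))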

  1. A comparative study of 1D and 3D hemodynamics in patient-specific hepatic portal vein networks

    Directory of Open Access Journals (Sweden)

    Jonášová A.

    2014-12-01

    Full Text Available The development of software for use in clinical practice is often associated with many requirements and restrictions set not only by the medical doctors, but also by the hospital's budget. To meet the requirement of reliable software, which is able to provide results within a short time period and with minimal computational demand, a certain measure of modelling simplification is usually inevitable. In the case of blood flow simulations carried out in large vascular networks such as the one created by the hepatic portal vein, simplifications are made by necessity. The most often employed simplification is dimensional reduction, in which the 3D model of a large vascular network is substituted with its 1D counterpart. In this context, a question naturally arises: how does this reduction affect the simulation accuracy and its outcome? In this paper, we try to answer this question by performing a quantitative comparison of 3D and 1D flow models in two patient-specific hepatic portal vein networks. The numerical simulations are carried out under average flow conditions and with the application of the three-element Windkessel model, which is able to approximate the downstream flow resistance of real hepatic tissue. The obtained results show that, although the 1D model can never truly substitute for the 3D model, its easy implementation, time-saving model preparation and almost negligible demands on computer technology dominate as advantages over the obvious but moderate modelling errors arising from the dimensional reduction.
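
    The three-element Windkessel mentioned above is itself a one-line ODE: a compliance C charged by the inflow and drained through a peripheral resistance R, in series with a proximal impedance Z, so that dP_c/dt = (Q - P_c/R)/C and P = P_c + Z Q. A generic sketch follows; the parameter values and inflow waveform are not patient-specific.

        import numpy as np

        Z, R, C = 0.05, 1.0, 1.5       # mmHg.s/ml, mmHg.s/ml, ml/mmHg
        T, dt = 1.0, 1e-3              # cardiac period [s], time step [s]

        def inflow(t):
            """Pulsatile inflow: systolic half-sine, zero in diastole [ml/s]."""
            phase = t % T
            return 40.0 * np.sin(np.pi * phase / 0.4) if phase < 0.4 else 0.0

        Pc, history = 10.0, []
        for step in range(int(10 * T / dt)):    # run 10 cycles to a periodic state
            t = step * dt
            Q = inflow(t)
            Pc += dt * (Q - Pc / R) / C         # compliance charging/discharging
            history.append(Pc + Z * Q)          # outlet pressure

        last = history[-int(T / dt):]
        print(f"pressure range over last cycle: {min(last):.1f} - {max(last):.1f} mmHg")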

  2. A new approach to Naturalness in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or chi^2/ndf, where ndf is the number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.

  3. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...

  4. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, using an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbour approach for building the personalized and global models, and applied regression analysis to fit them. During the test phase, we resort to voting on different features. We checked the results of our technique on the FG-NET database and achieved a 65 percent rank-1 identification rate.

  5. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation.

  6. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  7. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  8. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented...

  9. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art review of multidimensional modeling design.

  10. Sistematização de Normas Regulatórias: uma abordagem baseada no neo-institucionalismo / Systematizing Regulations: An Approach Focused on Neo-Institutionalism

    Directory of Open Access Journals (Sweden)

    João Alberto de Oliveira Lima

    2016-05-01

    Full Text Available Purpose – This paper proposes an approach based on neo-institutionalism to assist the systematization of regulations. Methodology/approach/design – Interdisciplinary bibliographic research in Philosophy of Language, Information Science and Law. Findings – Neo-institutionalism, mainly in the form proposed by Dick Ruiter, offers a conceptual framework that can assist in systematizing regulations. Practical implications – This article proposes an approach to the systematization of regulations that, despite its importance to the simplification of the legal system and to legal certainty, has not been executed as required by law. Originality/value – The originality of the proposed approach lies in the combination of concepts and theories from Philosophy of Language, Information Science and Law as a way to deal with the problem of regulations systematization.

  11. Modelling efficient innovative work: integration of economic and social psychological approaches

    Directory of Open Access Journals (Sweden)

    Babanova Yulia

    2017-01-01

    Full Text Available The article addresses the integration of economic and social-psychological approaches to enhancing the efficiency of innovation management. The content, features and specifics of the modelling methods within each approach are unfolded, and options for integration are considered. The economic approach lies in the generation of an integrated matrix concept of managing the innovative development of an enterprise, in line with the stages of innovative work, and in the use of an integrated vector method for evaluating the enterprise's level of innovative development. The social-psychological approach lies in the development of a system of psychodiagnostic indexes of activity resources within the scope of a psychological innovative audit of enterprise management, and in the development of modelling methods for the balance of activity trends. Modelling the activity resources is based on a system of equations accounting for the interaction type of the psychodiagnostic indexes. Integration of the two approaches covers a methodological level, a level of empirical studies, and modelling methods. Options are suggested for integrating the economic and psychological approaches to analyse the available material and non-material resources of enterprises' innovative work and to forecast an optimal development option based on the implemented modelling methods.

  12. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  13. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  14. Modeling and numerical study of transfers in fissured environments; Modelisation et etude numerique des transferts en milieux fissures

    Energy Technology Data Exchange (ETDEWEB)

    Granet, S.

    2000-01-28

    Oil recovery from fractured reservoirs plays a very important role in the petroleum industry. Some of the world's most productive oil fields are located in naturally fractured reservoirs. Modelling flow in such a fracture network is a very complex problem. This is conventionally done using a specific idealized model based on the Warren and Root representation and on a dual-porosity, dual-permeability approach. A simplified formulation of matrix-fracture fluid transfers uses a pseudo-steady-state transfer equation involving a constant exchange coefficient; the choice of this coefficient is one of the main difficulties of the approach. To get a better understanding of the simplifications involved in the dual-porosity approach, a reference model must be available. To obtain such a fine description, we have developed a new methodology. This technique, called the 'fissure element methodology', is based on a specific gridding of the fractured medium: the fissure network is gridded with linear elements coupled with an unstructured triangular grid of the matrix. An appropriate finite-volume scheme has been developed to provide a good description of the flow, and its numerical development is described in detail. A simulator has been developed using this method and several simulations have been carried out. Comparisons have been made with different dual-porosity, dual-permeability models, followed by a discussion of the choice of the exchange coefficient used in the dual-porosity model. This new tool has permitted a better understanding of the production mechanisms of a complex fractured reservoir. (author)
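
    To make the pseudo-steady-state transfer term concrete: in the Warren and Root idealization, the matrix-fracture exchange rate is proportional to a constant shape factor and the pressure difference between the two media. A minimal two-cell depletion sketch follows; all rock and fluid parameters are illustrative assumptions, not values from the thesis.

    # Two-cell depletion: a fracture system drained at constant rate, fed by the matrix.
    sigma = 0.08                # Warren-Root shape factor, 1/m^2 (assumed)
    k_m, mu = 1e-15, 1e-3       # matrix permeability m^2, viscosity Pa*s (assumed)
    phi_m, phi_f = 0.20, 0.01   # matrix / fracture porosities (assumed)
    c_t = 1e-9                  # total compressibility, 1/Pa (assumed)
    q_prod = 2e-9               # production rate per unit bulk volume, 1/s (assumed)

    p_m, p_f = 20e6, 20e6       # initial pressures, Pa
    dt, steps = 10.0, 50000     # 10 s steps, ~5.8 days total
    for _ in range(steps):
        q_mf = sigma * k_m / mu * (p_m - p_f)   # pseudo-steady-state exchange, 1/s
        p_m += dt * (-q_mf) / (phi_m * c_t)
        p_f += dt * (q_mf - q_prod) / (phi_f * c_t)

    print(round(p_m / 1e6, 3), round(p_f / 1e6, 3))   # MPa: matrix stays above fracture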

  15. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials, as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons

  16. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  17. On the Numerical Modeling of Confined Masonry Structures for In-plane Earthquake Loads

    Directory of Open Access Journals (Sweden)

    Mircea Barnaure

    2015-07-01

    Full Text Available The seismic design of confined masonry structures involves the use of numerical models. As there are many parameters that influence the structural behavior, these models can be very complex and unsuitable for the current design purposes of practicing engineers. Simplified models could lead to reasonably accurate results, but caution should be given to the simplification assumptions. An analysis of various parameters considered in the numerical modeling of confined masonry structural walls is made. Conclusions regarding the influence of simplified procedures on the results are drawn.

  18. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
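
    As a sketch of the kind of rate-driven CSTR balance such a model is built on (the paper's Aquasim implementation and its full reaction stoichiometry are not reproduced here), the following integrates Monod growth at a fixed dilution rate D = 1/HRT; all kinetic and yield values are assumptions.

    mu_max, Ks = 0.3, 2.0          # Monod parameters: 1/h, g/L (assumed)
    Y_xs, Y_h2 = 0.1, 0.15         # biomass / hydrogen yields on substrate, g/g (assumed)
    S_in, HRT = 30.0, 12.0         # feed substrate g/L, hydraulic retention time h (assumed)
    D = 1.0 / HRT                  # dilution rate, 1/h

    X, S = 0.1, S_in               # initial biomass and substrate, g/L
    dt = 0.01                      # Euler step, h
    for _ in range(int(500 / dt)): # integrate to steady state
        mu = mu_max * S / (Ks + S)                   # Monod specific growth rate
        dX = (mu - D) * X                            # biomass balance
        dS = D * (S_in - S) - mu * X / Y_xs          # substrate balance
        X += dt * dX
        S += dt * dS

    q_h2 = Y_h2 * mu * X / Y_xs    # hydrogen production rate, g/L/h
    print(round(X, 2), round(S, 2), round(q_h2, 3))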

  19. The statistics of multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    We propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. We present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplifications of the leading-particle statistics theory. A more comprehensive exposition will appear before long. (author). 32 refs, 4 figs

  20. Randomness in multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    The authors propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. They present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplification of the leading-particle statistics theory

  1. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of an event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
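
    The conditional intensity described above can be written down directly. Below is a minimal sketch with exponential kernels, which are an assumed form here; the amplitudes, decay rates and event times are illustrative, not fitted values.

    import numpy as np

    mu = 0.2                 # background rate, events/day (assumed)
    alpha, beta = 0.8, 0.5   # self-excitation amplitude and decay, 1/day (assumed)
    gamma, delta = 0.3, 0.2  # external excitation amplitude and decay, 1/day (assumed)

    def intensity(t, quake_times, precursor_times):
        """lambda(t) = background + self-exciting + external excitation."""
        past_q = quake_times[quake_times < t]
        past_p = precursor_times[precursor_times < t]
        self_term = alpha * np.exp(-beta * (t - past_q)).sum()
        ext_term = gamma * np.exp(-delta * (t - past_p)).sum()
        return mu + self_term + ext_term

    quakes = np.array([1.0, 3.5, 4.0])    # past seismic event times, days (assumed)
    precursors = np.array([2.0, 3.8])     # past non-seismic anomaly times, days (assumed)
    print(round(intensity(5.0, quakes, precursors), 3))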

  2. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  3. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen

    2013-01-01

    Full Text Available The rapid development of the digital content industry calls for online model libraries. For the efficiency, user experience, and reliability merits of such a library, this paper designs a Web 3D model library system based on a cloud computing platform. To handle complex models, which cause difficulties in real-time 3D interaction, we adopt model simplification and size-adaptive adjustment methods to make interaction with the system more efficient. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible by online users with a good interactive experience. The feasibility of the solution has been tested by experiments.

  4. Biotic interactions in the face of climate change: a comparison of three modelling approaches.

    Directory of Open Access Journals (Sweden)

    Anja Jaeschke

    Full Text Available Climate change is expected to alter biotic interactions, and may lead to temporal and spatial mismatches of interacting species. Although the importance of interactions for climate change risk assessments is increasingly acknowledged in observational and experimental studies, biotic interactions are still rarely incorporated in species distribution models. We assessed the potential impacts of climate change on the obligate interaction between Aeshna viridis and its egg-laying plant Stratiotes aloides in Europe, based on an ensemble modelling technique. We compared three different approaches for incorporating biotic interactions in distribution models: (1) We separately modelled each species based on climatic information, and intersected the future range overlap ('overlap approach'). (2) We modelled the potential future distribution of A. viridis with the projected occurrence probability of S. aloides as a further predictor in addition to climate ('explanatory variable approach'). (3) We calibrated the model of A. viridis in the current range of S. aloides and multiplied the future occurrence probabilities of both species ('reference area approach'). Subsequently, all approaches were compared to a single-species model of A. viridis without interactions. All approaches projected a range expansion for A. viridis. Model performance on test data and the amount of range gain differed depending on the biotic interaction approach. All interaction approaches yielded lower range gains (up to 667% lower) than the model without interaction. Regarding the contribution of algorithm and approach to the overall uncertainty, the main part of the explained variation stems from the modelling algorithm, and only a small part is attributed to the modelling approach. The comparison of the no-interaction model with the three interaction approaches emphasizes the importance of including obligate biotic interactions in projective species distribution modelling. We recommend the use of
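
    The three ways of combining the two species' predictions differ only in how the SDM outputs are post-processed. A minimal sketch on a toy probability grid follows; random numbers stand in for SDM output, and the threshold value is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    p_dragonfly = rng.random((4, 4))   # A. viridis occurrence probability (toy values)
    p_plant = rng.random((4, 4))       # S. aloides occurrence probability (toy values)
    thresh = 0.5                       # binarisation threshold (assumed)

    # (1) Overlap approach: model each species alone, intersect the binary ranges.
    overlap = (p_dragonfly > thresh) & (p_plant > thresh)

    # (2) Explanatory-variable approach: p_plant would enter the A. viridis model
    #     as an extra predictor next to climate; the SDM call itself is omitted here.

    # (3) Reference-area approach: calibrate A. viridis inside the plant's range,
    #     then multiply the two occurrence probabilities.
    p_reference = p_dragonfly * p_plant

    print(int(overlap.sum()), "cells in the intersected range")
    print(p_reference.round(2))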

  5. Top-down approach to unified supergravity models

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-03-01

    We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)

  6. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
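
    The update rule of the imprecise Dirichlet model is simple enough to state directly: prior lower and upper expectations are pulled toward the observed frequency at a speed set by the learning parameter. A minimal sketch with assumed counts and prior bounds:

    # Posterior lower/upper expectation of one alpha-factor under the imprecise
    # Dirichlet model: the bounds tighten as the total event count N grows, and
    # the learning parameter s sets how quickly the prior is overridden.

    def posterior_bounds(n_k, N, s, lower_prior, upper_prior):
        """n_k: events with k components failed; N: total events; s: learning parameter."""
        lower = (n_k + s * lower_prior) / (N + s)
        upper = (n_k + s * upper_prior) / (N + s)
        return lower, upper

    # Example (assumed data): 3 of 36 observed common-cause events failed two
    # components; s in the range 1-10 was found reasonable in the study.
    for s in (1, 10):
        print(s, posterior_bounds(n_k=3, N=36, s=s, lower_prior=0.02, upper_prior=0.20))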

  7. Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.

    Science.gov (United States)

    Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z

    2018-01-01

    Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.

  8. Models of care and delivery

    DEFF Research Database (Denmark)

    Lundgren, Jens

    2014-01-01

    with community clinics for injecting drug-dependent persons is also being implemented. Shared care models require oversight to ensure that primary responsibility is defined for the persons overall health situation, for screening of co-morbidities, defining indication to treat comorbidities, prescription of non......Marked regional differences in HIV-related clinical outcomes exist across Europe. Models of outpatient HIV care, including HIV testing, linkage and retention for positive persons, also differ across the continent, including examples of sub-optimal care. Even in settings with reasonably good...... outcomes, existing models are scrutinized for simplification and/or reduced cost. Outpatient HIV care models across Europe may be centralized to specialized clinics only, primarily handled by general practitioners (GP), or a mixture of the two, depending on the setting. Key factors explaining...

  9. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  10. Modelling the Heat Consumption in District Heating Systems using a Grey-box approach

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik

    2006-01-01

    identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey......-box approach. (c) 2005 Elsevier B.V. All rights reserved....

  11. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy and interpretability are still its limitations. This paper aims at developing a new surrogate-assisted GP to help improve the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potential and proved to be a critical part of the automated design system.

  12. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3x3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid this shortcoming of grid-cell resolution is the so-called mosaic approach, which is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the near-surface energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system is able to calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties through the dynamics of the footprint size. Our preliminary simulation results show that a mosaic approach can improve the modelling and analysis of energy fluxes when the land surface is heterogeneous. In this case, our method is a promising approach for extending weather and climate models on the regional and global scales.
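
    The core of the mosaic approach is a footprint-weighted mixture of per-tile fluxes rather than a single dominant-type flux per grid cell. A minimal sketch with assumed flux values and footprint fractions, not Scheyern measurements:

    # Footprint-weighted mixture of the per-tile fluxes at the tower location.
    tile_fluxes = {"winter_wheat": 120.0, "potato": 95.0}   # sensible heat, W/m^2 (assumed)
    footprint = {"winter_wheat": 0.6, "potato": 0.4}        # footprint fractions (assumed)

    mixed_flux = sum(tile_fluxes[k] * footprint[k] for k in tile_fluxes)
    print(mixed_flux, "W/m^2 expected at the tower")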

  13. An integrated approach for integrated intelligent instrumentation and control system (I3CS)

    International Nuclear Information System (INIS)

    Jung, C.H.; Kim, J.T.; Kwon, K.C.

    1997-01-01

    To guarantee public safety, nuclear power plants should be designed to reduce the operator interventions that result in operating errors, to identify the process states in transients, and to aid decision-making and guide operator actions. For this purpose, the MMIS (Man-Machine Interface System) in NPPs should follow an integrated top-down approach tightly focused on function-based task analysis, including advanced digital technology, operator support functions, and so on. The advanced I and C research team at KAERI has embarked on developing an Integrated Intelligent Instrumentation and Control System (I3CS) for Korea's next-generation nuclear power plants. I3CS bases the integrated top-down approach on function-based task analysis, modern digital technology, standardization and simplification, availability and reliability, and protection of investment. (author). 4 refs, 6 figs

  14. Designing water demand management schemes using a socio-technical modelling approach.

    Science.gov (United States)

    Baki, Sotiria; Rozos, Evangelos; Makropoulos, Christos

    2018-05-01

    Although it is now widely acknowledged that urban water systems (UWSs) are complex socio-technical systems and that a shift towards a socio-technical approach is critical in achieving sustainable urban water management, still, more often than not, UWSs are designed using a segmented modelling approach. As such, either the analysis focuses on the description of the purely technical sub-system, without explicitly taking into account the system's dynamic socio-economic processes, or a more interdisciplinary approach is followed, but delivered through relatively coarse models, which often fail to provide a thorough representation of the urban water cycle and hence cannot deliver accurate estimations of the hydrosystem's responses. In this work we propose an integrated modelling approach for the study of the complete socio-technical UWS that also takes into account socio-economic and climatic variability. We have developed an integrated model, which is used to investigate the diffusion of household water conservation technologies and its effects on the UWS, under different socio-economic and climatic scenarios. The integrated model is formed by coupling a System Dynamics model that simulates the water technology adoption process, and the Urban Water Optioneering Tool (UWOT) for the detailed simulation of the urban water cycle. The model and approach are tested and demonstrated in an urban redevelopment area in Athens, Greece under different socio-economic scenarios and policy interventions. It is suggested that the proposed approach can establish quantifiable links between socio-economic change and UWS responses and therefore assist decision makers in designing more effective and resilient long-term strategies for water conservation. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria

    Science.gov (United States)

    Kowalczuk, Zdzisław; Białaszewski, Tomasz

    2018-01-01

    A novel idea to perform evolutionary computations (ECs) for solving highly dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender is used to distinguish between various groups of objectives and identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn out of the fitness of individuals and applied during parental crossover in the processes of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and virtual gender approach (VGA), which are not just evolutionary techniques, but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal representatives of the EMOO algorithms of the state of the art in solving benchmark problems in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.

  16. Orthogonality-condition model for bound states with a separable expansion of the potential

    International Nuclear Information System (INIS)

    Pal, K.F.

    1984-01-01

    A very efficient solution of the equation of Saito's orthogonality-condition model (OCM) is reported for bound states by means of a separable expansion of the potential (PSE method). Some simplifications of the published formulae of the PSE method are derived, which facilitate its application to the OCM and may be useful in solving the Schroedinger equation as well. (author)

  17. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  18. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  19. Mathematical Model of the Jet Engine Fuel System

    Directory of Open Access Journals (Sweden)

    Klimko Marek

    2015-01-01

    Full Text Available The paper discusses the design of a simplified mathematical model of the jet (turbo-compressor) engine fuel system. The solution will be based on the regulation law, where the control parameter is a fuel mass flow rate and the regulated parameter is the rotational speed. A differential equation of the jet engine and also differential equations of other fuel system components (fuel pump, throttle valve, pressure regulator) will be described, with respect to advanced predetermined simplifications.
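
    A minimal sketch of this structure: one first-order rotor equation with fuel mass flow rate as the control parameter and rotational speed as the regulated parameter, closed by an assumed proportional governor. All coefficients are illustrative placeholders, not the paper's values.

    # One-equation rotor model closed by a proportional speed governor.
    J = 0.05           # rotor moment of inertia, kg*m^2 (assumed)
    k_torque = 2.0     # torque per unit fuel mass flow, N*m/(kg/s) (assumed)
    k_drag = 1e-4      # speed-proportional resistive torque, N*m*s/rad (assumed)
    Kp = 0.02          # governor gain, (kg/s) per (rad/s) of speed error (assumed)

    omega, target = 800.0, 1500.0   # current and demanded speed, rad/s
    dt = 0.001                      # Euler step, s
    for _ in range(20000):
        m_dot = max(0.0, Kp * (target - omega))                # regulation law: fuel flow from speed error
        omega += dt * (k_torque * m_dot - k_drag * omega) / J  # rotor dynamics
    print(round(omega, 1))          # settles just below the demanded speed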

  20. Mathematical Model of the Jet Engine Fuel System

    Science.gov (United States)

    Klimko, Marek

    2015-05-01

    The paper discusses the design of a simplified mathematical model of the jet (turbo-compressor) engine fuel system. The solution will be based on the regulation law, where the control parameter is a fuel mass flow rate and the regulated parameter is the rotational speed. A differential equation of the jet engine and also differential equations of other fuel system components (fuel pump, throttle valve, pressure regulator) will be described, with respect to advanced predetermined simplifications.

  1. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  2. Homogenised constitutive model dedicated to reinforced concrete plates subjected to seismic solicitations

    International Nuclear Information System (INIS)

    Combescure, Christelle

    2013-01-01

    Safety reassessments are periodically performed on the EDF nuclear power plants, and the recent seismic reassessments led to the necessity of taking into account the non-linear behaviour of materials when modeling and simulating industrial structures of these power plants under seismic solicitations. A large proportion of these infrastructures consists of reinforced concrete buildings, including reinforced concrete slabs and walls, and the literature on plate models dedicated to seismic applications for this material appears sparse. The few existing models dedicated to these specific applications present either a lack of energy dissipation in the material behaviour, or no micromechanical approach justifying the parameters needed to properly describe the model. In order to provide a constitutive model which better represents the behaviour of reinforced concrete plates under seismic loadings and whose parameters are easier for the civil engineer to identify, a constitutive model dedicated to reinforced concrete plates under seismic solicitations is proposed: the DHRC (Dissipative Homogenised Reinforced Concrete) model. Justified by a periodic homogenisation approach, this model includes two dissipative phenomena: damage of the concrete matrix and internal sliding at the interface between steel rebars and the surrounding concrete. An original coupling term between damage and sliding, resulting from the homogenisation process, induces a better representation of energy dissipation during material degradation. The model parameters are identified from the geometric characteristics of the plate and a restricted number of material characteristics, allowing a very simple use of the model. Numerical validations of the DHRC model are presented, showing good agreement with experimental behaviour. A one-dimensional simplification of the DHRC model is proposed, allowing the representation of reinforced concrete bars and simplified models of rods and wire mesh

  3. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

    Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging...... contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate...... uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...

  4. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  5. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP......Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field...... trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field...

  6. A fuzzy approach for modelling radionuclide in lake system

    International Nuclear Information System (INIS)

    Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.

    2013-01-01

    Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low-level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under this condition, a fuzzy rule-based approach was adopted to develop a model which could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the ³H concentration at the discharge point, and the output was the ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predicted output of the fuzzy rule-based approach. The root mean square error of the model came out to be 1.95, showing that the fuzzy model imitates the natural ecosystem well. -- Highlights: • An uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • The RMSE of the fuzzy model is 1.95, meaning it imitates the natural ecosystem well
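
    A minimal sketch of a Mamdani-style fuzzy rule base of this kind, with lake outflow and discharge-point concentration as inputs: the membership functions, the four rules and all numbers below are illustrative assumptions, not the rules identified in the paper.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet at a and c, peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def predict(flow, conc_in):
        # Fuzzify the two inputs (assumed universes: flow 0-100, concentration 0-20).
        flow_low, flow_high = tri(flow, -50, 0, 50), tri(flow, 0, 50, 100)
        c_low, c_high = tri(conc_in, -10, 0, 10), tri(conc_in, 0, 10, 20)

        out = np.linspace(0, 20, 201)   # candidate output concentrations
        # Four illustrative rules: high flow dilutes, high input concentration raises the output.
        rules = [
            np.minimum(min(flow_high, c_low), tri(out, -4, 0, 4)),   # -> very low
            np.minimum(min(flow_low, c_low), tri(out, 0, 4, 8)),     # -> low
            np.minimum(min(flow_high, c_high), tri(out, 3, 7, 11)),  # -> medium
            np.minimum(min(flow_low, c_high), tri(out, 8, 14, 20)),  # -> high
        ]
        agg = np.maximum.reduce(rules)                 # aggregate the clipped rule outputs
        return float((agg * out).sum() / agg.sum())    # centroid defuzzification

    print(round(predict(flow=30.0, conc_in=12.0), 2))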

  7. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T

  8. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample of (n=759) permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  9. Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

    KAUST Repository

    Dreano, Denis

    2017-01-01

    concentration and have practical applications for fisheries operation and harmful algae blooms monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches, and data-driven (statistical) approaches. Dynamical models are based

  10. High dimensions - a new approach to fermionic lattice models

    International Nuclear Information System (INIS)

    Vollhardt, D.

    1991-01-01

    The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time, essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state of the art is reviewed. (orig.)

  11. Simulation Experiment on Landing Site Selection Using a Simple Geometric Approach

    Science.gov (United States)

    Zhao, W.; Tong, X.; Xie, H.; Jin, Y.; Liu, S.; Wu, D.; Liu, X.; Guo, L.; Zhou, Q.

    2017-07-01

    Safe landing is an important part of a planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters, and steep slopes, which would not be accurately detected from orbital reconnaissance) can pose a serious risk to a planetary lander or rover and the scientific instruments on board. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. For the full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation for DEM generation, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g. regarding hazards as discrete circles or ellipses), which conforms more closely to the real planetary exploration scenario. For validating the approach, a simulated planetary terrain model was constructed using volcanic ash with rocks in an indoor environment. A commercial laser scanner mounted on a rail was used to scan the terrain surface at different hanging positions. The results demonstrate that fair hazard detection capability and reasonable site selection were obtained compared with the conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on planetary surfaces.
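
    With the simplification that hazards are given as sampled points rather than the paper's general polygons, the maximum-empty-circle step can be sketched with an ordinary Voronoi diagram: its vertices are the candidate centres, and the safest site is the vertex farthest from its nearest hazard. The hazard coordinates below are randomly generated assumptions.

    import numpy as np
    from scipy.spatial import Voronoi, cKDTree

    rng = np.random.default_rng(1)
    hazards = rng.uniform(0, 100, size=(40, 2))   # sampled hazard locations, m (assumed)

    vor = Voronoi(hazards)
    tree = cKDTree(hazards)

    # Voronoi vertices inside the zone are the candidate circle centres; the
    # distance to the nearest hazard is the radius of the empty circle there.
    inside = np.all((vor.vertices >= 0) & (vor.vertices <= 100), axis=1)
    candidates = vor.vertices[inside]
    radii, _ = tree.query(candidates)

    best = candidates[np.argmax(radii)]
    print("safest site:", best.round(1), "clearance:", round(float(radii.max()), 1), "m")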

  12. SIMULATION EXPERIMENT ON LANDING SITE SELECTION USING A SIMPLE GEOMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    W. Zhao

    2017-07-01

    Full Text Available Safe landing is an important part of any planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters, and steep slopes) that would not be accurately detected from orbital reconnaissance can pose a serious risk to a planetary lander or rover and the scientific instruments on board. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. To achieve a full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation for DEM generation, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g., approximating hazards as discrete circles or ellipses), which conforms more closely to real planetary exploration scenarios. To validate the approach, a simulated planetary terrain model was constructed from volcanic ash and rocks in an indoor environment, and a commercial laser scanner mounted on a rail was used to scan the terrain surface at different hanging positions. The results demonstrate fair hazard-detection capability and reasonable site selection compared with a conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on planetary surfaces.

  13. Tritium permeation model for plasma facing components

    Science.gov (United States)

    Longhurst, G. R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes, such as implantation, recombination, diffusion, trapping, and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spreadsheet software such as Lotus 1-2-3. Comparison calculations against the verified and validated TMAP4 transient code show good agreement. Results of calculations for the ITER CDA divertor are also included.
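
    The steady-state balance that such simplified permeation models exploit can be illustrated in a few lines. This is an illustrative sketch, not the report's actual spreadsheet model: it assumes a slab of thickness L with implantation flux phi, an upstream recombination coefficient K_r, a diffusion-limited downstream face held near zero concentration, and hypothetical parameter values.

```python
import math

def steady_state_permeation(phi, K_r, D, L):
    """Rough steady-state tritium permeation estimate for a slab.

    phi : implantation flux (atoms m^-2 s^-1)
    K_r : upstream recombination coefficient (m^4 s^-1)
    D   : diffusivity (m^2 s^-1)
    L   : wall thickness (m)

    At steady state nearly all of the implanted flux is re-emitted from
    the upstream face, so the surface concentration follows from
    phi ~= K_r * c0**2.  With the downstream face held near zero
    concentration, the permeation flux is diffusion limited.
    """
    c0 = math.sqrt(phi / K_r)      # upstream surface concentration
    J_perm = D * c0 / L            # diffusion-limited permeation flux
    return c0, J_perm

# Hypothetical values, not TMAP4 or ITER CDA parameters.
c0, J = steady_state_permeation(phi=1e19, K_r=1e-31, D=1e-10, L=5e-3)
print(f"c0 = {c0:.2e} m^-3, permeation flux = {J:.2e} m^-2 s^-1")
```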

  14. Tritium permeation model for plasma facing components

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes, such as implantation, recombination, diffusion, trapping, and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spreadsheet software such as Lotus 1-2-3. Comparison calculations against the verified and validated TMAP4 transient code show good agreement. Results of calculations for the ITER CDA divertor are also included.

  15. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water/bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  16. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen such objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of
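
    The flavour of a stylised (bounding) biosphere model can be conveyed with a single well-mixed compartment. The sketch below is not SKB's or the authors' model; it is a generic illustration with entirely hypothetical parameters, showing how a constant geosphere release and one effective loss rate yield a bounding steady-state inventory and dose.

```python
import numpy as np

# Stylised single-object biosphere model: a constant geosphere release R
# (Bq/yr) enters one well-mixed compartment with effective loss rate
# lam_eff (radioactive decay + environmental losses, 1/yr).  The
# inventory obeys dA/dt = R - lam_eff * A, so
#   A(t) = (R / lam_eff) * (1 - exp(-lam_eff * t)),
# and the bounding steady state is R / lam_eff.
R = 1.0e6            # release rate to the object (Bq/yr), hypothetical
lam_eff = 0.05       # effective loss rate (1/yr), hypothetical
dose_factor = 1e-10  # dose per unit inventory (Sv/yr per Bq), hypothetical

t = np.linspace(0, 200, 201)                    # years
A = (R / lam_eff) * (1 - np.exp(-lam_eff * t))  # compartment inventory
dose = dose_factor * A
print(f"bounding steady-state dose: {dose_factor * R / lam_eff:.2e} Sv/yr")
```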

  17. Rainbow tensor model with enhanced symmetry and extreme melonic dominance

    Directory of Open Access Journals (Sweden)

    H. Itoyama

    2017-08-01

    Full Text Available We introduce and briefly analyze the rainbow tensor model where all planar diagrams are melonic. This leads to considerable simplification of the large N limit as compared to that of the matrix model: in particular, what are dressed in this limit are propagators only, which leads to an oversimplified closed set of Schwinger–Dyson equations for multi-point correlators. We briefly touch upon the Ward identities, the substitute of the spectral curve and the AMM/EO topological recursion and their possible connections to Connes–Kreimer theory and forest formulas.

  18. Rainbow tensor model with enhanced symmetry and extreme melonic dominance

    Science.gov (United States)

    Itoyama, H.; Mironov, A.; Morozov, A.

    2017-08-01

    We introduce and briefly analyze the rainbow tensor model where all planar diagrams are melonic. This leads to considerable simplification of the large N limit as compared to that of the matrix model: in particular, what are dressed in this limit are propagators only, which leads to an oversimplified closed set of Schwinger-Dyson equations for multi-point correlators. We briefly touch upon the Ward identities, the substitute of the spectral curve and the AMM/EO topological recursion and their possible connections to Connes-Kreimer theory and forest formulas.

  19. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local approach to fracture models in various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. Because the models are developed on the basis of microstructural observations, damage-mechanism analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. Therefore, the local approach appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  20. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    International Nuclear Information System (INIS)

    Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces overcome the surface tension force, and droplets slide over the wall and form a liquid film. This approach makes it possible to take into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared against experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  1. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Anoba, R.C.

    2005-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at nuclear power plants. Since the issuance of Generic Letter 88-20 and the subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, describing the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot-short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants, as well as at a nuclear power plant in Romania. (authors)
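
    A toy illustration of the rule-driven injection idea follows. This is not XInit's actual rule syntax or API, only a sketch: each declarative rule is a predicate on component attributes plus a template for the external-event basic event that gets OR-ed onto the component's failure logic; all identifiers are hypothetical.

```python
# Toy declarative external-event injection (illustrative, not XInit).
components = [
    {"id": "PUMP-A", "fire_zone": "FZ-1", "seismic_class": "II"},
    {"id": "MOV-12", "fire_zone": "FZ-3", "seismic_class": "I"},
]

rules = [
    # (predicate on the component, template for the injected event)
    (lambda c: c["fire_zone"] == "FZ-1", "FIRE-{fire_zone}"),
    (lambda c: c["seismic_class"] == "I", "SEIS-FRAG-{id}"),
]

def inject(component):
    """Return the component's failure logic with external events OR-ed in."""
    terms = [f"FAIL-{component['id']}"]
    for predicate, template in rules:
        if predicate(component):
            terms.append(template.format(**component))
    return " OR ".join(terms)

for c in components:
    print(c["id"], "->", inject(c))
# PUMP-A -> FAIL-PUMP-A OR FIRE-FZ-1
# MOV-12 -> FAIL-MOV-12 OR SEIS-FRAG-MOV-12
```

    Because the rules live outside the fault tree, re-running the injection after a plant change or a boundary-condition change regenerates the external-event logic without manual edits, which is the maintainability benefit the abstract describes.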

  2. Simplification to abacavir/lamivudine + atazanavir maintains viral suppression and improves bone and renal biomarkers in ASSURE, a randomized, open label, non-inferiority trial.

    Directory of Open Access Journals (Sweden)

    David A Wohl

    Full Text Available Simplification of antiretroviral therapy in patients with suppressed viremia may minimize long-term adverse effects. The study's primary objective was to determine whether abacavir/lamivudine + atazanavir (ABC/3TC+ATV) was virologically non-inferior to tenofovir/emtricitabine + atazanavir/ritonavir (TDF/FTC+ATV/r) over 24 weeks in a population of virologically suppressed, HIV-1 infected patients. This open-label, multicenter, non-inferiority study enrolled antiretroviral-experienced, HIV-infected adults currently receiving a regimen of TDF/FTC+ATV/r for ≥ 6 months with no history of virologic failure and whose HIV-1 RNA had been ≤ 75 copies/mL on 2 consecutive measurements, including screening. Patients were randomized 1:2 to continue current treatment or simplify to ABC/3TC+ATV. The primary endpoint was the proportion of patients with HIV-RNA < 50 copies/mL at Week 24 by the Time to Loss of Virologic Response (TLOVR) algorithm. Secondary endpoints included alternative measures of efficacy, adverse events (AEs), and fasting lipids. Exploratory endpoints included inflammatory, coagulation, bone, and renal biomarkers. After 24 weeks, ABC/3TC+ATV (n = 199) was non-inferior to TDF/FTC+ATV/r (n = 97) by both the primary analysis (87% in both groups) and all secondary efficacy analyses. Rates of grade 2-4 AEs were similar between the two groups (40% vs 37%, respectively), but an excess of hyperbilirubinemia made the rate of grade 3-4 laboratory abnormalities higher in the TDF/FTC+ATV/r group (30%) compared with the ABC/3TC+ATV group (13%). Lipid levels were stable except for HDL cholesterol, which increased significantly in the ABC/3TC+ATV group. Bone and renal biomarkers improved significantly between baseline and Week 24 in patients taking ABC/3TC+ATV, and the difference between groups was significant at Week 24. No significant changes occurred in any inflammatory or coagulation biomarker within or between treatment groups. After 24 weeks, simplification to

  3. Innovative mathematical modeling in environmental remediation

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Gour T. [Taiwan Typhoon and Flood Research Institute (Taiwan); National Central Univ. (Taiwan); Univ. of Central Florida (United States); Gwo, Jin Ping [Nuclear Regulatory Commission (NRC), Rockville, MD (United States); Siegel, Malcolm D. [Sandia National Laboratories, Albuquerque, NM (United States); Li, Ming-Hsu [National Central Univ. (Taiwan); Fang, Yilin [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Zhang, Fan [Inst. of Tibetan Plateau Research, Chinese Academy of Sciences (China); Luo, Wensui [Inst. of Tibetan Plateau Research, Chinese Academy of Sciences (China); Yabusaki, Steven B. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)

    2013-05-01

    There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has mainly been used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models
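
    The "single Kd" simplification criticized above collapses all sorption chemistry into one distribution coefficient, which enters transport only through the standard linear-sorption retardation factor. A minimal sketch, with illustrative soil parameters:

```python
def retardation_factor(rho_b, theta, kd):
    """Linear-sorption (Kd) retardation factor R = 1 + rho_b * Kd / theta.

    rho_b : bulk density (kg/L)
    theta : volumetric water content (-)
    kd    : distribution coefficient (L/kg)
    """
    return 1.0 + rho_b * kd / theta

# The ad hoc simplification collapses all geochemistry into one number:
# the solute front moves at v / R regardless of pH, competing ions,
# redox state, etc., which is exactly what reaction-based models avoid.
v = 0.1  # pore-water velocity (m/day), illustrative
for kd in (0.0, 0.5, 5.0):
    R = retardation_factor(rho_b=1.6, theta=0.3, kd=kd)
    print(f"Kd = {kd:>4} L/kg -> R = {R:6.2f}, front velocity = {v / R:.4f} m/day")
```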

  4. Innovative mathematical modeling in environmental remediation

    International Nuclear Information System (INIS)

    Yeh, Gour T.; Gwo, Jin Ping; Siegel, Malcolm D.; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steven B.

    2013-01-01

    There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has mainly been used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models

  5. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted: a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented using object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened according to a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations for the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional list. An iterator loop initiates the integration tact of all the objects in the list; every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the proposed method features easy enhancement, code reuse, and high reliability
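
    The architecture described above (a list of objects, each performing one Runge-Kutta "tact" on request, with every fourth tact advancing the step) can be sketched as follows. This is an illustrative Python reconstruction under stated assumptions, not the authors' code; the single relaxing cavity and its parameters are hypothetical.

```python
# Each element lives in one list and performs a "tact" (one RK4 stage)
# on request; four tacts complete one integration step.
class Element:
    def __init__(self, y0):
        self.y = y0          # state at the start of the step
        self.x = y0          # working state used to evaluate derivatives
        self.k = []          # stage derivatives k1..k4
        self.shutdown = False

    def deriv(self):         # overridden by concrete elements
        raise NotImplementedError

    def tact(self, stage, h):
        """Perform one RK4 stage: store k_i and update the working state."""
        k = self.deriv()
        self.k.append(k)
        if stage in (0, 1):
            self.x = self.y + 0.5 * h * k
        elif stage == 2:
            self.x = self.y + h * k
        else:                # stage 3: combine k1..k4 and advance
            k1, k2, k3, k4 = self.k
            self.y += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            self.x = self.y
            self.k = []

class Cavity(Element):
    """Flow cavity with first-order pressure relaxation dp/dt = (ps - p)/tau."""
    def __init__(self, p0, ps, tau):
        super().__init__(p0)
        self.ps, self.tau = ps, tau

    def deriv(self):
        return (self.ps - self.x) / self.tau

elements = [Cavity(p0=1.0e5, ps=5.0e5, tau=0.2)]   # hypothetical values
h, t = 1e-3, 0.0
while t < 1.0 and not any(e.shutdown for e in elements):
    for stage in range(4):          # every fourth tact advances the step
        for e in elements:
            e.tact(stage, h)
    t += h
print(f"p(1 s) = {elements[0].y:.0f} Pa")
```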

  7. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  8. Integrating UML, the Q-model and a Multi-Agent Approach in Process Specifications and Behavioural Models of Organisations

    Directory of Open Access Journals (Sweden)

    Raul Savimaa

    2005-08-01

    Full Text Available Efficient estimation and representation of an organisation's behaviour requires specification of business processes and modelling of actors' behaviour. Therefore the existing classical approaches that concentrate only on planned processes are not suitable and an approach that integrates process specifications with behavioural models of actors should be used instead. The present research indicates that a suitable approach should be based on interactive computing. This paper examines the integration of UML diagrams for process specifications, the Q-model specifications for modelling timing criteria of existing and planned processes and a multi-agent approach for simulating non-deterministic behaviour of human actors in an organisation. The corresponding original methodology is introduced and some of its applications as case studies are reviewed.

  9. Quasiparticle Approach to Molecules Interacting with Quantum Solvents.

    Science.gov (United States)

    Lemeshko, Mikhail

    2017-03-03

    Understanding the behavior of molecules interacting with superfluid helium represents a formidable challenge and, in general, requires approaches relying on large-scale numerical simulations. Here, we demonstrate that experimental data collected over the last 20 years provide evidence that molecules immersed in superfluid helium form recently predicted angulon quasiparticles [Phys. Rev. Lett. 114, 203001 (2015), doi:10.1103/PhysRevLett.114.203001]. Most important, casting the many-body problem in terms of angulons amounts to a drastic simplification and yields effective molecular moments of inertia as straightforward analytic solutions of a simple microscopic Hamiltonian. The outcome of the angulon theory is in good agreement with experiment for a broad range of molecular impurities, from heavy to medium-mass to light species. These results pave the way to understanding molecular rotation in liquid and crystalline phases in terms of the angulon quasiparticle.

  10. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  11. A comprehensive dynamic modeling approach for giant magnetostrictive material actuators

    International Nuclear Information System (INIS)

    Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi

    2013-01-01

    In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of its nonlinear electromagnetic behavior, the magnetostrictive effect and the frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux in the electromagnetic part to force and displacement in the mechanical part in a lumped-parameter form. With this modeling approach, the nonlinear hysteresis effect of the GMMA, which appears only in the electrical part, is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model describing the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant representing the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
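
    The static hysteresis module mentioned above, a discrete Prandtl–Ishlinskii operator, is a weighted sum of play (backlash) operators. The sketch below shows the standard operator form; the thresholds and weights are illustrative stand-ins for coefficients that the paper identifies by constrained quadratic optimization.

```python
import numpy as np

def prandtl_ishlinskii(u, radii, weights):
    """Discrete Prandtl-Ishlinskii hysteresis operator.

    Output is a weighted sum of play operators with thresholds `radii`;
    this is the kind of static hysteresis module that gets cascaded
    with a linear dynamic plant.
    """
    y = np.zeros(len(radii))                     # play-operator states
    out = np.empty_like(u)
    for t, ut in enumerate(u):
        # Play (backlash) update: y = max(u - r, min(u + r, y_prev)).
        y = np.clip(y, ut - radii, ut + radii)
        out[t] = np.dot(weights, y)
    return out

# Demo on a decaying sine input (illustrative parameters only).
u = np.sin(np.linspace(0, 4 * np.pi, 400)) * np.linspace(1, 0.3, 400)
radii = np.array([0.0, 0.1, 0.2, 0.4])
weights = np.array([1.0, 0.6, 0.4, 0.2])  # would be fit by constrained QP
h = prandtl_ishlinskii(u, radii, weights)
print(h.min(), h.max())
```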

  12. An integrated approach for integrated intelligent instrumentation and control system (I{sup 3}CS)

    Energy Technology Data Exchange (ETDEWEB)

    Jung, C H; Kim, J T; Kwon, K C [Korea Atomic Energy Research Inst., Yusong, Taejon (Korea, Republic of)

    1997-07-01

    To guarantee public safety, nuclear power plants should be designed to reduce the operator intervention that results in operating human errors, to identify the process states in transients, and to aid operators in making decisions about their tasks and guide their actions. For this purpose, the MMIS (Man-Machine Interface System) in NPPs should follow an integrated top-down approach tightly focused on function-based task analysis, incorporating advanced digital technology, operator support functions, and so on. The advanced I and C research team at KAERI has embarked on developing an Integrated Intelligent Instrumentation and Control System (I{sup 3}CS) for Korea's next-generation nuclear power plants. I{sup 3}CS bases the integrated top-down approach on function-based task analysis, modern digital technology, standardization and simplification, availability and reliability, and protection of investment. (author). 4 refs, 6 figs.

  13. Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior

    Science.gov (United States)

    Lynch, Annette; Fleming, Wm. Michael

    2005-01-01

    Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…

  14. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
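
    A generic way to combine two model-based clusterings, sketched below, weights each model's membership probabilities by the standard BIC approximation to posterior model probabilities. This is an illustration of the model-averaging idea, not the paper's exact weighting scheme, and it assumes the classes have been aligned across models.

```python
import numpy as np

def bma_memberships(memberships, bics):
    """Average cluster-membership probabilities across models.

    memberships : list of (n_subjects, n_classes) probability matrices,
                  one per model (e.g. latent class analysis, grade of
                  membership), with classes aligned across models
    bics        : BIC value of each fitted model

    Posterior model weights use the standard BIC approximation
    w_k proportional to exp(-BIC_k / 2).
    """
    bics = np.asarray(bics, dtype=float)
    w = np.exp(-(bics - bics.min()) / 2.0)   # shift by min for stability
    w /= w.sum()
    averaged = sum(wk * m for wk, m in zip(w, memberships))
    return averaged, w

# Toy two-subject, two-class example with hypothetical BICs.
lca = np.array([[0.9, 0.1], [0.2, 0.8]])
gom = np.array([[0.7, 0.3], [0.4, 0.6]])
avg, weights = bma_memberships([lca, gom], bics=[1012.3, 1015.9])
print(weights)
print(avg)
```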

  15. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
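
    The hybrid pipeline's first stage, a GARCH volatility forecast feeding a pricer, can be sketched as below. The GARCH(1,1) coefficients here are fixed illustrative values rather than maximum-likelihood estimates, the return series is synthetic, and the Black-Scholes pricer stands in for the parametric benchmark; the neural network and neuro-fuzzy stages are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def garch11_vol(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) volatility forecast from a return series:
    var_t = omega + alpha * r_{t-1}^2 + beta * var_{t-1}."""
    var = np.var(returns)              # initialise at the sample variance
    for r in returns:
        var = omega + alpha * r**2 + beta * var
    return np.sqrt(var)

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes European call price (the parametric benchmark)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)
daily = rng.normal(0, 0.01, 250)                  # toy daily return history
sigma_ann = garch11_vol(daily, 1e-6, 0.08, 0.90) * np.sqrt(252)
print(black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=sigma_ann))
```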

  16. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that the conventional fracture mechanics method based on single-parameter criteria is not adequate to model fracture processes, because of the effects of the size and geometry of the flaw and of the loading type and rate on the fracture resistance behaviour of a structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real-life components. So there is a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One solution to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA333 Gr.6) in this report. Each method has been studied and reported in a separate section; the report is divided into five sections. Section I gives a brief review of the fundamentals of the fracture process. Section II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been made among the different models, and the dependence of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section III. Section IV deals with modelling of the ductile fracture process by locally coupled models. Section V deals with modelling of the cleavage fracture process by the Beremin model, which is based on Weibull's
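
    The two local-approach criteria named above have compact standard forms, sketched below: the Rice-Tracey void-growth law for ductile fracture and the Beremin Weibull-stress model for cleavage. The numbers supplied are illustrative only, not the report's SA333 Gr.6 calibration.

```python
import numpy as np

def rice_tracey_growth(triax, d_eps):
    """Rice-Tracey void growth: d(ln R) = 0.283 * exp(1.5 * T) * d_eps_p.

    triax : stress triaxiality sigma_m / sigma_eq per strain increment
    d_eps : equivalent plastic strain increments
    Ductile failure is predicted when R/R0 reaches a critical value.
    """
    return np.exp(np.sum(0.283 * np.exp(1.5 * np.asarray(triax)) * d_eps))

def beremin_failure_prob(sigma_1, dV, m, sigma_u, V0):
    """Beremin cleavage model via the Weibull stress over the plastic zone:

        sigma_w = (sum(sigma_1^m * dV) / V0)^(1/m)
        P_f     = 1 - exp(-(sigma_w / sigma_u)^m)
    """
    sigma_w = (np.sum(np.asarray(sigma_1) ** m * np.asarray(dV)) / V0) ** (1.0 / m)
    return 1.0 - np.exp(-((sigma_w / sigma_u) ** m))

# Illustrative numbers only.
print(rice_tracey_growth(triax=[1.2] * 10, d_eps=np.full(10, 0.02)))
print(beremin_failure_prob(sigma_1=[1800e6, 1900e6], dV=[1e-9, 2e-9],
                           m=22, sigma_u=2600e6, V0=1e-12))
```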

  17. Repetitive Identification of Structural Systems Using a Nonlinear Model Parameter Refinement Approach

    Directory of Open Access Journals (Sweden)

    Jeng-Wen Lin

    2009-01-01

    Full Text Available This paper proposes a statistical-confidence-interval-based nonlinear model parameter refinement approach for the health monitoring of structural systems subjected to seismic excitations. The developed model refinement approach uses the 95% confidence intervals of the estimated structural parameters to determine their statistical significance in a least-squares regression setting. When a parameter's confidence interval covers zero, it is statistically justifiable to truncate that parameter. The remaining parameters repeatedly undergo this parameter-sifting process for model refinement until the statistical significance of the parameters cannot be further improved. This newly developed model refinement approach is implemented for the series models of multivariable polynomial expansions (the linear, the Taylor series, and the power series model), leading to a more accurate identification as well as a more controllable design for system vibration control. Because the statistical-regression-based model refinement approach is intrinsically used to process a "batch" of data and obtain an ensemble-average estimation such as the structural stiffness, the Kalman filter and one of its extended versions are introduced into the refined power series model for structural health monitoring.
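
    The sifting step described above (drop any regressor whose 95% confidence interval covers zero, refit, repeat) can be sketched in a few lines of least-squares algebra. This is an illustrative reconstruction on synthetic data, not the paper's structural model; it assumes at least one genuinely significant regressor survives.

```python
import numpy as np

def sift_parameters(X, y, z95=1.96):
    """Repeatedly drop the least significant regressor while its 95%
    confidence interval covers zero, then return the survivors."""
    keep = list(range(X.shape[1]))
    while True:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        dof = len(y) - len(keep)
        s2 = np.sum((y - Xk @ beta) ** 2) / dof          # residual variance
        se = np.sqrt(np.diag(s2 * np.linalg.inv(Xk.T @ Xk)))
        worst = int(np.argmin(np.abs(beta) / se))        # smallest |t|
        if np.abs(beta[worst]) / se[worst] >= z95:
            return keep, beta                            # all significant
        del keep[worst]                                  # CI covers zero

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.5, 200)  # cols 1, 3 spurious
idx, beta = sift_parameters(X, y)
print(idx, beta)   # expect columns 0 and 2 to survive
```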

  18. Towards a Semantic E-Learning Theory by Using a Modelling Approach

    Science.gov (United States)

    Yli-Luoma, Pertti V. J.; Naeve, Ambjorn

    2006-01-01

    In the present study, a semantic perspective on e-learning theory is advanced and a modelling approach is used. This modelling approach towards the new learning theory is based on the four SECI phases of knowledge conversion: Socialisation, Externalisation, Combination and Internalisation, introduced by Nonaka in 1994, and involving two levels of…

  19. Transverse momentum correlations of quarks in recursive jet models

    Science.gov (United States)

    Artru, X.; Belghobsi, Z.; Redouane-Salah, E.

    2016-08-01

    In the symmetric string fragmentation recipe adopted by PYTHIA for jet simulations, the transverse momenta of successive quarks are uncorrelated. This is a simplification with no theoretical basis. Transverse momentum correlations are naturally expected, for instance, in a covariant multiperipheral model of quark hadronization. We propose a simple recipe of string fragmentation which leads to such correlations. The definition of the jet axis and its relation with the primordial transverse momentum of the quark are also discussed.

  20. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategies with perfect drug adherence, and also tries to examine the same issue from different angles, with various mathematical models and computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, helping them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  1. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology.

  2. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.

  3. A simplified model for tritium permeation transient predictions when trapping is active

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1994-01-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes, such as implantation, recombination, diffusion, trapping, and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. ((orig.))

  4. A simplified model for tritium permeation transient predictions when trapping is active

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R. (Fusion Safety Program, Idaho National Engineering Laboratory, P.O. Box 1625, Idaho Falls, ID 83415 (United States))

    1994-09-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes, such as implantation, recombination, diffusion, trapping, and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. ((orig.))

  5. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
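
    The core of a Gaussian Bayesian merge is per-cell inverse-variance weighting, sketched below. This is a minimal illustration assuming known per-DSM height variances; the paper's entropy-based roof prior and its full probabilistic model are not reproduced, and the grids and noise levels are synthetic.

```python
import numpy as np

def fuse_dsms(z1, var1, z2, var2):
    """Per-cell Gaussian (inverse-variance) fusion of two DSM height grids.

    With Gaussian likelihoods the posterior mean is the precision-weighted
    average, and the posterior variance is the inverse summed precision.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Toy grids: one DSM noisier than the other.
z_true = np.full((3, 3), 40.0)                    # flat roof at 40 m
rng = np.random.default_rng(3)
z_a = z_true + rng.normal(0, 0.5, z_true.shape)   # lower-noise DSM
z_b = z_true + rng.normal(0, 1.0, z_true.shape)   # higher-noise DSM
fused, fvar = fuse_dsms(z_a, 0.25, z_b, 1.0)
print(fused.round(2), fvar)
```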

  6. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  7. Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario

    Science.gov (United States)

    Tobias, Guillermo; Jesús García, Adrián

    2016-04-01

    The solar radiation pressure force is the largest orbital perturbation after gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches has been developed over the years for modelling this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, their main difference being the amount of a priori physical knowledge about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in extrapolating the in-orbit optical and thermal properties, perturbations of the nominal attitude law and aging of the satellite's surfaces, whereas the accuracy of empirical models strongly depends on the amount of tracking data used for deriving them, and their performance degrades as the area-to-mass ratio of the GNSS satellites increases, as happens for upcoming constellations such as BeiDou and Galileo. This paper proposes to use a basic box-wing model for Galileo, complemented with empirical parameters, based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing, representing the solar panel. The performance of the model is assessed for the GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one-year period. In order to assess the results, two different SRP models have been used: first, the proposed box-wing model and, second, the new CODE empirical model, ECOM2. The orbit performance of both models is assessed using Satellite Laser Ranging (SLR) measurements, together with an evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of
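
    A box-wing SRP model sums flat-plate contributions over the bus faces and the Sun-tracking panel. The sketch below uses the common flat-plate formula with absorbed plus specularly reflected photons only (the diffuse term is omitted for brevity); areas, mass and optical coefficients are illustrative, not Galileo metadata.

```python
import numpy as np

P_SUN = 4.56e-6  # solar radiation pressure at 1 AU (N/m^2)

def plate_srp_accel(area, mass, n_hat, s_hat, rho_spec):
    """SRP acceleration of one flat plate (absorption + specular
    reflection only).

    n_hat : unit outward normal of the plate
    s_hat : unit vector from satellite towards the Sun
    """
    cos_t = np.dot(n_hat, s_hat)
    if cos_t <= 0.0:                      # face not illuminated
        return np.zeros(3)
    f = -P_SUN * area * cos_t * ((1 - rho_spec) * s_hat
                                 + 2 * rho_spec * cos_t * n_hat)
    return f / mass

# Box-wing: sum over bus faces and the Sun-pointing solar panel.
s_hat = np.array([1.0, 0.0, 0.0])
faces = [
    # (area m^2, outward normal, specular reflectivity) -- illustrative
    (1.5, np.array([1.0, 0.0, 0.0]), 0.2),   # +X bus face
    (3.0, np.array([0.0, 0.0, 1.0]), 0.1),   # +Z bus face (not lit here)
    (10., s_hat.copy(),              0.3),   # solar wing tracks the Sun
]
a = sum(plate_srp_accel(A, 700.0, n, s_hat, r) for A, n, r in faces)
print(a)  # m/s^2; order 1e-7 for GNSS-like area-to-mass ratios
```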

  8. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
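
    The spirit of an a priori strategy comparison, scoring several candidate modelling strategies by likelihood before committing to one, can be sketched with cross-validated log-loss. This is an illustration on synthetic data using recent scikit-learn, not the paper's wrapper implementation or its five strategies.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Compare candidate logistic modelling strategies by held-out
# log-likelihood before choosing a final approach (illustrative).
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
strategies = {
    "unpenalised": LogisticRegression(penalty=None, max_iter=5000),
    "ridge (C=1)": LogisticRegression(C=1.0, max_iter=5000),
    "lasso (C=1)": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
}
for name, model in strategies.items():
    ll = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
    print(f"{name:12s} mean held-out log-likelihood: {ll:.4f}")
```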

  9. Modelling of the knowledge for monitoring expert systems in nuclear power plant safety

    International Nuclear Information System (INIS)

    Machado, Liana; Schirru, Roberto; Martinez, Aquilino S.

    1997-01-01

    Safety operation support systems for NPPs have faced problems that are difficult to solve throughout their development. This work presents possible solutions to such problems and contributes to enhancing the reliability and performance of such systems using Artificial Intelligence. Knowledge representation is central to this work, since it expresses the dependence between variables in a rather natural way and therefore makes the concepts of synchronism and concurrence intrinsic to the real-time approach. Other advantages are easier V and V processes and simplification of the system maintenance procedures. The inference process is carried out through rules that are generated from the knowledge base. These rules are loaded following a conflict-resolution strategy optimized for the real-time approach. The real application used to validate the model's efficiency consists of part of SICA (Integrated System of the Angra-1 Computers). The application results were very positive, reducing the quantity of conventional SICA software code. As for system performance, the knowledge structures and the conflict-resolution strategy adopted guaranteed not only the time control for inference but also a response time compatible with that required for power plant safety support. (author) 12 refs., 4 figs

  10. Numerical modeling of hydrodynamics and sediment transport—an integrated approach

    Science.gov (United States)

    Gic-Grusza, Gabriela; Dudkowska, Aleksandra

    2017-10-01

    Point-measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions, defined on the basis of 138 years of NOAA data, were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are innovative. The approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
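
    The final link of the chain, the Meyer-Peter and Müller bedload formula, is compact enough to sketch directly. The classic (unmodified) form is shown below with illustrative grain size and shear stress; the paper applies a modified version with site-specific inputs.

```python
import numpy as np

def mpm_bedload(tau_b, d50, rho_s=2650.0, rho=1000.0, g=9.81,
                theta_cr=0.047):
    """Meyer-Peter & Mueller (1948) bedload transport rate.

    tau_b : bed shear stress (Pa), e.g. a combined wave-current stress
    d50   : median grain diameter (m)
    Returns the volumetric transport rate per unit width (m^2/s).
    """
    s = rho_s / rho
    theta = tau_b / ((rho_s - rho) * g * d50)    # Shields parameter
    excess = max(theta - theta_cr, 0.0)          # no motion below threshold
    q_star = 8.0 * excess ** 1.5                 # dimensionless rate
    return q_star * np.sqrt((s - 1.0) * g * d50**3)

# Illustrative storm-condition value, not a Gulf of Gdansk result.
print(f"{mpm_bedload(tau_b=2.0, d50=0.25e-3):.3e} m^2/s")
```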

  11. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton's second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein's movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
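
    The role of the random Brownian force in a simple particle model can be illustrated with an overdamped Langevin step rule; every parameter below (drag, temperature, external force) is an assumed placeholder, not a value from the paper.

      # Overdamped Langevin sketch of a nanoscale particle; parameters assumed.
      import numpy as np

      kB, T = 1.380649e-23, 300.0   # Boltzmann constant (J/K), temperature (K)
      gamma = 5e-10                 # Stokes drag, kg/s (assumed, ~25 nm bead)
      D = kB * T / gamma            # Einstein relation
      dt, n = 1e-6, 10_000
      f_ext = 1e-12                 # constant external force, N (assumed)

      rng = np.random.default_rng(0)
      x = np.zeros(n)
      for i in range(1, n):
          noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
          x[i] = x[i - 1] + (f_ext / gamma) * dt + noise  # drift + random force

      print("net displacement:", x[-1], "m")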

  12. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  13. A single-equation study of US petroleum consumption: The role of model specification

    International Nuclear Information System (INIS)

    Jones, C.T.

    1993-01-01

    The price responsiveness of US petroleum consumption began to attract a great deal of attention following the unexpected and substantial oil price increases of 1973-74. There have been a number of large, multi-equation econometric studies of US energy demand since then which have focused primarily on estimating short run and long run price and income elasticities of individual energy resources (coal, oil, natural gas and electricity) for various consumer sectors (residential, industrial, commercial). Following these early multi-equation studies there have been several single-equation studies of aggregate US petroleum consumption. When choosing an economic model specification for a single-equation study of aggregate US petroleum consumption, an easily estimated model that will provide unbiased price and income elasticity estimates and yield accurate forecasts is needed. Using Hendry's general-to-simple specification search technique and annual data to obtain a restricted, data-acceptable simplification of a general ADL model yielded GNP and short run price elasticities near the consensus estimates, but a long run price elasticity substantially smaller than existing estimates. Comparisons with three other seemingly acceptable simple-to-general models showed that popular model specifications often involve untested, unacceptable parameter restrictions. These models may also demonstrate poorer forecasting performance. Based on these results, the general-to-simple approach appears to offer a more accurate methodology for generating superior forecast models of petroleum consumption and other energy use patterns

  14. A mathematical look at a physical power prediction model

    DEFF Research Database (Denmark)

    Landberg, L.

    1998-01-01

    This article takes a mathematical look at a physical model used to predict the power produced from wind farms. The reason is to see whether simple mathematical expressions can replace the original equations and to give guidelines as to where simplifications can be made and where they cannot....... The article shows that there is a linear dependence between the geostrophic wind and the local wind at the surface, but also that great care must be taken in the selection of the simple mathematical models, since physical dependences play a very important role, e.g. through the dependence of the turning...

  15. Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control

    KAUST Repository

    Elmetennani, Shahrazed

    2014-07-01

    This paper deals with the problem of controlling concentrated solar collectors, where the objective consists of making the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, which was introduced by the authors in [1], is evaluated in comparison with other methods in the literature. The proposed approximation is a low order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and allows the simplification of the control design. Simulation results show the interesting performance of the proposed controller.

  16. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach mainly bridges the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by a significant amount of moisture. This study implements the above cited approach for the prediction of fog occurrence and its onset time over Delhi. For this purpose, a high resolution weather research and forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool in improving fog predictions.
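
    A minimal sketch of how a multirule diagnostic can be applied to postprocessed model output is given below; the predictor set and threshold values are illustrative assumptions, not the rules calibrated in the study.

      # Toy multirule fog diagnostic on postprocessed model output;
      # thresholds are illustrative placeholders, not the paper's rules.
      def fog_likely(rh2m, wind10m, dewpoint_dep, cloud_base_m):
          rules = [
              rh2m >= 95.0,          # near-saturated surface air (%)
              wind10m <= 3.0,        # weak boundary-layer mixing (m/s)
              dewpoint_dep <= 1.0,   # small T - Td spread (K)
              cloud_base_m <= 50.0,  # very low condensation level (m)
          ]
          return sum(rules) >= 3     # require most rules to fire

      print(fog_likely(rh2m=97.0, wind10m=1.5, dewpoint_dep=0.5,
                       cloud_base_m=30.0))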

  17. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    Science.gov (United States)

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services and where a bundle of facilities, equipment, and human resources exist, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; and item analysis, Cronbach's alpha test, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationship between variables and factors was confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and the structural equation modeling (SEM) technique with the use of Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, respectively. Moderation, simplification, and substitution, among the other dimensions, have more weight on inherent safety, while minimization has the least weight, which could be due to its definition as minimizing the risk.

  18. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertaintie...

  19. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
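
    The emulator-based calibration idea can be sketched compactly: fit a Gaussian process to an ensemble of model runs, then run MCMC against the cheap emulator instead of the expensive physics model. The one-dimensional toy model, prior, and data below are assumptions for illustration.

      # Minimal emulator-based calibration sketch: GP fitted to an ensemble of
      # runs, random-walk Metropolis on the emulated posterior (toy setup).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(1)
      eta = lambda th: np.sin(3.0 * th) + th          # stand-in physics model
      theta_design = np.linspace(0.0, 2.0, 30)[:, None]
      emulator = GaussianProcessRegressor(kernel=RBF(0.5)).fit(
          theta_design, eta(theta_design).ravel())

      y_obs = eta(np.array([[1.2]])).item() + 0.05    # one noisy measurement
      sigma = 0.1

      def log_post(th):
          if not 0.0 <= th <= 2.0:                    # uniform prior on [0, 2]
              return -np.inf
          mu = emulator.predict(np.array([[th]]))[0]
          return -0.5 * ((y_obs - mu) / sigma) ** 2

      theta, samples = 1.0, []
      for _ in range(5000):                           # random-walk Metropolis
          prop = theta + 0.1 * rng.standard_normal()
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          samples.append(theta)

      print("posterior mean:", np.mean(samples[1000:]))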

  20. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  1. A survey on computational intelligence approaches for predictive modeling in prostate cancer

    OpenAIRE

    Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG

    2017-01-01

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an...

  2. A Probabilistic Graphical Model to Detect Chromosomal Domains

    Science.gov (United States)

    Heermann, Dieter; Hofmann, Andreas; Weber, Eva

    To understand the nature of a cell, one needs to understand the structure of its genome. For this purpose, experimental techniques such as Hi-C detecting chromosomal contacts are used to probe the three-dimensional genomic structure. These experiments yield topological information, consistently showing a hierarchical subdivision of the genome into self-interacting domains across many organisms. Current methods for detecting these domains using the Hi-C contact matrix, i.e. a doubly-stochastic matrix, are mostly based on the assumption that the domains are distinct, thus non-overlapping. To overcome this simplification and to unravel a possible nested domain structure, we developed a probabilistic graphical model that makes no a priori assumptions on the domain structure. Within this approach, the Hi-C contact matrix is analyzed using an Ising-like probabilistic graphical model whose coupling constant is proportional to each lattice point (entry in the contact matrix). The results show clear boundaries between identified domains and the background. These domain boundaries are dependent on the coupling constant, so that one matrix yields several clusters of different sizes, which show the self-interaction of the genome on different scales. This work was supported by a Grant from the International Human Frontier Science Program Organization (RGP0014/2014).

  3. A modelling approach for improved implementation of information technology in manufacturing systems

    DEFF Research Database (Denmark)

    Larsen, Michael Holm; Langer, Gilad; Kirkby, Lars Phillip

    2000-01-01

    The paper presents a modelling approach, which is based on the multiple view perspective of Soft Systems Methodology and an encapsulation of these perspectives into an object orientated model. The approach provides a structured procedure for putting theoretical abstractions of a new production concept into practice. The paper demonstrates the use of the approach in a practical case, which involves modelling of the shop floor activities and control system at the aluminium parts production at a Danish manufacturer of state-of-the-art audio-video equipment and telephones.

  4. Evaluation of various modelling approaches in flood routing simulation and flood area mapping

    Science.gov (United States)

    Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe

    2016-04-01

    An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional hydraulic-hydrodynamic models (1D), 2-dimensional hydraulic-hydrodynamic models (2D) and the coupled 1D/2D. The 1D hydraulic-hydrodynamic models used were: HECRAS, MIKE11, LISFLOOD, XPSTORM. The 2D hydraulic-hydrodynamic models used were: MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were: HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform), XPSTORM (1D/2D). The validation of flood extent was achieved with the use of 2×2 contingency tables between the simulated and observed flooded areas for an extreme historical flash flood event. The skill score Critical Success Index was used in the validation process. The modelling approaches were also evaluated for simulation time and required computing power. The methodology was implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying sensitivity analysis with the use of different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
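
    The skill score used here is simple to compute from the 2×2 contingency table; a minimal helper is shown below with illustrative cell counts.

      # Critical Success Index from a 2x2 flood-extent contingency table;
      # the counts are illustrative placeholders.
      def critical_success_index(hits, false_alarms, misses):
          """CSI = hits / (hits + false alarms + misses); 1 is perfect."""
          return hits / float(hits + false_alarms + misses)

      # e.g. simulated vs observed flooded cells for one event
      print(critical_success_index(hits=820, false_alarms=130, misses=95))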

  5. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, focus is placed on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally the simulation results are compared with the measurement data.

  6. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.

  7. A generalized approach for historical mock-up acquisition and data modelling: Towards historically enriched 3D city models

    Science.gov (United States)

    Hervy, B.; Billen, R.; Laroche, F.; Carré, C.; Servières, M.; Van Ruymbeke, M.; Tourre, V.; Delfosse, V.; Kerouanton, J.-L.

    2012-10-01

    Museums are filled with hidden secrets. One of those secrets lies behind historical mock-ups, whose significance goes far beyond a simple representation of a city. We face the challenge of designing, storing and showing knowledge related to these mock-ups in order to explain their historical value. Over the last few years, several mock-up digitalisation projects have been realised. Two of them, Nantes 1900 and Virtual Leodium, propose innovative approaches that present a lot of similarities. This paper presents a framework to go one step further by analysing their data modelling processes and extracting what could be a generalized approach to build a numerical mock-up and the associated knowledge database. Geometry modelling and knowledge modelling influence each other and are conducted in a parallel process. Our generalized approach describes a global overview of what a data modelling process can be. Our next goal is obviously to apply this global approach to other historical mock-ups, but we also think about applying it to other 3D objects that need to embed semantic data, and approaching historically enriched 3D city models.

  8. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  9. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    Science.gov (United States)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are very relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in the common practice of Information Systems Development, business modeling activities are still of a mostly empirical nature. In this paper, basic aspects of the approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics for Business Vocabularies and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.

  10. A distributed delay approach for modeling delayed outcomes in pharmacokinetics and pharmacodynamics studies.

    Science.gov (United States)

    Hu, Shuhua; Dunlavey, Michael; Guzy, Serge; Teuscher, Nathan

    2018-04-01

    A distributed delay approach was proposed in this paper to model delayed outcomes in pharmacokinetics and pharmacodynamics studies. This approach was shown to be general enough to incorporate a wide array of pharmacokinetic and pharmacodynamic models as special cases including transit compartment models, effect compartment models, typical absorption models (either zero-order or first-order absorption), and a number of atypical (or irregular) absorption models (e.g., parallel first-order, mixed first-order and zero-order, inverse Gaussian, and Weibull absorption models). Real-life examples were given to demonstrate how to implement distributed delays in Phoenix ® NLME™ 8.0, and to numerically show the advantages of the distributed delay approach over the traditional methods.
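
    A distributed delay can be sketched numerically as the convolution of the input rate with a delay-time density; with a gamma (Erlang) kernel this reduces to the familiar transit-compartment chain. The input function and kernel parameters below are illustrative, and this is not the Phoenix NLME implementation.

      # Distributed delay as a convolution with a gamma delay-time density;
      # input and parameters are illustrative toys.
      import numpy as np
      from scipy.stats import gamma

      t = np.linspace(0.0, 48.0, 481)               # h
      dt = t[1] - t[0]
      input_rate = np.exp(-0.3 * t)                 # first-order absorption input

      shape, scale = 4.0, 1.5                       # Erlang-like delay (4 stages)
      kernel = gamma.pdf(t, a=shape, scale=scale)   # delay-time density

      # delayed outcome = (input * kernel)(t), truncated to the window
      delayed = np.convolve(input_rate, kernel)[: t.size] * dt
      print("peak of delayed response at t ≈", t[np.argmax(delayed)], "h")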

  11. Provincial-level spatial statistical modelling of the change in per capita disposable Family Income in Spain, 1975-1983

    Directory of Open Access Journals (Sweden)

    Daniel A. Griffith

    1998-02-01

    Full Text Available Computational simplifications for a space-time autoregressive response model specification are explored for the change in Spain's per capita disposable family income between 1975 and 1983. The geographic resolution for this analysis is the provincial partitioning of part of the Iberian peninsula into Spain's 47 coterminous provinces coupled with its 3 island-cluster provinces. In keeping with the Paelinckian tradition of spatial econometrics, exploration focuses on both new spatial econometric estimators and model specifications that emphasize the capturing of spatial dependency effects in the mean response term. One goal of this analysis is to differentiate between spatial, temporal, and space-time interaction information contained in the per capita disposable family income data. A second objective of the application is to illustrate the utility of extending computational simplifications from the spatial to the space-time domain. And a third purpose is to gain some substantive insights into the economic development of one country in a changing Europe. A serendipitous outcome of this investigation is a detailed analysis of locational information latent in Spain's regionally disaggregated per capita disposable family income.

  12. A novel approach for modelling complex maintenance systems using discrete event simulation

    International Nuclear Information System (INIS)

    Alrabghi, Abdullah; Tiwari, Ashutosh

    2016-01-01

    Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach utilises the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance such as production and spare parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.
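
    The event-queue mechanism at the heart of the approach can be illustrated with a minimal heapq-based DES of a two-unit system under corrective maintenance; the failure and repair rates are assumed placeholders.

      # Minimal heapq-based DES: two units with corrective maintenance.
      import heapq, random

      random.seed(0)
      T_END, MTBF, MTTR = 10_000.0, 400.0, 25.0
      events, downtime = [], 0.0

      for unit in range(2):                              # schedule first failures
          heapq.heappush(events, (random.expovariate(1 / MTBF), "fail", unit))

      while events:
          t, kind, unit = heapq.heappop(events)
          if t > T_END:
              break
          if kind == "fail":
              repair_time = random.expovariate(1 / MTTR)
              downtime += repair_time
              heapq.heappush(events, (t + repair_time, "repaired", unit))
          else:                                          # schedule next failure
              heapq.heappush(events,
                             (t + random.expovariate(1 / MTBF), "fail", unit))

      print(f"availability ≈ {1 - downtime / (2 * T_END):.4f}")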

  13. A variational approach to chiral quark models

    International Nuclear Information System (INIS)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.

    1987-01-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is shown to be closely tied to the chiral symmetry breaking radius. (author)

  14. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  15. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  16. Modeling and simulation of pressurizer dynamic process in PWR nuclear power plant

    International Nuclear Information System (INIS)

    Ma Jin; Liu Changliang; Li Shu'na

    2010-01-01

    By analysis of the actual operating characteristics of the pressurizer in a pressurized water reactor (PWR) nuclear power plant, and based on some reasonable simplifications and basic assumptions, the mass and energy conservation equations for the pressurizer's steam region and liquid region are set up. The purpose of this paper is to build a pressurizer model with two non-equilibrium regions. The water level and pressure control system of the pressurizer is formed through model encapsulation. Dynamic simulation curves of the main parameters are also shown. Finally, comparisons between the theoretical analysis and simulation results show that the two-region non-equilibrium pressurizer model is reasonable. (authors)
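
    A heavily simplified sketch of two-region pressurizer mass balances is given below, assuming saturation conditions and first-order flashing/condensation terms; all symbols and coefficients are illustrative assumptions, not the paper's equations.

      # Toy two-region (steam/liquid) pressurizer mass balance; all values
      # are illustrative assumptions, not the paper's model.
      import numpy as np
      from scipy.integrate import solve_ivp

      V_TOTAL = 40.0      # pressurizer volume, m^3 (assumed)
      RHO_WATER = 600.0   # saturated liquid density, kg/m^3 (assumed)

      def rhs(t, m, m_surge, m_spray, k_flash, k_cond):
          """m = [steam mass, liquid mass]; first-order flashing/condensation."""
          m_s, m_w = m
          flash = k_flash * m_w      # liquid flashing into the steam region
          cond = k_cond * m_s        # steam condensed by spray and heat losses
          return [flash - cond, m_surge + m_spray - flash + cond]

      sol = solve_ivp(rhs, (0.0, 100.0), [500.0, 15000.0],
                      args=(2.0, 0.5, 1e-4, 2e-3))
      level_frac = sol.y[1, -1] / RHO_WATER / V_TOTAL  # crude collapsed level
      print(f"final level fraction ≈ {level_frac:.2f}")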

  17. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  18. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  19. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
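
    For a black-box model, a Sobol-style global sensitivity analysis can be set up in a few lines with the SALib package (API as of SALib 1.x); the three-parameter toy function below stands in for a morphogenesis simulation, and the parameter names are assumed.

      # Hedged sketch of a Sobol global sensitivity analysis with SALib;
      # the toy model and parameter names are illustrative assumptions.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["adhesion", "chemotaxis", "stiffness"],  # assumed inputs
          "bounds": [[0.0, 1.0]] * 3,
      }

      X = saltelli.sample(problem, 1024)                 # N*(2D+2) design rows
      model = lambda x: x[:, 0] + 2.0 * x[:, 1] * x[:, 2]  # toy scalar output
      Si = sobol.analyze(problem, model(X))

      print("first-order indices:", np.round(Si["S1"], 3))
      print("total-order indices:", np.round(Si["ST"], 3))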

  20. Physics-based distributed snow models in the operational arena: Current and future challenges

    Science.gov (United States)

    Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.

    2017-12-01

    The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include the WSL-SLF's across Switzerland, ASO's in California, and the USDA-ARS's in Idaho. While the physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations, however, are less conspicuous amid the assumption that, being founded on physical principles, these models require little to no calibration. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, show the need for advanced and spatially flexible methods and parameterizations, and invite the community to open dialogue and future collaborations to further modeling capabilities.

  1. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  2. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
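
    The retrieval step of a vector-model-supported workflow can be sketched as a nearest-neighbour search over feature vectors; the features, database, and stored parameters below are illustrative stand-ins for the DICOM-derived quantities used in the study.

      # Toy vector-model case retrieval by cosine similarity; features and
      # database are illustrative stand-ins.
      import numpy as np

      def cosine_similarity(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      # rows: normalised structural/physiologic features of prior cases
      reference_db = np.random.default_rng(0).random((100, 5))
      ref_parameters = [f"plan_{i}" for i in range(100)]  # stored parameters

      test_case = np.array([0.4, 0.7, 0.2, 0.9, 0.5])
      best = max(range(len(reference_db)),
                 key=lambda i: cosine_similarity(reference_db[i], test_case))
      print("reuse planning parameters from:", ref_parameters[best])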

  3. A Geometric Processing Workflow for Transforming Reality-Based 3D Models into Volumetric Meshes Suitable for FEA

    Science.gov (United States)

    Gonizzi Barsanti, S.; Guidi, G.

    2017-02-01

    Conservation of Cultural Heritage is a key issue, and structural changes and damage can influence the mechanical behaviour of artefacts and buildings. The use of Finite Element Methods (FEM) for mechanical analysis is widely adopted in modelling stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-Splines (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the models are not suitable for a direct use in FEA: the mesh has in fact to be converted to a volumetric one, and the density has to be reduced, since the computational complexity of a FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The approach proposed is based on a wise use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces, suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency possible with the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, creating in the meantime a topology that is more favourable for the automatic NURBS conversion.
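
    As a hedged sketch of the mesh-simplification step (not the authors' retopology pipeline), quadric-error decimation of a dense scan can be performed with Open3D; the file names and triangle budget are assumptions.

      # Quadric-error decimation of a dense scan mesh with Open3D;
      # file names and the target triangle count are assumptions.
      import open3d as o3d

      mesh = o3d.io.read_triangle_mesh("scan_high_res.ply")  # dense scan mesh
      mesh.remove_duplicated_vertices()
      mesh.remove_degenerate_triangles()

      # reduce density while preserving shape, prior to NURBS conversion
      simplified = mesh.simplify_quadric_decimation(
          target_number_of_triangles=20_000)
      print(len(mesh.triangles), "->", len(simplified.triangles), "triangles")
      o3d.io.write_triangle_mesh("scan_simplified.ply", simplified)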

  4. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
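
    A small worked example of the book's central quantities: AIC and the small-sample AICc computed from residual sums of squares for competing least-squares models, with toy data.

      # AIC/AICc for least-squares models on toy data.
      import numpy as np

      def aic_ls(rss, n, k):
          """AIC for least squares: n*ln(RSS/n) + 2k."""
          aic = n * np.log(rss / n) + 2 * k
          aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
          return aic, aicc

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 30)
      y = 1.0 + 2.0 * x + rng.normal(0.0, 0.2, x.size)  # truth: straight line

      for degree in (1, 2, 5):                          # candidate polynomials
          coeffs = np.polyfit(x, y, degree)
          rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
          # k = degree+1 coefficients plus the residual variance
          aic, aicc = aic_ls(rss, x.size, degree + 2)
          print(f"degree {degree}: AIC={aic:.1f}, AICc={aicc:.1f}")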

  5. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. Model predictions were in good agreement with the experimental data. 8 refs., 17 figs.

  6. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  7. Policy harmonized approach for the EU agricultural sector modelling

    Directory of Open Access Journals (Sweden)

    G. SALPUTRA

    2008-12-01

    Full Text Available The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by producer price and policy price add-ons. Using the AGMEMOD model the impacts of two possible EU agricultural policy scenarios upon beef production have been analysed – full decoupling with a switch from the historical to the regional Single Payment scheme or, alternatively, with re-distribution of country direct payment envelopes via introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how “common” policies are implemented within analytical models. In the second section of the paper the AGMEMOD model’s structure is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper’s penultimate section, while section 5 concludes.

  8. The world state of orbital debris measurements and modeling

    Science.gov (United States)

    Johnson, Nicholas L.

    2004-02-01

    For more than 20 years orbital debris research around the world has been striving to obtain a sharper, more comprehensive picture of the near-Earth artificial satellite environment. Whereas significant progress has been achieved through better organized and funded programs and with the assistance of advancing technologies in both space surveillance sensors and computational capabilities, the potential of measurements and modeling of orbital debris has yet to be realized. Greater emphasis on a systems-level approach to the characterization and projection of the orbital debris environment would prove beneficial. On-going space surveillance activities, primarily from terrestrial-based facilities, are narrowing the uncertainties of the orbital debris population for objects greater than 2 mm in LEO and offer a better understanding of the GEO regime down to 10 cm diameter objects. In situ data collected in LEO is limited to a narrow range of altitudes and should be employed with great care. Orbital debris modeling efforts should place high priority on improving model fidelity, on clearly and completely delineating assumptions and simplifications, and on more thorough sensitivity studies. Most importantly, however, greater communications and cooperation between the measurements and modeling communities are essential for the efficient advancement of the field. The advent of the Inter-Agency Space Debris Coordination Committee (IADC) in 1993 has facilitated this exchange of data and modeling techniques. A joint goal of these communities should be the identification of new sources of orbital debris.

  9. Description of the power plant model BWR-plasim outlined for the Barsebaeck 2 plant

    International Nuclear Information System (INIS)

    Christensen, P. la Cour.

    1979-08-01

    A description is given of a BWR power plant model outlined for the Barsebaeck 2 plant with data placed at our disposal by the Swedish Power Company Sydkraft A/B. The basic equations are derived and simplifications discussed. The model is implemented with the simulation system DYSYS, which assures reliable solutions and easy programming. Emphasis has been placed on the model's versatility and flexibility, so new features are easy to incorporate. The model may be used for transient calculations for both normal plant conditions and abnormal occurrences, as well as for control system studies. (author)

  10. Φ -Ψ model for electrodynamics in dielectric media: exact quantisation in the Heisenberg representation

    Energy Technology Data Exchange (ETDEWEB)

    Belgiorno, Francesco [Politecnico di Milano, Dipartimento di Matematica, Milano (Italy); INdAM-GNFM, Milano (Italy); Cacciatori, Sergio L. [Universita dell' Insubria, Department of Science and High Technology, Como (Italy); INFN sezione di Milano, Milano (Italy); Dalla Piazza, Francesco [Universita ' ' La Sapienza' ' , Dipartimento di Matematica, Roma (Italy); Doronzo, Michele [Universita dell' Insubria, Department of Science and High Technology, Como (Italy)

    2016-06-15

    We investigate the quantisation in the Heisenberg representation of a model which represents a simplification of the Hopfield model for dielectric media, where the electromagnetic field is replaced by a scalar field φ and the role of the polarisation field is played by a further scalar field ψ. The model, which is quadratic in the fields, is still characterised by a non-trivial physical content, as the physical particles correspond to the polaritons of the standard Hopfield model of condensed matter physics. Causality is also taken into account and a discussion of the standard interaction representation is also considered. (orig.)

  11. A simplified model for tritium permeation transient predictions when trapping is active

    Science.gov (United States)

    Longhurst, G. R.

    1994-09-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
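
    One common simplification for transport with trapping is an effective diffusivity with a single trap type in local (Oriani-type) equilibrium; the sketch below uses that textbook dilute-limit form with assumed material parameters, and it is not the report's interpolating functions.

      # Dilute-limit effective diffusivity with one trap type in equilibrium;
      # parameters are illustrative, not from the report or TMAP4.
      import numpy as np

      def d_eff(d_lattice, trap_site_frac, e_trap_eV, T):
          """Effective diffusivity reduced by equilibrium trapping."""
          kT = 8.617e-5 * T                                   # eV
          retard = 1.0 + trap_site_frac * np.exp(e_trap_eV / kT)
          return d_lattice / retard

      # assumed Arrhenius lattice diffusivity at 600 K, m^2/s
      D = 1e-9 * np.exp(-0.2 / (8.617e-5 * 600.0))
      print("retardation D/D_eff =", D / d_eff(D, 1e-4, 1.0, 600.0))
      # slab breakthrough time then scales roughly as L**2 / (6 * D_eff)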

  12. Initial assessment of a multi-model approach to spring flood forecasting in Sweden

    Science.gov (United States)

    Olsson, J.; Uvo, C. B.; Foster, K.; Yang, W.

    2015-06-01

    Hydropower is a major energy source in Sweden and proper reservoir management prior to the spring flood onset is crucial for optimal production. This requires useful forecasts of the accumulated discharge in the spring flood period (i.e. the spring-flood volume, SFV). Today's SFV forecasts are generated using a model-based climatological ensemble approach, where time series of precipitation and temperature from historical years are used to force a calibrated and initialised set-up of the HBV model. In this study, a number of new approaches to spring flood forecasting, that reflect the latest developments with respect to analysis and modelling on seasonal time scales, are presented and evaluated. Three main approaches, represented by specific methods, are evaluated in SFV hindcasts for three main Swedish rivers over a 10-year period with lead times between 0 and 4 months. In the first approach, historically analogue years with respect to the climate in the period preceding the spring flood are identified and used to compose a reduced ensemble. In the second, seasonal meteorological ensemble forecasts are used to drive the HBV model over the spring flood period. In the third approach, statistical relationships between SFV and the large-sale atmospheric circulation are used to build forecast models. None of the new approaches consistently outperform the climatological ensemble approach, but for specific locations and lead times improvements of 20-30 % are found. When combining all forecasts in a weighted multi-model approach, a mean improvement over all locations and lead times of nearly 10 % was indicated. This demonstrates the potential of the approach and further development and optimisation into an operational system is ongoing.
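
    The weighted multi-model combination can be sketched with inverse-MSE weights derived from hindcast errors; the RMSE values and forecasts below are invented for illustration.

      # Inverse-MSE weighted multi-model SFV combination; numbers invented.
      rmse = {"climatology": 95.0, "analogue": 88.0,
              "seasonal-met": 102.0, "statistical": 90.0}     # hindcast RMSE
      forecasts = {"climatology": 510.0, "analogue": 470.0,
                   "seasonal-met": 540.0, "statistical": 480.0}  # SFV, Mm^3

      w = {k: 1.0 / r ** 2 for k, r in rmse.items()}          # inverse-MSE weights
      combined = sum(w[k] * forecasts[k] for k in w) / sum(w.values())
      print(f"weighted multi-model SFV: {combined:.1f} Mm^3")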

  13. Model-independent approach for dark matter phenomenology

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on a model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...

  14. Model-independent approach for dark matter phenomenology ...

    Indian Academy of Sciences (India)

    Abstract. We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on a model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered ...

  15. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of Beer [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as with behaviour observed in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
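
    The two non-"intelligent" baselines mentioned are easy to state precisely. The sketch below generates both a pure random walk (independent headings) and a correlated random walk (each heading is a small perturbation of the previous one) in two dimensions; the step length and turning-angle parameters are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, step = 500, 1.0

    # Pure random walk: each heading is drawn uniformly, independent of the past.
    theta_rw = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    rw = np.cumsum(np.column_stack((np.cos(theta_rw), np.sin(theta_rw))) * step, axis=0)

    # Correlated random walk: heading = previous heading + small Gaussian turn.
    turns = rng.normal(0.0, 0.3, n_steps)   # turning-angle std in radians (placeholder)
    theta_crw = np.cumsum(turns)
    crw = np.cumsum(np.column_stack((np.cos(theta_crw), np.sin(theta_crw))) * step, axis=0)

    # Net displacement is systematically larger for the correlated walk, which is
    # one simple statistic used to compare such models against observed tracks.
    for name, path in (("random walk", rw), ("correlated walk", crw)):
        net = np.linalg.norm(path[-1])
        print(f"{name:16s} net displacement after {n_steps} steps: {net:7.1f}")
    ```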

  16. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have since been refined as the existing data, the scope of the pathways, and the magnitudes of the dose estimates were evaluated through scoping studies.

  17. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Criminal activity in Indonesia increases in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases make people feel unsafe. The risk of a population's exposure to crime is measured here by the number of cases reported to the police; the more cases reported in a region, the higher its level of crime. In this research, criminality in South Sulawesi, Indonesia is modelled with the population's exposure to crime risk as the dependent variable. The modelling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables are population density, the number of poor inhabitants, GDP per capita, unemployment, and the Human Development Index (HDI). The spatial regression analysis shows that there is no spatial dependence, in either the lag or the error structure, in South Sulawesi.
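
    For readers unfamiliar with these two specifications, the sketch below fits a SAR and an SEM model on synthetic lattice data using PySAL's spreg package. The data, spatial weights, and variable names are all placeholders, not the South Sulawesi data used in the study.

    ```python
    import numpy as np
    from libpysal.weights import lat2W
    from spreg import ML_Lag, ML_Error

    rng = np.random.default_rng(1)
    w = lat2W(7, 7)       # rook-contiguity weights on a 7x7 lattice (49 regions)
    w.transform = "r"     # row-standardise, as is conventional

    n = w.n
    X = rng.standard_normal((n, 3))  # placeholder covariates (e.g. density, poverty, HDI)
    y = (1.0 + X @ np.array([0.5, -0.3, 0.2]) + rng.standard_normal(n)).reshape(-1, 1)

    # Spatial Autoregressive (lag) model: y = rho*W*y + X*beta + e
    sar = ML_Lag(y, X, w, name_y="crime_risk", name_x=["x1", "x2", "x3"])
    print("SAR rho   :", round(float(sar.rho), 3))

    # Spatial Error Model: y = X*beta + u,  u = lambda*W*u + e
    sem = ML_Error(y, X, w, name_y="crime_risk", name_x=["x1", "x2", "x3"])
    print("SEM lambda:", round(float(sem.lam), 3))
    ```

    A finding of "no spatial dependence" corresponds to rho and lambda being statistically indistinguishable from zero, in which case an ordinary (non-spatial) regression suffices.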

  18. Development of flexible process-centric web applications: An integrated model driven approach

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    In recent years, Model Driven Engineering (MDE) approaches have been proposed and used to develop and evolve web applications (WAs). However, the definition of appropriate MDE approaches for the development of flexible process-centric WAs is still limited. In particular, (flexible) workflow models have never been ...

  19. A Modelling Approach for Improved Implementation of Information Technology in Manufacturing Systems

    DEFF Research Database (Denmark)

    Langer, Gilad; Larsen, Michael Holm; Kirkby, Lars Phillip

    1997-01-01

    The paper presents a modelling approach which is based on the multiple-view perspective of Soft Systems Methodology and an encapsulation of these perspectives into an object-oriented model. The approach provides a structured procedure for putting theoretical abstractions of a new production conc...

  20. Mechatronics by bond graphs an object-oriented approach to modelling and simulation

    CERN Document Server

    Damić, Vjekoslav

    2015-01-01

    This book presents a computer-aided approach to the design of mechatronic systems. Its subject is integrated modeling and simulation in a visual computer environment. Since the first edition, the simulation software has changed enormously and become more user-friendly and easier to use; a second edition therefore became necessary to take these improvements into account. The modeling is based on combined top-down and bottom-up system approaches. The mathematical models are generated in the form of differential-algebraic equations and solved using numerical and symbolic algebra methods. The integrated approach developed is applied to mechanical, electrical and control systems, multibody dynamics, and continuous systems.
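
    To give a feel for what "models generated as differential-algebraic equations" means in the bond-graph setting, the sketch below integrates the state equations one would read off a simple bond graph of a DC motor: an electrical mesh coupled to a rotational inertia through a gyrator. The element values are placeholders and the example is not taken from the book.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Bond-graph elements of a DC motor: Se (source voltage V), R (coil resistance),
    # I (coil inductance), GY (gyrator with modulus k), I (rotor inertia J),
    # R (viscous friction b). The two I-elements give two state variables.
    R, Lind, k, J, b, V = 1.0, 0.5, 0.05, 0.01, 0.001, 12.0  # placeholder values

    def motor(t, x):
        i, omega = x                          # states: coil current, shaft speed
        di = (V - R * i - k * omega) / Lind   # electrical mesh (coil I-element)
        domega = (k * i - b * omega) / J      # mechanical side (rotor I-element)
        return [di, domega]

    sol = solve_ivp(motor, (0.0, 15.0), [0.0, 0.0])
    i_end, omega_end = sol.y[:, -1]
    print(f"approx. steady state: i = {i_end:.2f} A, omega = {omega_end:.1f} rad/s")
    ```

    The gyrator is what couples the electrical and mechanical energy domains (torque = k*i, back-EMF = k*omega), which is the characteristic advantage of the bond-graph formalism for mixed-domain mechatronic models.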