WorldWideScience

Sample records for model simplification process

  1. SAHM - Simplification of one-dimensional hydraulic networks by automated processes evaluated on 1D/2D deterministic flood models

    DEFF Research Database (Denmark)

    Löwe, Roland; Davidsen, Steffen; Thrysøe, Cecilie

    We present an algorithm for automated simplification of 1D pipe network models. The impact of the simplifications on the flooding simulated by coupled 1D-2D models is evaluated in an Australian case study. Significant reductions of the simulation time of the coupled model are achieved by reducing … the 1D network model. The simplifications lead to an underestimation of flooded area because interaction points between network and surface are removed and because water is transported downstream faster. These effects can be mitigated by maintaining nodes in flood-prone areas in the simplification … and by adjusting pipe roughness to increase transport times …
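
    A minimal sketch of one ingredient such a simplification needs (illustrative only; SAHM's actual algorithm is not reproduced here): merging two series pipes into a single equivalent pipe and back-calculating a Manning roughness that preserves the full-flow travel time, in the spirit of "adjusting pipe roughness to increase transport times". All parameter values are made up.

    ```python
    import math

    def manning_velocity(n, diameter, slope):
        """Full-flow velocity from Manning's equation (SI units).
        For a circular pipe flowing full, hydraulic radius R = D/4."""
        R = diameter / 4.0
        return (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(slope)

    def merge_series_pipes(pipes):
        """Merge series pipes (length, diameter, slope, n) into one pipe.

        The merged diameter is the downstream diameter and the merged slope
        the length-weighted drop; roughness n is then back-calculated so the
        merged pipe's full-flow travel time equals the sum of the originals.
        """
        t_total = sum(L / manning_velocity(n, D, S) for (L, D, S, n) in pipes)
        L_total = sum(L for (L, D, S, n) in pipes)
        drop = sum(L * S for (L, D, S, n) in pipes)
        D_eq = pipes[-1][1]                  # downstream diameter
        S_eq = drop / L_total
        V_eq = L_total / t_total             # velocity preserving travel time
        n_eq = (D_eq / 4.0) ** (2.0 / 3.0) * math.sqrt(S_eq) / V_eq
        return L_total, D_eq, S_eq, n_eq

    # two 100 m pipes (0.6 m and 0.8 m diameter, 1% slope, n = 0.013)
    print(merge_series_pipes([(100, 0.6, 0.01, 0.013), (100, 0.8, 0.01, 0.013)]))
    ```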

  2. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification

    Directory of Open Access Journals (Sweden)

    Richard J Allen

    2017-03-01

    Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing ‘transfer function’. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.

  3. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification.

    Science.gov (United States)

    Allen, Richard J; Musante, Cynthia J

    2017-01-01

    Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing 'transfer function'. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.
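
    The lumping idea lends itself to a small numerical experiment. The sketch below (hypothetical rate constants, not the paper's model) simulates a three-enzyme Michaelis-Menten chain with SciPy and then fits a single Michaelis-Menten "transfer function" to the chain's steady-state flux; the fitted maximal rate tracks the slowest downstream enzyme, the kind of parameter bound the abstract describes.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    # Hypothetical 3-enzyme chain S -> X1 -> X2 -> P, each step Michaelis-Menten.
    Vmax = np.array([10.0, 8.0, 12.0])
    Km = np.array([2.0, 1.0, 4.0])

    def chain_flux(s_in):
        """Steady-state flux through the chain for a clamped input substrate."""
        def rhs(t, x):
            v0 = Vmax[0] * s_in / (Km[0] + s_in)
            v1 = Vmax[1] * x[0] / (Km[1] + x[0])
            v2 = Vmax[2] * x[1] / (Km[2] + x[1])
            return [v0 - v1, v1 - v2]
        sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0], rtol=1e-8)
        x1 = sol.y[1, -1]
        return Vmax[2] * x1 / (Km[2] + x1)

    s_grid = np.linspace(0.1, 50.0, 25)
    flux = np.array([chain_flux(s) for s in s_grid])

    # Lump the whole chain into one governing 'transfer function' v = Vm*s/(K+s).
    mm = lambda s, Vm, K: Vm * s / (K + s)
    (Vm_fit, K_fit), _ = curve_fit(mm, s_grid, flux, p0=[5.0, 1.0])
    print(f"fitted Vm = {Vm_fit:.2f} (slowest enzyme Vmax = {Vmax.min():.2f})")
    ```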

  4. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    … environment. As one of the most important tasks in augmented scene modeling, terrain simplification research has gained increasing attention. In this paper, we mainly focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on the analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with the traditional methods.

  5. Hybrid stochastic simplifications for multiscale gene networks

    Directory of Open Access Journals (Sweden)

    Debussche Arnaud

    2009-09-01

    Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
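
    As an illustration of what a hybrid simplification looks like in code, the sketch below (made-up rates, not one of the paper's examples) keeps a gene's ON/OFF state as a discrete jump process but replaces the abundant protein's jumps with a chemical Langevin equation, the one-species outcome of a partial Kramers-Moyal expansion, integrated by Euler-Maruyama between switching events.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    k_on, k_off = 0.05, 0.02      # gene switching rates (discrete channel)
    k_prod, k_deg = 20.0, 0.1     # protein production/degradation (continuous)
    dt, T = 0.01, 500.0

    g, p, t = 0, 0.0, 0.0         # gene state, protein level, time
    ps = []
    while t < T:
        # discrete jump channel: exponential waiting time for the gene switch
        rate = k_on if g == 0 else k_off
        t_next = min(t + rng.exponential(1.0 / rate), T)
        # continuous channel up to the jump: Euler-Maruyama on the Langevin SDE
        while t < t_next:
            drift = k_prod * g - k_deg * p
            noise = np.sqrt(max(k_prod * g + k_deg * p, 0.0))
            p = max(p + drift * dt + noise * np.sqrt(dt) * rng.normal(), 0.0)
            t += dt
            ps.append(p)
        if t < T:
            g = 1 - g             # execute the jump
    print("mean protein level:", np.mean(ps[len(ps) // 2:]))
    ```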

  6. Surface Simplification of 3D Animation Models Using Robust Homogeneous Coordinate Transformation

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2014-01-01

    The goal of 3D surface simplification is to reduce the storage cost of 3D models. A 3D animation model typically consists of several 3D models. Therefore, to ensure that animation models are realistic, numerous triangles are often required. However, animation models that have a high storage cost also have a substantial computational cost. Hence, surface simplification methods are adopted to reduce the number of triangles and the computational cost of 3D models. Quadric error metrics (QEM) has recently been identified as one of the most effective methods for simplifying static models. To simplify animation models by using QEM, Mohr and Gleicher summed the QEM of all frames. However, homogeneous coordinate problems cannot be handled completely by using QEM. To resolve this problem, this paper proposes a robust homogeneous coordinate transformation that improves the animation simplification method proposed by Mohr and Gleicher. In this study, the root mean square errors of the proposed method were compared with those of the method proposed by Mohr and Gleicher, and the experimental results indicated that the proposed approach can preserve more contour features than Mohr's method at the same simplification ratio.
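
    For readers unfamiliar with QEM, here is its static single-frame core (a minimal sketch; Mohr and Gleicher's summation over frames and the paper's homogeneous coordinate transformation are not reproduced): each vertex accumulates the quadrics of its incident triangle planes, and an edge's collapse cost is v^T (Qi + Qj) v at the target position, taken here to be the midpoint, a common fallback when the optimal-position solve is skipped.

    ```python
    import numpy as np

    def plane_quadric(p0, p1, p2):
        """Fundamental quadric K = q q^T of a triangle's supporting plane,
        where q = (a, b, c, d) with ax + by + cz + d = 0 and (a, b, c) unit."""
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)
        q = np.append(n, -n.dot(p0))
        return np.outer(q, q)

    def vertex_quadrics(V, F):
        Q = np.zeros((len(V), 4, 4))
        for f in F:
            K = plane_quadric(V[f[0]], V[f[1]], V[f[2]])
            for i in f:
                Q[i] += K
        return Q

    def collapse_cost(Q, V, i, j):
        """QEM cost of contracting edge (i, j) to its midpoint."""
        v = np.append((V[i] + V[j]) / 2.0, 1.0)
        return v @ (Q[i] + Q[j]) @ v

    # tiny two-triangle example
    V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]], float)
    F = [(0, 1, 2), (1, 3, 2)]
    Q = vertex_quadrics(V, F)
    print("cost of collapsing edge (1, 2):", collapse_cost(Q, V, 1, 2))
    ```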

  7. Simplification of one-dimensional hydraulic networks by automated processes evaluated on 1D/2D deterministic flood models

    DEFF Research Database (Denmark)

    Davidsen, Steffen; Löwe, Roland; Thrysøe, Cecilie

    2017-01-01

    Evaluation of pluvial flood risk is often based on computations using 1D/2D urban flood models. However, guidelines on choice of model complexity are missing, especially for one-dimensional (1D) network models. This study presents a new automatic approach for simplification of 1D hydraulic networks…

  8. Electric Power Distribution System Model Simplification Using Segment Substitution

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2018-05-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
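
    The idea of fitting a simplified stand-in to black-box terminal data can be shown at toy scale. The sketch below (synthetic numbers, not the paper's procedure) logs receiving-end voltage versus load current for a "hidden" segment and fits a single equivalent voltage-drop coefficient by least squares; the fitted stand-in then replaces the detailed segment in repeated load-flow solves.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # "Black-box" terminal data for one feeder segment, as a QSTS engine
    # might log it: receiving-end voltage versus load current.
    I = np.linspace(5.0, 200.0, 40)          # A, load sweep
    V_send = 7200.0                          # V, sending end (per phase)
    k_true = 1.35                            # V/A, hidden aggregate drop
    V_recv = V_send - k_true * I + rng.normal(0.0, 2.0, I.shape)

    # fit the simplified segment model  V_recv = V_send - k * I
    k_fit, *_ = np.linalg.lstsq(I[:, None], V_send - V_recv, rcond=None)
    k = float(k_fit[0])
    print(f"fitted drop coefficient: {k:.3f} V/A (true {k_true})")

    # the substituted segment predicts terminal voltage for any load, in pu
    print(f"predicted V at 150 A: {(V_send - k * 150.0) / V_send:.4f} pu")
    ```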

  9. Electric Power Distribution System Model Simplification Using Segment Substitution

    International Nuclear Information System (INIS)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2017-01-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  10. An Agent Based Collaborative Simplification of 3D Mesh Model

    Science.gov (United States)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

    Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes capturing images of the visualized mesh model, annotating captured images, and instant messaging. Remote and collaborative simplification can be conducted efficiently over the Internet.

  11. Towards simplification of hydrologic modeling: Identification of dominant processes

    Science.gov (United States)

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
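
    As a concrete illustration of the screening step, the sketch below runs the Fourier amplitude sensitivity test on a toy three-parameter "runoff" model using the SALib package (an assumed stand-in; the study's own PRMS/FAST tooling is not specified here). Parameters with near-zero sensitivity flag processes that could be disregarded.

    ```python
    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    # toy stand-in for a hydrologic model with three process parameters
    problem = {
        "num_vars": 3,
        "names": ["melt_rate", "infil_cap", "gw_coef"],
        "bounds": [[0.5, 5.0], [1.0, 50.0], [0.01, 0.2]],
    }
    X = fast_sampler.sample(problem, 1000)   # 1000 runs per parameter

    def toy_model(x):
        melt, infil, gw = x
        return 2.0 * melt + 0.1 * np.sqrt(infil) + 0.0 * gw  # gw is inert

    Y = np.array([toy_model(x) for x in X])
    Si = fast.analyze(problem, Y)
    for name, s1 in zip(problem["names"], Si["S1"]):
        print(f"{name:10s} first-order sensitivity = {s1:.3f}")
    # near-zero sensitivities (gw_coef here) mark processes the modeler can drop
    ```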

  12. Towards simplification of hydrologic modeling: identification of dominant processes

    Directory of Open Access Journals (Sweden)

    S. L. Markstrom

    2016-11-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.

  13. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  14. A New Approach to Line Simplification Based on Image Processing: A Case Study of Water Area Boundaries

    Directory of Open Access Journals (Sweden)

    Yilang Shen

    2018-01-01

    Line simplification is an important component of map generalization. In recent years, algorithms for line simplification have been widely researched, and most of them are based on vector data. However, with the increasing development of computer vision, analysing and processing information from unstructured image data is both meaningful and challenging. Therefore, in this paper, we present a new line simplification approach based on image processing (BIP), which is specifically designed for raster data. First, the key corner points on a multi-scale image feature are detected and treated as candidate points. Then, to capture the essence of the shape within a given boundary using the fewest possible segments, the minimum-perimeter polygon (MPP) is calculated and the points of the MPP are defined as the approximate feature points. Finally, the points after simplification are selected from the candidate points by comparing the distances between the candidate points and the approximate feature points. An empirical example was used to test the applicability of the proposed method. The results showed that (1) when the key corner points are detected based on a multi-scale image feature, the local features of the line can be extracted and retained and the positional accuracy of the proposed method can be maintained well; and (2) by defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas as it is aligned with people's visual habits.
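
    A rough Python rendition of the three-stage pipeline, with two admitted substitutions: a turning-angle test stands in for the paper's multi-scale corner detector, and Douglas-Peucker stands in for the minimum-perimeter polygon. Only the final step, snapping each approximate feature point to its nearest candidate point, follows the text directly.

    ```python
    import numpy as np

    def turning_angle(P):
        """Exterior angle at each interior vertex of polyline P (radians)."""
        u, v = P[1:-1] - P[:-2], P[2:] - P[1:-1]
        c = np.einsum("ij,ij->i", u, v) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        return np.arccos(np.clip(c, -1.0, 1.0))

    def douglas_peucker(P, eps):
        """Stand-in for the paper's minimum-perimeter-polygon step."""
        if len(P) < 3:
            return P
        (ax, ay), (bx, by) = P[0], P[-1]
        d = np.abs((bx - ax) * (P[1:-1, 1] - ay)
                   - (by - ay) * (P[1:-1, 0] - ax)) / np.hypot(bx - ax, by - ay)
        i = int(np.argmax(d))
        if d[i] <= eps:
            return np.array([P[0], P[-1]])
        left = douglas_peucker(P[: i + 2], eps)
        return np.vstack([left[:-1], douglas_peucker(P[i + 1:], eps)])

    def simplify(P, angle_thresh=0.3, eps=0.5):
        ang = turning_angle(P)                               # 1) candidate points
        cand = np.vstack([P[:1], P[1:-1][ang > angle_thresh], P[-1:]])
        approx = douglas_peucker(P, eps)                     # 2) approx. features
        keep = []                                            # 3) snap to nearest
        for q in approx:
            p = cand[int(np.argmin(np.linalg.norm(cand - q, axis=1)))]
            if not keep or not np.allclose(p, keep[-1]):
                keep.append(p)
        return np.array(keep)

    t = np.linspace(0.0, np.pi, 200)
    P = np.column_stack([10 * np.cos(t), 6 * np.sin(t) + 0.3 * np.sin(9 * t)])
    print(len(simplify(P)), "points kept from", len(P))
    ```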

  15. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed.

  16. Infrastructure Area Simplification Plan

    CERN Document Server

    Field, L.

    2011-01-01

    The infrastructure area simplification plan was presented at the 3rd EMI All Hands Meeting in Padova. This plan only affects the information and accounting systems as the other areas are new in EMI and hence do not require simplification.

  17. Homotopic Polygonal Line Simplification

    DEFF Research Database (Denmark)

    Deleuran, Lasse Kosetski

    This thesis presents three contributions to the area of polygonal line simplification, or simply line simplification. A polygonal path, or simply a path is a list of points with line segments between the points. A path can be simplified by morphing it in order to minimize some objective function...

  18. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.

  19. SIMPLIFICATION IN CHILD LANGUAGE IN BAHASA INDONESIA: A CASE STUDY ON FILIP

    Directory of Open Access Journals (Sweden)

    Julia Eka Rini

    2000-01-01

    This article aims at giving examples of the characteristics of simplification in Bahasa Indonesia and at showing that child language has a pattern and that there is a process in learning. Since this is a case study, it might not be enough to say that simplification is universal for all children of any mother tongue, but it at least provides evidence that such patterns of simplification also occur in Bahasa Indonesia.

  20. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications

    International Nuclear Information System (INIS)

    Ericsson, Lars O.; Holmen, Johan

    2010-12-01

    The primary aim of this report is to present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geo-scientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.

  1. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz

    2015-08-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  2. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields.

    Science.gov (United States)

    Skraba, Primoz; Bei Wang; Guoning Chen; Rosen, Paul

    2015-08-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  3. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz; Wang, Bei; Chen, Guoning; Rosen, Paul

    2015-01-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  4. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The equivalent simplification method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are simplified to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per…

  5. Complexity and simplification in understanding recruitment in benthic populations

    KAUST Repository

    Pineda, Jesú s; Reyns, Nathalie B.; Starczak, Victoria R.

    2008-01-01

    … reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often …

  6. Influence of the degree of simplification of the two-phase hydrodynamic model on the simulated behaviour dynamics of a steam generator

    International Nuclear Information System (INIS)

    Dupont, J.F.

    1979-03-01

    The principal simplifications of a mathematical model for the simulation of behaviour dynamics of a two-phase flow with heat exchange are examined, as it appears in a steam generator. The theoretical considerations and numerical solutions permit the evaluation of the validity limits and the influence of these simplifications on the results. (G.T.H.)

  7. Use of process indices for simplification of the description of vapor deposition systems

    International Nuclear Information System (INIS)

    Kajikawa, Yuya; Noda, Suguru; Komiyama, Hiroshi

    2004-01-01

    Vapor deposition is a complex process, including gas-phase, surface, and solid-phase phenomena. Because of the complexity of chemical and physical processes occurring in vapor deposition processes, it is difficult to form a comprehensive, fundamental understanding of vapor deposition and to control such systems for obtaining desirable structures and performance. To overcome this difficulty, we present a method for simplifying the complex description of such systems. One simplification method is to separate complex systems into multiple elements, and determine which of these are important elements. We call this method abridgement. The abridgement method retains only the dominant processes in a description of the system, and discards the others. Abridgement can be achieved by using process indices to evaluate the relative importance of the elementary processes. We describe the formulation and use of these process indices through examples of the growth of continuous films, initial deposition processes, and the formation of the preferred orientation of polycrystalline films. In this paper, we propose a method for representing complex vapor deposition processes as a set of simpler processes

  8. Work Simplification

    Science.gov (United States)

    Ross, Lynne

    1970-01-01

    Excerpts from a talk by Mrs. Ross at the 23rd annual convention of the American School Food Service Association in Detroit, August 5, 1969. A book on work simplification by Mrs. Ross will be available in June from the Iowa State University Press, Ames, Iowa. (Editor)

  9. Simplification: A Viewpoint in Outline. Appendix.

    Science.gov (United States)

    Tickoo, Makhan L.

    This essay examines language simplification for second language learners as a linguistic and a pedagogic phenomenon, posing questions for further study by considering past research. It discusses linguistic simplification (LS) in relation to the development of artificial languages, such as Esperanto, "pidgin" languages, Basic English,…

  10. Phonological simplifications, apraxia of speech and the interaction between phonological and phonetic processing.

    Science.gov (United States)

    Galluzzi, Claudia; Bureca, Ivana; Guariglia, Cecilia; Romani, Cristina

    2015-05-01

    Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties - such as a predominance of errors on consonants rather than vowels - but not with other measures - such as rate of words reproduced correctly or rates of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they are different in different patients. Instead, different patterns: (1) provide evidence for separate impairments and the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis since they only apply in patients with associated articulatory difficulties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    … this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
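
    The resource-augmentation setting is easy to mimic, though not with the paper's competitive algorithms: the greedy sketch below keeps at most 2k internal points and, whenever the budget is exceeded, drops the internal point whose removal (measured against the segment joining its neighbours) adds the least error.

    ```python
    import numpy as np

    def point_seg_dist(p, a, b):
        """Euclidean distance from point p to segment ab."""
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * ab)))

    def stream_simplify(points, k):
        """Consume a stream while keeping at most 2k internal points."""
        kept = []
        for p in points:
            kept.append(np.asarray(p, float))
            while len(kept) - 2 > 2 * k:
                errs = [point_seg_dist(kept[i], kept[i - 1], kept[i + 1])
                        for i in range(1, len(kept) - 1)]
                kept.pop(1 + int(np.argmin(errs)))   # cheapest point to drop
        return np.array(kept)

    t = np.linspace(0.0, 4 * np.pi, 10_000)
    stream = np.column_stack([t, np.sin(t)])         # points arrive one by one
    print(stream_simplify(stream, k=8).shape)        # at most 2k + 2 points kept
    ```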

  12. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross-sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  13. Simplifications of rational matrices by using UML

    OpenAIRE

    Tasić, Milan B.; Stanimirović, Ivan P.

    2013-01-01

    The simplification process on rational matrices consists of simplifying each entry represented by a rational function. We follow the classic approach of dividing the numerator and denominator polynomials by their common GCD polynomial, and provide the activity diagram in UML for this process. A rational matrix representation as the quotient of a polynomial matrix and a polynomial is also discussed here and illustrated via activity diagrams. Also, a class diagram giving the links between the c...
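
    In a computer algebra system the entrywise step is a one-liner; the sketch below uses SymPy's cancel, which performs exactly the divide-by-common-GCD simplification the paper models (SymPy is my choice of illustration; the paper specifies UML activity diagrams rather than a particular library).

    ```python
    import sympy as sp

    x = sp.symbols("x")

    # a rational matrix: every entry is a quotient of polynomials
    M = sp.Matrix([
        [(x**2 - 1) / (x + 1), (x**2 + 2*x + 1) / (x**2 - 1)],
        [(x**3 - x) / (x**2 + x), 1 / (x + 2)],
    ])

    # entrywise: divide numerator and denominator by their GCD polynomial
    print(M.applyfunc(sp.cancel))
    # Matrix([[x - 1, (x + 1)/(x - 1)], [x - 1, 1/(x + 2)]])
    ```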

  14. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
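
    A toy rendition of the "shot-gun" idea, far cruder than BioSimplify's grammar-based model: treat parentheticals and comma-delimited appositives as optional constituents and emit every combination of deletions, letting the downstream extractor pick the variants it can parse.

    ```python
    import re
    from itertools import combinations

    def shotgun_variants(sentence):
        """Emit simpler variants by deleting optional constituents."""
        # optional spans: parentheticals and comma-delimited appositives
        spans = [m.span() for m in re.finditer(r"\([^)]*\)|,[^,]+,", sentence)]
        variants = {sentence}
        for r in range(1, len(spans) + 1):
            for combo in combinations(spans, r):
                s, cut = sentence, 0
                for start, end in combo:       # delete spans left to right
                    s = s[: start - cut] + s[end - cut:]
                    cut += end - start
                variants.add(re.sub(r"\s+", " ", s).strip())
        return sorted(variants, key=len)

    sent = ("MDM2, a p53-binding protein, inhibits p53 "
            "(a tumour suppressor) in many cancers.")
    for v in shotgun_variants(sent):
        print(v)
    ```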

  15. Organisational simplification and secondary complexity in health services for adults with learning disabilities.

    Science.gov (United States)

    Heyman, Bob; Swain, John; Gillman, Maureen

    2004-01-01

    This paper explores the role of complexity and simplification in the delivery of health care for adults with learning disabilities, drawing upon qualitative data obtained in a study carried out in NE England. It is argued that the requirement to manage complex health needs with limited resources causes service providers to simplify, standardise and routinise care. Simplified service models may work well enough for the majority of clients, but can impede recognition of the needs of those whose characteristics are not congruent with an adopted model. The data were analysed in relation to the core category, identified through thematic analysis, of secondary complexity arising from organisational simplification. Organisational simplification generates secondary complexity when operational routines designed to make health complexity manageable cannot accommodate the needs of non-standard service users. Associated themes, namely the social context of services, power and control, communication skills, expertise and service inclusiveness and evaluation are explored in relation to the core category. The concept of secondary complexity resulting from organisational simplification may partly explain seemingly irrational health service provider behaviour.

  16. Modeling as a Decision-Making Process

    Science.gov (United States)

    Bleiler-Baxter, Sarah K.; Stephens, D. Christopher; Baxter, Wesley A.; Barlow, Angela T.

    2017-01-01

    The goal in this article is to support teachers in better understanding what it means to model with mathematics by focusing on three key decision-making processes: Simplification, Relationship Mapping, and Situation Analysis. The authors use the Theme Park task to help teachers develop a vision of how students engage in these three decision-making…

  17. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz; Wang, Bei; Chen, Guoning; Rosen, Paul

    2014-01-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification …

  18. Recomputing Causality Assignments on Lumped Process Models When Adding New Simplification Assumptions

    Directory of Open Access Journals (Sweden)

    Antonio Belmonte

    2018-04-01

    This paper presents a new algorithm for the resolution of over-constrained lumped process systems, where the partial differential equations of a continuous time and space model of the system are reduced to ordinary differential equations with a finite number of parameters and where the model equations outnumber the unknown model variables. Our proposal is aimed at the study and improvement of the algorithm proposed by Hangos-Szerkenyi-Tuza. The new algorithm improves the computational cost and solves some of the internal problems of the aforementioned algorithm in its original formulation. The proposed algorithm is based on parameter relaxation and can be modified easily. It retains the necessary information about the lumped process system to reduce the time cost after changes are introduced during system formulation. It also allows adjustment of system formulations that change their differential index between simulations.

  19. Simplifications and Idealizations in High School Physics in Mechanics: A Study of Slovenian Curriculum and Textbooks

    Science.gov (United States)

    Forjan, Matej; Sliško, Josip

    2014-01-01

    This article presents the results of an analysis of three Slovenian textbooks for high school physics, from the point of view of simplifications and idealizations in the field of mechanics. In modeling of physical systems, making simplifications and idealizations is important, since one ignores minor effects and focuses on the most important…

  20. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one …

  1. Extreme Simplification and Rendering of Point Sets using Algebraic Multigrid

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2005-01-01

    We present a novel approach for extreme simplification of point set models in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However efficient, simple primitives are less effective in approximating large surface areas. A large …

  2. Impact of pipes networks simplification on water hammer phenomenon

    Indian Academy of Sciences (India)

    Simplification of water supply networks is an indispensable design step to make the original network easier to analyse. The impact of network simplification on the water hammer phenomenon is investigated. This study uses a two-loop network with different diameters, thicknesses, and roughness coefficients. The network is …

  3. An Integrated Simplification Approach for 3D Buildings with Sloped and Flat Roofs

    Directory of Open Access Journals (Sweden)

    Jinghan Xie

    2016-07-01

    Simplification of three-dimensional (3D) buildings is critical to improve the efficiency of visualizing urban environments while ensuring realistic urban scenes. Moreover, it underpins the construction of multi-scale 3D city models (3DCMs), which can be applied to study various urban issues. In this paper, we design a generic yet effective approach for simplifying 3D buildings. Instead of relying on both semantic information and geometric information, our approach is based solely on geometric information, as many 3D buildings still do not include semantic information. In addition, it provides an integrated means to treat 3D buildings with either sloped or flat roofs. The two case studies, one exploring the simplification of individual 3D buildings at varying levels of complexity and the other investigating the multi-scale simplification of a cityscape, show the effectiveness of our approach.

  4. Viewpoint-Driven Simplification of Plant and Tree Foliage

    Directory of Open Access Journals (Sweden)

    Cristina Gasch

    2018-03-01

    Plants and trees are an essential part of outdoor scenes. They are represented by such a vast number of polygons that real-time visualization is still a problem in spite of advances in hardware. Some methods based on point- or image-based rendering have appeared to address this drawback. However, geometry representation is required in some interactive applications. This work presents a simplification method that deals with the geometry of the foliage, reducing the number of primitives that represent these objects and making their interactive visualization possible. It is based on an image-based simplification that establishes an order of leaf pruning and reduces the complexity of the canopies of trees and plants. The proposed simplification method is viewpoint-driven and uses mutual information in order to choose the leaf to prune. Moreover, this simplification method avoids the pruned appearance of the tree that is usually produced when a foliage representation is formed by a reduced number of leaves. The error introduced every time a leaf is pruned is compensated for by altering the size of the nearest leaf to preserve the leafy appearance of the foliage. Results demonstrate the good quality and time performance of the presented work.
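
    The prune-and-compensate loop is simple to sketch. Below, the smallest-area leaf stands in for the least-informative one (the paper ranks leaves by viewpoint mutual information, which is not reproduced), and the pruned leaf's area is added to its nearest surviving neighbour so total foliage area is preserved.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # toy foliage: leaf centres and areas
    centres = rng.uniform(0.0, 10.0, size=(500, 3))
    areas = rng.uniform(0.5, 1.5, size=500)

    def prune(centres, areas, target):
        centres, areas = centres.copy(), areas.copy()
        while len(areas) > target:
            i = int(np.argmin(areas))             # least "informative" leaf
            d = np.linalg.norm(centres - centres[i], axis=1)
            d[i] = np.inf
            j = int(np.argmin(d))                 # nearest surviving leaf
            areas[j] += areas[i]                  # compensate: keep total area
            centres = np.delete(centres, i, axis=0)
            areas = np.delete(areas, i)
        return centres, areas

    c2, a2 = prune(centres, areas, target=100)
    print(len(a2), "leaves; area preserved:", np.isclose(a2.sum(), areas.sum()))
    ```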

  5. 77 FR 66361 - Reserve Requirements of Depository Institutions: Reserves Simplification

    Science.gov (United States)

    2012-11-05

    ... Requirements of Depository Institutions: Reserves Simplification AGENCY: Board of Governors of the Federal... (Reserve Requirements of Depository Institutions) published in the Federal Register on April 12, 2012. The... simplifications related to the administration of reserve requirements: 1. Create a common two-week maintenance...

  6. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of the urban scene increase the difficulty of building city models from oblique images, but many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on self-adaptive patches designed for the urban scene. The basic idea of match propagation based on self-adaptive patches is to build patches centred on seed points that are already matched. The extent and shape of the patches adapt automatically to the objects of the urban scene: when the surface is flat, the patch becomes bigger; when the surface is very rough, the patch becomes smaller. The other contribution is that the mesh generated by graph cuts is a 2-manifold surface satisfying the half-edge data structure. This is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh by an edge-collapse algorithm, which can preserve and highlight the features of buildings.

  7. Complexity and simplification in understanding recruitment in benthic populations

    KAUST Repository

    Pineda, Jesús

    2008-11-13

    Research of complex systems and problems, entities with many dependencies, is often reductionist. The reductionist approach splits systems or problems into different components, and then addresses these components one by one. This approach has been used in the study of recruitment and population dynamics of marine benthic (bottom-dwelling) species. Another approach examines benthic population dynamics by looking at a small set of processes. This approach is statistical or model-oriented. Simplified approaches identify "macroecological" patterns or attempt to identify and model the essential, "first-order" elements of the system. The complexity of the recruitment and population dynamics problems stems from the number of processes that can potentially influence benthic populations, including (1) larval pool dynamics, (2) larval transport, (3) settlement, and (4) post-settlement biotic and abiotic processes, and larval production. Moreover, these processes are non-linear, some interact, and they may operate on disparate scales. This contribution discusses reductionist and simplified approaches to study benthic recruitment and population dynamics of bottom-dwelling marine invertebrates. We first address complexity in two processes known to influence recruitment, larval transport, and post-settlement survival to reproduction, and discuss the difficulty in understanding recruitment by looking at relevant processes individually and in isolation. We then address the simplified approach, which reduces the number of processes and makes the problem manageable. We discuss how simplifications and "broad-brush first-order approaches" may muddle our understanding of recruitment. Lack of empirical determination of the fundamental processes often results in mistaken inferences, and processes and parameters used in some models can bias our view of processes influencing recruitment. We conclude with a discussion on how to reconcile complex and simplified approaches. Although it …

  8. Decomposition and Simplification of Multivariate Data using Pareto Sets.

    Science.gov (United States)

    Huettenberger, Lars; Heine, Christian; Garth, Christoph

    2014-12-01

    Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
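
    A schematic of the collapse loop, with networkx as an assumed graph library: Pareto extrema are nodes of the reachability graph, edge weights stand in for the paper's comparison measure, and simplification collapses the cheapest edge until a budget is met.

    ```python
    import networkx as nx

    # reachability graph of Pareto extrema; weights mimic a comparison
    # measure (how much the data changes if the edge is collapsed)
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("min_a", "max_b", 0.1), ("max_b", "min_c", 0.5),
        ("min_c", "max_d", 0.2), ("max_d", "min_a", 0.9),
    ])

    def simplify(G, budget):
        """Collapse cheapest edges until at most `budget` extrema remain."""
        G = G.copy()
        while G.number_of_nodes() > budget:
            u, v, _ = min(G.edges(data="weight", default=1.0),
                          key=lambda e: e[2])
            G = nx.contracted_nodes(G, u, v, self_loops=False)
        return G

    print(simplify(G, budget=2).nodes())
    ```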

  9. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact,...

  10. 7 CFR 3015.311 - Simplification, consolidation, or substitution of State plans.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Simplification, consolidation, or substitution of... (Continued) OFFICE OF THE CHIEF FINANCIAL OFFICER, DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE... Simplification, consolidation, or substitution of State plans. (a) As used in this section: (1) Simplify means...

  11. WORK SIMPLIFICATION FOR PRODUCTIVITY IMPROVEMENT A ...

    African Journals Online (AJOL)

    Mechanical Engineering Department, Addis Ababa University … press concerning the work simplification techniques state … encompassing as it does improved labor-management cooperation … achievement of business aims or a contribution to attaining … recommended work methods is done after a thorough study and …

  12. Process optimization of friction stir welding based on thermal models

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup

    2010-01-01

    This thesis investigates how to apply optimization methods to numerical models of a friction stir welding process. The work is intended as a proof-of-concept using different methods that are applicable to models of high complexity, possibly with high computational cost, and without the possibility … information of the high-fidelity model. The optimization schemes are applied to stationary thermal models of differing complexity of the friction stir welding process. The optimization problems considered are based on optimizing the temperature field in the workpiece by finding optimal translational speed. Also an optimization problem based on a microstructure model is solved, allowing the hardness distribution in the plate to be optimized. The use of purely thermal models represents a simplification of the real process; nonetheless, it shows the applicability of the optimization methods considered …

  13. The limits of simplification in translated isiZulu health texts | Ndlovu ...

    African Journals Online (AJOL)

    Simplification, defined as the practice of simplifying the language used in translation, is regarded as one of the universal features of translation. This article investigates the limitations of simplification encountered in efforts to make translated isiZulu health texts more accessible to the target readership. The focus is on public ...

  14. Computing Strongly Homotopic Line Simplification in the Plane

    DEFF Research Database (Denmark)

    Daneshpajou, Shervin; Abam, Mohammad; Deleuran, Lasse Kosetski

    We study a variant of the line-simplification problem where we are given a polygonal path P = p1, p2, ..., pn and a set S of m point obstacles in the plane, and the goal is to find the optimal homotopic simplification, that is, a minimum subsequence Q = q1, q2, ..., qk (q1 = p1 and qk = pn) of P defining a polygonal path which approximates P within the given error ε and is homotopic to P. We assume all shortcuts pipj whose errors under a distance function F are at most ε can be computed in TF(n) time, where TF(n) is polynomial for all widely-used distance functions. We define the new …

  15. Geological heterogeneity: Goal-oriented simplification of structure and characterization needs

    Science.gov (United States)

    Savoy, Heather; Kalbacher, Thomas; Dietrich, Peter; Rubin, Yoram

    2017-11-01

    Geological heterogeneity, i.e. the spatial variability of discrete hydrogeological units, is investigated in an aquifer analog of glacio-fluvial sediments to determine how such a geological structure can be simplified for characterization needs. The aquifer analog consists of ten hydrofacies whereas the scarcity of measurements in typical field studies precludes such detailed spatial models of hydraulic properties. Of particular interest is the role of connectivity of the hydrofacies structure, along with its effect on the connectivity of mass transport, in site characterization for predicting early arrival times. Transport through three realizations of the aquifer analog is modeled with numerical particle tracking to ascertain the fast flow channel through which early arriving particles travel. Three simplification schemes of two-facies models are considered to represent the aquifer analogs, and the velocity within the fast flow channel is used to estimate the apparent hydraulic conductivity of the new facies. The facies models in which the discontinuous patches of high hydraulic conductivity are separated from the rest of the domain yield the closest match in early arrival times compared to the aquifer analog, but assuming a continuous high hydraulic conductivity channel connecting these patches yields underestimated early arrival times within the range of variability between the realizations, which implies that the three simplification schemes could be advised but pose different implications for field measurement campaigns. Overall, the results suggest that the result of transport connectivity, i.e. early arrival times, within realistic geological heterogeneity can be conserved even when the underlying structural connectivity is modified.
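
    A minimal sketch of the two-facies idea (illustrative threshold and averaging; the paper's particle-tracking calibration of apparent conductivity within the fast flow channel is not reproduced): binarise a heterogeneous conductivity field into fast and slow facies and assign each facies one apparent value.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    logK = rng.normal(-4.0, 1.0, size=(100, 100))   # ln(K [m/s]), synthetic
    K = np.exp(logK)

    threshold = np.percentile(K, 80)                # top 20% = fast facies
    fast = K >= threshold

    # apparent K per facies: geometric mean of its cells (an assumption)
    K_fast = np.exp(logK[fast].mean())
    K_slow = np.exp(logK[~fast].mean())
    K_two_facies = np.where(fast, K_fast, K_slow)
    print(f"fast facies K = {K_fast:.2e} m/s, slow facies K = {K_slow:.2e} m/s")
    ```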

  16. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    , because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed, perhaps...

  17. Modelling and dynamics analysis of heat exchanger as a distributed parameter process

    International Nuclear Information System (INIS)

    Savic, B.; Debeljkovic, D.Lj.

    2004-01-01

    A non-linear, and subsequently linearized, mathematical model of a fuel oil cooling chamber has been developed. This chamber is part of a recuperative heat exchanger of the tube-in-tube, counter-flow type, set in a heavy oil fraction discharge tubing. The model is derived from a set of assumptions and simplifications, from which energy balance equations under non-stationary operating conditions follow. The model takes the form of a set of partial differential equations with constant coefficients. Using appropriate numerical simulation of the transfer function, the dynamics of this process are shown in the form of transient responses which correspond quite well to the real process behavior.

  18. Modelling and dynamics analysis of heat exchanger as a distributed parameter process

    Energy Technology Data Exchange (ETDEWEB)

    Savic, B.; Debeljkovic, D.Lj. [University of Belgrade, Department of Control Engineering, Belgrade (Yugoslavia)

    2004-07-01

    A non-linear, and subsequently linearized, mathematical model of a fuel oil cooling chamber has been developed. This chamber is part of a recuperative heat exchanger of the tube-in-tube, counter-flow type, set in a heavy oil fraction discharge tubing. The model is derived from a set of assumptions and simplifications, from which energy balance equations under non-stationary operating conditions follow. The model takes the form of a set of partial differential equations with constant coefficients. Using appropriate numerical simulation of the transfer function, the dynamics of this process are shown in the form of transient responses which correspond quite well to the real process behavior.
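
    Although the records above do not reproduce the equations, a generic counter-flow tube-in-tube energy balance with constant coefficients, written for the inner-fluid and annulus temperatures T_1 and T_2, conveys the stated model structure (an illustrative textbook form, not the authors' exact system):

```latex
% Generic counter-flow tube-in-tube energy balances (illustrative form only):
% rho_i c_i A_i: density, heat capacity, flow area; v_i: velocity;
% U P: overall heat-transfer coefficient times contact perimeter.
\begin{aligned}
\rho_1 c_1 A_1 \left( \frac{\partial T_1}{\partial t} + v_1 \frac{\partial T_1}{\partial x} \right) &= U P \,(T_2 - T_1),\\
\rho_2 c_2 A_2 \left( \frac{\partial T_2}{\partial t} - v_2 \frac{\partial T_2}{\partial x} \right) &= U P \,(T_1 - T_2).
\end{aligned}
```

    Linearizing such balances around an operating point and Laplace-transforming them yields the transfer functions whose transient responses the authors compare with plant behavior.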

  19. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications; Storregional grundvattenmodellering - en kaenslighetsstudie av naagra utvalda konceptuella beskrivningar och foerenklingar

    Energy Technology Data Exchange (ETDEWEB)

    Ericsson, Lars O. (Lars O. Ericsson Consulting AB (Sweden)); Holmen, Johan (Golder Associates (Sweden))

    2010-12-15

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geo-scientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.

  20. The Study of Simplification and Explicitation Techniques in Khaled Hosseini's “A Thousand Splendid Suns”

    OpenAIRE

    Reza Kafipour

    2016-01-01

    Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation strategies are those which help transferring the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in Persian translation of novel of Khaled Hosseini's “A Thousand Splendid Suns”. The study also attempted to find out frequencies of simplification and explicitation techniques used by the translator...

  1. THE STUDY OF SIMPLIFICATION AND EXPLICITATION TECHNIQUES IN KHALED HOSSEINI'S “A THOUSAND SPLENDID SUNS”

    Directory of Open Access Journals (Sweden)

    Reza Kafipour

    2016-12-01

    Full Text Available Teaching and learning strategies help facilitate teaching and learning. Among them, simplification and explicitation are strategies that help transfer the meaning to the learners and readers of a translated text. The aim of this study was to investigate explicitation and simplification in the Persian translation of Khaled Hosseini's novel “A Thousand Splendid Suns”. The study also attempted to find out the frequencies of the simplification and explicitation techniques used by the translators in translating the novel. To do so, 359 sentences out of the 6000 sentences in the original text were selected by a systematic random sampling procedure. Then the percentage and total sums of each of the strategies were calculated. The results showed that both translators used simplification and explicitation techniques significantly in their translations, whereas Saadvandian, the first translator, applied significantly more simplification techniques in comparison with Ghabrai, the second translator. However, no significant difference was found between the translators in the application of explicitation techniques. The study implies that these two translation strategies were fully familiar to the translators, as both used them significantly to make the translation more understandable to the readers.

  3. Simplification of a dust emission scheme and comparison with data

    Science.gov (United States)

    Shao, Yaping

    2004-05-01

    A simplification of a dust emission scheme is proposed which takes into account saltation bombardment and aggregate disintegration. The scheme states that dust emission is proportional to the streamwise saltation flux, with a proportionality that depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*4 (u* is the friction velocity), but not necessarily so in general. Dust emission predictions using the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and of the soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
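
    To make the stated scaling concrete: if dust emission F is proportional to the streamwise saltation flux Q, and Q is given an Owen-type form (an assumed standard expression, not necessarily the paper's exact one), the u*^4 behavior for loose soils follows whenever the bombardment proportionality itself grows about linearly with friction velocity:

```latex
% Illustrative scaling only; alpha's dependence on soil plastic pressure p
% and texture follows the abstract, the Owen-type Q is an assumption here.
F = \alpha(p,\text{texture})\, Q, \qquad
Q \simeq c\, \frac{\rho_a}{g}\, u_*^3 \left( 1 - \frac{u_{*t}^2}{u_*^2} \right),
\qquad
F \propto u_*^4 \quad \text{if } \alpha \propto u_* \text{ and } u_* \gg u_{*t}.
```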

  4. Hydrothermal processing of Hanford tank wastes: Process modeling and control

    International Nuclear Information System (INIS)

    Currier, R.P.

    1994-01-01

    In the Los Alamos National Laboratory (LANL) hydrothermal process, waste streams are first pressurized and heated as they pass through a continuous flow tubular reactor vessel. The waste is maintained at reaction temperature of 300--550 C where organic destruction and sludge reformation occur. This report documents LANL activities in process modeling and control undertaken in FY94 to support hydrothermal process development. Key issues discussed include non-ideal flow patterns (e.g. axial dispersion) and their effect on reactor performance, the use and interpretation of inert tracer experiments, and the use of computational fluid mechanics to evaluate novel hydrothermal reactor designs. In addition, the effects of axial dispersion (and simplifications to rate expressions) on the estimated kinetic parameters are explored by non-linear regression to experimental data. Safety-related calculations are reported which estimate the explosion limits of effluent gases and the fate of hydrogen as it passes through the reactor. Development and numerical solution of a generalized one-dimensional mathematical model is also summarized. The difficulties encountered in using commercially available software to correlate the behavior of high temperature, high pressure aqueous electrolyte mixtures are summarized. Finally, details of the control system and experiments conducted to empirically determine the system response are reported
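
    The axial-dispersion effect discussed in the report is conventionally captured by the one-dimensional dispersed plug-flow model; a generic form for a species of concentration C destroyed by first-order kinetics (a textbook equation, offered here only to fix ideas, not the report's generalized model) is:

```latex
% 1-D axially dispersed plug-flow reactor with first-order destruction:
\frac{\partial C}{\partial t}
  = D_{ax}\, \frac{\partial^2 C}{\partial x^2}
  - u\, \frac{\partial C}{\partial x}
  - k\, C,
\qquad \mathrm{Pe} = \frac{u\,L}{D_{ax}}.
```

    Inert tracer experiments estimate the Peclet number Pe from the spread of the residence-time distribution; fitting k while neglecting D_ax biases the kinetic parameters, which is exactly the coupling the report probes by non-linear regression.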

  5. Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes

    Institute of Scientific and Technical Information of China (English)

    Muhammad Hussain

    2009-01-01

    Two simplification algorithms are proposed for the automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. To set the priority value of each vertex, a new measure of geometric fidelity is introduced that exploits the normal field of the vertex's one-ring neighborhood and reflects well the local geometric features of the vertex. After a vertex is selected, other measures of geometric distortion, based on normal-field deviation and a distance measure, decide which of the edges incident on the vertex is collapsed to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, computationally more efficient, generate LODs of better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.
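
    The last placement step named above, substituting the collapsed edge with a vertex minimizing a local quadric error, can be shown concretely. This is the standard quadric error metric (QEM) solve for a single collapse; the papers' own normal-field priority measures are not reproduced here:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Quadric K = p p^T for the plane ax+by+cz+d=0 with a^2+b^2+c^2 = 1."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def optimal_collapse_position(Q):
    """Position v minimizing v^T Q v (v homogeneous) for an edge collapse.
    Returns None when the system is (near-)singular."""
    A = Q.copy()
    A[3, :] = [0.0, 0.0, 0.0, 1.0]   # constrain homogeneous coordinate to 1
    try:
        v = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    except np.linalg.LinAlgError:
        return None
    return v[:3]

# Quadric of a vertex = sum of quadrics of its incident triangle planes;
# for a collapsing edge, sum the two endpoint quadrics.
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(0, 1, 0, 0) \
  + plane_quadric(1, 0, 0, -1)
v = optimal_collapse_position(Q)
print("optimal new vertex:", v)           # -> [1. 0. 0.]
err = np.append(v, 1.0) @ Q @ np.append(v, 1.0)
print("quadric error:", err)              # -> ~0 (planes meet in a point)
```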

  6. Pathways of DNA unlinking: A story of stepwise simplification.

    Science.gov (United States)

    Stolz, Robert; Yoshida, Masaaki; Brasher, Reuben; Flanner, Michelle; Ishihara, Kai; Sherratt, David J; Shimokawa, Koya; Vazquez, Mariel

    2017-09-29

    In Escherichia coli, DNA replication yields interlinked chromosomes. Controlling topological changes associated with replication and returning the newly replicated chromosomes to an unlinked monomeric state is essential to cell survival. In the absence of the topoisomerase topoIV, the site-specific recombination complex XerCD-dif-FtsK can remove replication links by local reconnection. We previously showed mathematically that there is a unique minimal pathway of unlinking replication links by reconnection while stepwise reducing the topological complexity. However, the possibility that reconnection preserves or increases topological complexity is biologically plausible. In this case, are there other unlinking pathways? Which is the most probable? We consider these questions in an analytical and numerical study of minimal unlinking pathways. We use a Markov Chain Monte Carlo algorithm with Multiple Markov Chain sampling to model local reconnection on 491 different substrate topologies, 166 knots and 325 links, and distinguish between pathways connecting a total of 881 different topologies. We conclude that the minimal pathway of unlinking replication links that was found under more stringent assumptions is the most probable. We also present exact results on unlinking a 6-crossing replication link. These results point to a general process of topology simplification by local reconnection, with applications going beyond DNA.

  7. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    Science.gov (United States)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
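
    A minimal finite-state analogue of the reduction described above (the paper treats infinite state spaces and tracks fast transition paths, which a sketch cannot do): eliminating a block of fast states from a rate matrix Q by folding transient visits to them into effective slow-to-slow rates, Q_red = Q_ss + Q_sf (-Q_ff)^(-1) Q_fs.

```python
import numpy as np

def eliminate_fast_states(Q, fast):
    """Reduced generator on slow states after removal of fast states.
    Q: (n,n) rate matrix, rows sum to zero; fast: indices of fast states.
    Effective rates add contributions of all paths through fast states."""
    n = Q.shape[0]
    slow = [i for i in range(n) if i not in set(fast)]
    Qss = Q[np.ix_(slow, slow)]
    Qsf = Q[np.ix_(slow, fast)]
    Qfs = Q[np.ix_(fast, slow)]
    Qff = Q[np.ix_(fast, fast)]
    return Qss + Qsf @ np.linalg.solve(-Qff, Qfs)

# Three-state example: state 1 is fast (large leaving rates).
Q = np.array([[-1.0,    1.0,    0.0],
              [100.0, -300.0, 200.0],
              [0.0,     2.0,   -2.0]])
Qr = eliminate_fast_states(Q, fast=[1])
print(Qr)                    # 2x2 reduced generator on states {0, 2}
print(Qr.sum(axis=1))        # rows still sum to ~0
```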

  8. THE EFFECT OF TAX SIMPLIFICATION ON TAXPAYERS’ COMPLIANCE BEHAVIOR: RELIGIOSITY AS MODERATING VARIABLE

    Directory of Open Access Journals (Sweden)

    Muslichah Muslichah

    2017-03-01

    Full Text Available Tax compliance is an important issue for nations around the world as governments search for revenue to meet public needs. The importance of tax simplification has long been known as a determinant of compliance behavior, and it has become an important issue in taxation research. The primary objective of this study was to investigate the effect of tax simplification and religiosity on compliance behavior. This study was conducted in Malang, East Java. Survey questionnaires were sent to 200 taxpayers, and only 122 responded. Consistent with prior research, this study suggests that the effect of religiosity on compliance behavior is positive and significant. Religiosity plays a moderating role in the relationship between tax simplification and compliance behavior. This study contributes to the compliance literature. It also provides practical significance, because the empirical results supply information about compliance behavior that can help governments develop strategies toward increasing voluntary compliance.

  9. The cost of policy simplification in conservation incentive programs

    DEFF Research Database (Denmark)

    Armsworth, Paul R.; Acs, Szvetlana; Dallimer, Martin

    2012-01-01

    Incentive payments to private landowners provide a common strategy to conserve biodiversity and enhance the supply of goods and services from ecosystems. To deliver cost-effective improvements in biodiversity, payment schemes must trade off inefficiencies that result from over-simplified policies...... with the administrative burden of implementing more complex incentive designs. We examine the effectiveness of different payment schemes using field-parameterized, ecological economic models of extensive grazing farms. We focus on profit-maximising farm management plans and use bird species as a policy-relevant indicator...... of biodiversity. Common policy simplifications result in a 49-100% loss in biodiversity benefits depending on the conservation target chosen. Failure to differentiate prices for conservation improvements in space is particularly problematic. Additional implementation costs that accompany more complicated policies......

  10. Cutting red tape: national strategies for administrative simplification

    National Research Council Canada - National Science Library

    Cerri, Fabienne; Hepburn, Glen; Barazzoni, Fiorenza

    2006-01-01

    ... when the topic was new, and had a strong focus on the tools used to simplify administrative regulations. Expectations are greater today, and ad hoc simplification initiatives have in many cases been replaced by comprehensive government programmes to reduce red tape. Some instruments, such as one-stop shops, which were new then, have become widely adop...

  11. Simplification of the helical TEN2 laser

    Science.gov (United States)

    Krahn, K.-H.

    1980-04-01

    The observation that the helical TEN2 laser can effectively be simplified by giving up the use of decoupling elements as well as by abolishing the segmentation of the electrode structure is examined. Although, as a consequence of this simplification, the operating pressure range was slightly decreased, the output power could be improved by roughly 30%, a result which is attributed to the new electrode geometry exhibiting lower inductance and lower damping losses.

  12. Sutural simplification in Physodoceratinae (Aspidoceratidae, Ammonitina

    Directory of Open Access Journals (Sweden)

    Checa, A.

    1987-08-01

    Full Text Available The structural analysis of the shell-septum interrelationship in some Jurassic ammonites allows us to conclude that the sutural simplifications that occurred throughout the phylogeny were caused by alterations in the external morphology of the shell. In the case of the Physodoceratinae, the simplification observed in the morphology of the septal suture may have a double origin. First, an increase in the size of the periumbilical tubercles may determine a shallowing of the sutural elements and a shortening of the saddle and lobe frilling. In other cases, the shallowing is determined by a decrease in the whorl expansion rate, without an apparent shortening of the secondary branching being observed.

  13. Simplifications of the mini nutritional assessment short-form are predictive of mortality among hospitalized young and middle-aged adults.

    Science.gov (United States)

    Asiimwe, Stephen B

    2016-01-01

    Measuring malnutrition in hospitalized patients is difficult in all settings. I evaluated associations of items in the mini nutritional assessment short-form (MNA-sf), a nutritional-risk screening tool previously validated in the elderly, with malnutrition among hospitalized patients in Uganda. I used results to construct two simplifications of this tool that may be applicable to young and middle-aged adults. I assessed the association of each MNA-sf item with the mid-upper arm circumference (MUAC), a specific measure of malnutrition at appropriate cut-offs. I incorporated only malnutrition-specific items into the proposed simplifications scoring each item according to its association with malnutrition. I assessed numbers classified to different score-levels by the simplifications and, via proportional hazards regression, how the simplifications predicted in-hospital mortality. I analyzed 318 patients (median age 37, interquartile range 27 to 56). Variables making it into the simplifications were: reduced food intake, weight loss, mobility, and either BMI in kg/m(2) (categorized as age, sex, and HIV status. The MNA-sf simplifications described may provide an improved measure of malnutrition in hospitalized young and middle-aged adults. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Quantum copying and simplification of the quantum Fourier transform

    Science.gov (United States)

    Niu, Chi-Sheng

    Theoretical studies of quantum computation and quantum information theory are presented in this thesis. Three topics are considered: simplification of the quantum Fourier transform in Shor's algorithm, optimal eavesdropping in the BB84 quantum cryptographic protocol, and quantum copying of one qubit. The quantum Fourier transform preceding the final measurement in Shor's algorithm is simplified by replacing a network of quantum gates with one that has fewer and simpler gates controlled by classical signals. This simplification results from an analysis of the network using the consistent history approach to quantum mechanics. The optimal amount of information which an eavesdropper can gain, for a given level of noise in the communication channel, is worked out for the BB84 quantum cryptographic protocol. The optimal eavesdropping strategy is expressed in terms of various quantum networks. A consistent history analysis of these networks using two conjugate quantum bases shows how the information gain in one basis influences the noise level in the conjugate basis. The no-cloning property of quantum systems, which is the physics behind quantum cryptography, is studied by considering copying machines that generate two imperfect copies of one qubit. The best qualities these copies can have are worked out with the help of the Bloch sphere representation for one qubit, and a quantum network is worked out for an optimal copying machine. If the copying machine does not have additional ancillary qubits, the copying process can be viewed using a 2-dimensional subspace in a product space of two qubits. A special representation of such a two-dimensional subspace makes possible a complete characterization of this type of copying. This characterization in turn leads to simplified eavesdropping strategies in the BB84 and the B92 quantum cryptographic protocols.

  15. Simplification of Home Cooking and Its Periphery

    OpenAIRE

    小住, フミ子; 北崎, 康子; Fumiko, OZUMI; Yasuko, KITAZAKI

    1997-01-01

    The sense of home cooking has been changing with the times. Various topics which make us conscious of health and dietary habits, such as delicatessen food, half-ready-made foods, eating out, and the use of home delivery services and food imports, are involved in the simplification of cooking. We asked 64 students to fill in a three-part questionnaire. The response rate was 96.4%. The results are as follows: The main reason for purchasing delicatessen or half-ready-made foods was that "they...

  16. 77 FR 21846 - Reserve Requirements of Depository Institutions: Reserves Simplification

    Science.gov (United States)

    2012-04-12

    ... Requirements of Depository Institutions: Reserves Simplification AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final rule. SUMMARY: The Board is amending Regulation D, Reserve Requirements of Depository Institutions, to simplify the administration of reserve requirements. The final rule creates a...

  17. THE ELITISM OF LEGAL LANGUAGE AND THE NEED OF SIMPLIFICATION

    Directory of Open Access Journals (Sweden)

    Antonio Escandiel de Souza

    2016-12-01

    Full Text Available This article presents the results of the research project entitled “Simplification of legal language: a study on the view of the academic community of the University of Cruz Alta”. It is a qualitative study on simplifying legal language as a means of democratizing and pluralizing access to justice, as viewed by the students and teachers of the Law Course. Society has great difficulty understanding legal terms, which hinders access to justice. At the same time, the legal field has not moved far from its traditional formalities, which indicates the existence of a parallel in which, on one side, stands society, with its problems of understanding, and on the other the law, with its inherent and intrinsic procedures. However, society may not have its access to the judiciary hampered on account of formalities arising from the law and its flowery language. Preliminary results indicate simplification of legal language as essential to real democratization of access to Law/Justice.

  18. Utilizing 'hot words' in ParaConc to verify lexical simplification ...

    African Journals Online (AJOL)

    Lexical simplification strategies investigated are: using a superordinate or more general word, using a general word with extended meaning and using more familiar or common synonyms. The analysis gives the reader an idea about how some general words are used to translate technical language. It also displays that 'hot ...

  19. A New Algorithm for Cartographic Simplification of Streams and Lakes Using Deviation Angles and Error Bands

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-10-01

    Full Text Available Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer’s radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).
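
    A minimal sketch of the two ingredients named in the abstract, characteristic vertices from deviation angles plus a planimetric check against an error band, under assumed simple thresholds (the published algorithm's exact rules are not reproduced):

```python
import math

def deviation_angle(a, b, c):
    """Angle (radians) by which the path deviates from straight at vertex b."""
    h1 = math.atan2(b[1] - a[1], b[0] - a[0])
    h2 = math.atan2(c[1] - b[1], c[0] - b[0])
    return abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1)))

def within_error_band(a, c, pts, half_width):
    """True if all given points lie within a band around the chord a-c."""
    dx, dy = c[0] - a[0], c[1] - a[1]
    L = math.hypot(dx, dy) or 1e-12
    return all(abs(dy*(x - a[0]) - dx*(y - a[1])) / L <= half_width
               for x, y in pts)

def simplify_stream(pts, min_angle, band_half_width):
    """Keep endpoints and characteristic vertices; drop a vertex only when
    the remaining chord keeps all removed vertices inside the error band."""
    keep = [0]
    for i in range(1, len(pts) - 1):
        characteristic = deviation_angle(pts[i-1], pts[i], pts[i+1]) >= min_angle
        chord_ok = within_error_band(pts[keep[-1]], pts[i+1],
                                     pts[keep[-1]+1:i+1], band_half_width)
        if characteristic or not chord_ok:
            keep.append(i)
    keep.append(len(pts) - 1)
    return [pts[i] for i in keep]

stream = [(0, 0), (1, 0.05), (2, 0.1), (3, 0.8), (4, 0)]
print(simplify_stream(stream, min_angle=math.radians(25), band_half_width=0.2))
# -> [(0, 0), (2, 0.1), (3, 0.8), (4, 0)]: the near-collinear vertex is dropped
```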

  20. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

    Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of simplified model characteristics. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.

  1. Adaptive simplification and the evolution of gecko locomotion: Morphological and biomechanical consequences of losing adhesion

    Science.gov (United States)

    Higham, Timothy E.; Birn-Jeffery, Aleksandra V.; Collins, Clint E.; Hulsey, C. Darrin; Russell, Anthony P.

    2015-01-01

    Innovations permit the diversification of lineages, but they may also impose functional constraints on behaviors such as locomotion. Thus, it is not surprising that secondary simplification of novel locomotory traits has occurred several times among vertebrates and could potentially lead to exceptional divergence when constraints are relaxed. For example, the gecko adhesive system is a remarkable innovation that permits locomotion on surfaces unavailable to other animals, but has been lost or simplified in species that have reverted to a terrestrial lifestyle. We examined the functional and morphological consequences of this adaptive simplification in the Pachydactylus radiation of geckos, which exhibits multiple unambiguous losses or bouts of simplification of the adhesive system. We found that the rates of morphological and 3D locomotor kinematic evolution are elevated in those species that have simplified or lost adhesive capabilities. This finding suggests that the constraints associated with adhesion have been circumvented, permitting these species to either run faster or burrow. The association between a terrestrial lifestyle and the loss/reduction of adhesion suggests a direct link between morphology, biomechanics, and ecology. PMID:25548182

  2. Development of process simulation code for reprocessing plant and process analysis for solvent degradation and solvent washing waste

    International Nuclear Information System (INIS)

    Tsukada, Tsuyoshi; Takahashi, Keiki

    1999-01-01

    We developed a process simulation code for an entire nuclear fuel reprocessing plant. The code can be used on a PC. Almost all of the equipment in the reprocessing plant is included in the code, and the mass balance model of each item of equipment is based on the distribution factors of its outlet streams. All models are connected, outlet flow to inlet flow, according to the process flow sheet. Using the developed code, we estimated the amount of DBP produced by TBP degradation in the entire process. Most of the DBP is generated in the Pu refining process by the effect of α radiation from Pu, which is extracted into the solvent. On the other hand, very little DBP is generated in the U refining process. We therefore propose simplification of the solvent washing process and volume reduction of the alkali washing waste in the U refining process. The first Japanese commercial reprocessing plant is currently under construction at Rokkasho Mura. Recently, for the sake of process simplification, the original process design has been changed. Using our code, we analyzed the original process and the simplified process. According to our results, the volume of alkali waste solution in the low-level liquid treatment process will be reduced by half in the simplified process. (author)
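
    The mass-balance idea in the abstract, with each equipment model distributing its inlet species over outlet streams by fixed distribution factors and outlets wired to downstream inlets, is easy to sketch. The unit names, species, and factor values below are hypothetical, not those of an actual reprocessing flowsheet:

```python
# Hypothetical sketch of a distribution-factor flowsheet. Each unit splits
# every inlet species over its outlet streams; streams connect units.
FEED = {"U": 1000.0, "Pu": 10.0, "DBP": 0.5}   # kg/h, illustrative only

# unit -> {outlet stream: {species: fraction of inlet going to that outlet}}
UNITS = {
    "extraction": {"product": {"U": 0.995, "Pu": 0.99, "DBP": 0.10},
                   "raffinate": {"U": 0.005, "Pu": 0.01, "DBP": 0.90}},
    "solvent_wash": {"clean_solvent": {"U": 1.0, "Pu": 1.0, "DBP": 0.02},
                     "alkali_waste": {"U": 0.0, "Pu": 0.0, "DBP": 0.98}},
}
WORKLIST = [("extraction", FEED)]                    # (unit, inlet stream)
ROUTE = {("extraction", "product"): "solvent_wash"}  # outlet -> next unit

streams = {}
while WORKLIST:
    unit, inlet = WORKLIST.pop()
    for outlet, factors in UNITS[unit].items():
        flow = {sp: inlet.get(sp, 0.0) * factors.get(sp, 0.0) for sp in inlet}
        streams[(unit, outlet)] = flow
        nxt = ROUTE.get((unit, outlet))
        if nxt:
            WORKLIST.append((nxt, flow))

print(streams[("solvent_wash", "alkali_waste")])   # DBP routed to waste
```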

  3. CSP-based chemical kinetics mechanisms simplification strategy for non-premixed combustion: An application to hybrid rocket propulsion

    KAUST Repository

    Ciottoli, Pietro P.

    2017-08-14

    A set of simplified chemical kinetics mechanisms for hybrid rocket applications using gaseous oxygen (GOX) and hydroxyl-terminated polybutadiene (HTPB) is proposed. The starting point is a 561-species, 2538-reactions, detailed chemical kinetics mechanism for hydrocarbon combustion. This mechanism is used for predictions of the oxidation of butadiene, the primary HTPB pyrolysis product. A Computational Singular Perturbation (CSP) based simplification strategy for non-premixed combustion is proposed. The simplification algorithm is fed with the steady-solutions of classical flamelet equations, these being representative of the non-premixed nature of the combustion processes characterizing a hybrid rocket combustion chamber. The adopted flamelet steady-state solutions are obtained employing pure butadiene and gaseous oxygen as fuel and oxidizer boundary conditions, respectively, for a range of imposed values of strain rate and background pressure. Three simplified chemical mechanisms, each comprising less than 20 species, are obtained for three different pressure values, 3, 17, and 36 bar, selected in accordance with an experimental test campaign of lab-scale hybrid rocket static firings. Finally, a comprehensive strategy is shown to provide simplified mechanisms capable of reproducing the main flame features in the whole pressure range considered.
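
    As a loose illustration of mechanism reduction driven by sampled reacting states, the sketch below applies a DRG-style direct-interaction screen, a simpler stand-in for the CSP-based strategy of the paper (CSP works with the eigen-structure of the Jacobian, which is not attempted here). The stoichiometry and rates are invented:

```python
import numpy as np

# S[i, k]: stoichiometric coefficient of species i in reaction k.
# r[k]   : reaction rate at one sampled flamelet state (assumed given).
S = np.array([[-1.0,  0.0,  0.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [ 0.0,  0.0,  1.0]])
r = np.array([5.0, 4.0, 0.05])
participates = np.abs(S) > 0

def interaction(i, j):
    """Fraction of species i's production/consumption carried by reactions
    that also involve species j (direct relation graph coefficient)."""
    num = sum(abs(S[i, k] * r[k]) for k in range(S.shape[1])
              if participates[j, k])
    den = sum(abs(S[i, k] * r[k]) for k in range(S.shape[1])) or 1e-300
    return num / den

target, threshold = 0, 0.1        # keep species strongly tied to species 0
kept = {target}
frontier = [target]
while frontier:                   # graph search from the target species
    i = frontier.pop()
    for j in range(S.shape[0]):
        if j not in kept and interaction(i, j) >= threshold:
            kept.add(j); frontier.append(j)
print("retained species indices:", sorted(kept))   # species 3 is dropped
```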

  4. Ecosystem simplification, biodiversity loss and plant virus emergence.

    Science.gov (United States)

    Roossinck, Marilyn J; García-Arenal, Fernando

    2015-02-01

    Plant viruses can emerge into crops from wild plant hosts, or conversely from domestic (crop) plants into wild hosts. Changes in ecosystems, including loss of biodiversity and increases in managed croplands, can impact the emergence of plant virus disease. Although data are limited, in general the loss of biodiversity is thought to contribute to disease emergence. More in-depth studies have been done for human viruses, but studies with plant viruses suggest similar patterns, and indicate that simplification of ecosystems through increased human management may increase the emergence of viral diseases in crops. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Application of a power plant simplification methodology: The example of the condensate feedwater system

    International Nuclear Information System (INIS)

    Seong, P.H.; Manno, V.P.; Golay, M.W.

    1988-01-01

    A novel framework for the systematic simplification of power plant design is described, with a focus on its application to the optimization of condensate feedwater system (CFWS) design. The evolution of design complexity of CFWS is reviewed, with emphasis on the underlying optimization process. A new evaluation methodology is described which explicitly accounts for human as well as mechanical effects upon system availability. The unifying figure of merit for an operating system is taken to be net electricity production cost. The evaluation methodology is applied to the comparative analysis of three designs. The illustrative examples show how including explicit availability-related costs in the evaluation leads to optimal configurations. These differ from those of current design practice in that thermodynamic efficiency and capital cost optimization are not overemphasized. Rather, a more complete set of design-dependent variables is taken into account, and other important variables which remain neglected in current practice are identified. A critique of the new optimization approach and a discussion of future work areas, including improved human performance modeling and different optimization constraints, are provided. (orig.)
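
    A toy version of the unifying figure of merit described above, net electricity production cost with availability (including human-error contributions) folded into the delivered energy, might look as follows; all numbers are invented for illustration:

```python
def production_cost_usd_per_mwh(capital_usd_per_yr, om_usd_per_yr,
                                fuel_usd_per_yr, rated_mw,
                                mech_availability, human_error_rate_per_yr,
                                mttr_hours):
    """Net production cost with availability-adjusted output (toy model).
    Human-induced outages reduce availability alongside mechanical ones."""
    hours = 8760.0
    human_downtime = human_error_rate_per_yr * mttr_hours / hours
    availability = mech_availability * (1.0 - human_downtime)
    net_mwh = rated_mw * hours * availability
    return (capital_usd_per_yr + om_usd_per_yr + fuel_usd_per_yr) / net_mwh

# A simpler CFWS (fewer feedwater heaters) may cut capital cost and human-
# error frequency at some cost in thermal efficiency (higher fuel cost):
complex_design = production_cost_usd_per_mwh(9e7, 4e7, 1.20e8, 1100, 0.92, 6, 72)
simple_design = production_cost_usd_per_mwh(8e7, 3.5e7, 1.26e8, 1100, 0.92, 2, 72)
print(f"complex: {complex_design:.2f} $/MWh, simple: {simple_design:.2f} $/MWh")
```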

  6. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent

  7. New helical-shape magnetic pole design for Magnetic Lead Screw enabling structure simplification

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Xia, Yongming; Wu, Weimin

    2015-01-01

    Magnetic lead screw (MLS) is a new type of high performance linear actuator that is attractive for many potential applications. The main difficulty of the MLS technology lies in the manufacturing of its complicated helical-shape magnetic poles. Structure simplification is, therefore, quite...

  8. The minimum attention plant inherent safety through LWR simplification

    International Nuclear Information System (INIS)

    Turk, R.S.; Matzie, R.A.

    1987-01-01

    The Minimum Attention Plant (MAP) is a unique small LWR that achieves greater inherent safety, improved operability, and reduced costs through design simplification. The MAP is a self-pressurized, indirect-cycle light water reactor with full natural-circulation primary coolant flow and multiple once-through steam generators located within the reactor vessel. A fundamental tenet of the MAP design is its complete reliance on existing LWR technology. This reliance on conventional technology provides an extensive experience base which gives confidence in judging the safety and performance aspects of the design

  9. The complexities of HIPAA and administration simplification.

    Science.gov (United States)

    Mozlin, R

    2000-11-01

    The Health Insurance Portability and Accountability Act (HIPAA) was signed into law in 1996. Although focused on information technology issues, HIPAA will ultimately impact day-to-day operations at multiple levels within any clinical setting. Optometrists must begin to familiarize themselves with HIPAA in order to prepare to practice in a technology-enriched environment. Title II of HIPAA, entitled "Administrative Simplification," is intended to reduce the costs and administrative burden of healthcare by standardizing the electronic transmission of administrative and financial transactions. The Department of Health and Human Services is expected to publish the final rules and regulations that will govern HIPAA's implementation this year. The rules and regulations will cover three key aspects of healthcare delivery: electronic data interchange (EDI), security, and privacy. EDI will standardize the format for healthcare transactions. Health plans must accept and respond to all transactions in the EDI format. Security refers to policies and procedures that protect the accuracy and integrity of information and limit access. Privacy focuses on how the information is used and on the disclosure of identifiable health information. Security and privacy regulations apply to all information that is maintained and transmitted in a digital format and require administrative, physical, and technical safeguards. HIPAA will force the healthcare industry to adopt an e-commerce paradigm and provide opportunities to improve patient care processes. Optometrists should take advantage of the opportunity to develop more efficient and profitable practices.

  10. 76 FR 64250 - Reserve Requirements of Depository Institutions: Reserves Simplification and Private Sector...

    Science.gov (United States)

    2011-10-18

    ... Simplification and Private Sector Adjustment Factor AGENCY: Board of Governors of the Federal Reserve System... comment on several issues related to the methodology used for the Private Sector Adjustment Factor that is... Analyst (202) 452- 3674, Division of Monetary Affairs, or, for questions regarding the Private Sector...

  11. Modelling chemical behavior of water reactor fuel

    Energy Technology Data Exchange (ETDEWEB)

    Ball, R G.J.; Hanshaw, J; Mason, P K; Mignanelli, M A [AEA Technology, Harwell (United Kingdom)

    1997-08-01

    For many applications, large computer codes have been developed which use correlations, simplifications and approximations in order to describe the complex situations which may occur during the operation of nuclear power plant or during fault scenarios. However, it is important to have a firm physical basis for the simplifications and approximations in such codes, and there has therefore been an emphasis on modelling the behaviour of materials and processes on a more detailed or fundamental basis. The application of fundamental modelling techniques to simulate various chemical phenomena in thermal reactor fuel systems is described in this paper. These methods include thermochemical modelling, kinetic and mass transfer modelling, and atomistic simulation, and examples of each approach are presented. For each of these applications the methods are summarized, together with the assessment process adopted to provide the fundamental parameters which form the basis of the calculation. (author). 25 refs, 9 figs, 2 tabs.

  12. Perceptual Recovery from Consonant-Cluster Simplification in Korean Using Language-Specific Phonological Knowledge

    NARCIS (Netherlands)

    Cho, T.; McQueen, J.M.

    2011-01-01

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for

  13. System Model of Heat and Mass Transfer Process for Mobile Solvent Vapor Phase Drying Equipment

    Directory of Open Access Journals (Sweden)

    Shiwei Zhang

    2014-01-01

    Full Text Available The solvent vapor phase drying process is one of the most important processes in the production and maintenance of large oil-immersed power transformers. In this paper, the working principle, system composition, and technological process of mobile solvent vapor phase drying (MVPD) equipment for transformers are introduced in detail. On the basis of necessary simplifications and assumptions for the MVPD equipment and process, a heat and mass transfer mathematical model comprising 40 equations is established. It fully represents the thermodynamic laws of the phase change and transport processes of solvent, water, and air in the MVPD technological process, and it describes in detail the quantitative relationships among important physical quantities, such as temperature, pressure, and flux, in the key equipment units and process stages. Taking a practical field drying process of a 500 kV/750 MVA power transformer as an example, the simulation of a complete technological process is carried out in MATLAB, and curves of key process parameters changing with time are obtained, such as body temperature, tank pressure, and water yield. The trend of the simulation results agrees very well with the actual production record data, which verifies the correctness of the established mathematical model.
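
    The 40-equation model cannot be reconstructed from the abstract, but the flavor of such lumped heat-and-mass balances can be conveyed by a two-state ODE, body temperature driven by condensing-solvent heat input and water yield following a temperature-dependent drying rate. All coefficients are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy lumped model of vapor-phase drying (illustrative coefficients only).
C_BODY = 2.0e6      # J/K, heat capacity of transformer body
Q_VAPOR = 5.0e4     # W, condensing-solvent heat input
UA_LOSS = 800.0     # W/K, losses to surroundings
T_AMB = 293.0       # K
K_DRY = 1.0e-7      # 1/(s*K above ambient), drying-rate coefficient
M_W0 = 400.0        # kg, initial water in insulation

def rhs(t, y):
    T, m_w = y
    dTdt = (Q_VAPOR - UA_LOSS * (T - T_AMB)) / C_BODY
    dmdt = -K_DRY * max(T - T_AMB, 0.0) * m_w   # hotter body -> faster drying
    return [dTdt, dmdt]

sol = solve_ivp(rhs, (0.0, 72 * 3600.0), [T_AMB, M_W0], max_step=600.0)
T_end, m_end = sol.y[0, -1], sol.y[1, -1]
print(f"after 72 h: body T = {T_end - 273.0:.1f} C, "
      f"water yield = {M_W0 - m_end:.1f} kg")
```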

  14. MODELING OF PROCESSES OF OVERCOMING CONTRADICTIONS OF THE STATE AND ECONOMIC OPERATORS FOR THE SECURITY AND FACILITATION OF CUSTOMS PROCEDURES

    Directory of Open Access Journals (Sweden)

    Berezhnyuk Ivan

    2018-03-01

    Full Text Available Introduction. The issue of simultaneously ensuring the economic security of the state and simplifying customs procedures is highly topical today. The study stresses the importance of creating a «safe» business environment from the point of view of the customs sphere, one based on «security», «justice» and «stability». Purpose. To develop methodical recommendations for modeling the processes of overcoming the contradictions between the state and the subjects of foreign economic activity in the field of security and simplification of customs procedures. Results. The research indicates that the purpose of revenue and fee bodies is to create favorable conditions for the development of foreign economic activity, to ensure the safety of society, and to protect the customs interests of Ukraine. In performing their customs duties, the tasks assigned to the SFS (ensuring the correct application of, strict observance of, and prevention of non-compliance with Ukrainian legislation on state customs issues) may present risks that are inherently contradictory, with conflicting vectors of action: on the one hand, the probability of non-compliance with customs legislation by the subjects of foreign trade; on the other, the creation of significant bureaucratic barriers for economic operators. There is a peculiar conflict of interests between the state and the subjects of foreign economic activity. The main direction for creating a favorable business environment, in accordance with the WCO recommendations, is the further simplification of customs procedures for subjects with a high degree of trust, fighting corruption, and facilitating the movement of goods, vehicles and people in general. Conclusions. Thus, the scheme of «relations» between the state and the subjects of foreign economic activity can be modeled by means of game theory, which is
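
    The closing idea, modeling the state-versus-operator interaction with game theory, can be illustrated with a toy bimatrix game and a brute-force search for pure-strategy Nash equilibria; the strategies and payoffs are invented:

```python
import numpy as np

# Rows: customs authority (0 = strict control, 1 = simplified procedures).
# Cols: economic operator (0 = comply, 1 = evade). Payoffs are invented.
payoff_state = np.array([[3.0,  2.0],
                         [5.0, -1.0]])
payoff_operator = np.array([[1.0, -2.0],
                            [4.0,  2.0]])

def pure_nash(A, B):
    """All (row, col) pairs where neither player gains by deviating alone."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(payoff_state, payoff_operator))   # -> [(1, 0)]
```

    Under these invented payoffs the only pure equilibrium is (simplified procedures, comply), the trusted-trader outcome the article argues for; different payoffs can of course yield different equilibria.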

  15. Analysis of Simplifications Applied in Vibration Damping Modelling for a Passive Car Shock Absorber

    Directory of Open Access Journals (Sweden)

    Łukasz Konieczny

    2016-01-01

    Full Text Available The paper presents results of research on hydraulic automotive shock absorbers. The considerations provided in the paper point out certain flaws and simplifications that result from assuming damping characteristics to be a function of input velocity only, as is the case in simulation studies. An important aspect taken into account when determining the damping parameters of car shock absorbers at a testing station is the permissible range of characteristics for a shock absorber of a given type. The aim of this study was to determine damping characteristics that also account for the stroke value. The stroke and rotary velocities were selected so that the same maximum linear velocity could be obtained for different combinations. In this way, the influence of excitation parameters such as the stroke value on force-versus-displacement and force-versus-velocity diagrams was determined. 3D characteristics, presented as a damping surface as a function of stroke and linear velocity, were determined. An analysis of the results highlights the impact of such factors on the shape of the closed-loop graphs of damping forces and on point-type damping characteristics.
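
    The stroke dependence criticized above can be made concrete: sweeping sinusoidal excitations whose stroke-frequency combinations share the same peak velocity against a toy damper model with a displacement-dependent term produces different force loops at identical peak velocity. The damper model is invented for illustration:

```python
import numpy as np

def damper_force(x, v, c_comp=1200.0, c_reb=1800.0, k_gas=3000.0):
    """Toy shock-absorber model: asymmetric viscous damping plus a
    displacement-dependent gas-spring term."""
    c = np.where(v >= 0.0, c_comp, c_reb)
    return c * v + k_gas * x

v_peak = 0.5                               # m/s, identical for all runs
for stroke in (0.025, 0.05, 0.10):         # m, excitation amplitude
    omega = v_peak / stroke                # rad/s so that max |v| = v_peak
    t = np.linspace(0.0, 2.0 * np.pi / omega, 400)
    x = stroke * np.sin(omega * t)
    v = stroke * omega * np.cos(omega * t)
    F = damper_force(x, v)
    # Same peak velocity, different loops: the spread of force at v ~ 0
    # grows with stroke, which a velocity-only characteristic cannot show.
    near_zero_v = np.abs(v) < 0.05 * v_peak
    print(f"stroke {stroke*1e3:5.1f} mm: peak force {F.max():7.1f} N, "
          f"force spread at v~0: {np.ptp(F[near_zero_v]):7.1f} N")
```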

  16. An integrated computer aided system for integrated design of chemical processes

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Hytoft, Glen; Jaksland, Cecilia

    1997-01-01

    In this paper, an Integrated Computer Aided System (ICAS), which is particularly suitable for solving problems related to integrated design of chemical processes; is presented. ICAS features include a model generator (generation of problem specific models including model simplification and model ...... form the basis for the toolboxes. The available features of ICAS are highlighted through a case study involving the separation of binary azeotropic mixtures. (C) 1997 Elsevier Science Ltd....

  17. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  18. Baseline natural killer and T cell populations correlation with virologic outcome after regimen simplification to atazanavir/ritonavir alone (ACTG 5201).

    Directory of Open Access Journals (Sweden)

    John E McKinnon

    Full Text Available Simplified maintenance therapy with ritonavir-boosted atazanavir (ATV/r) provides an alternative treatment option for HIV-1 infection that spares nucleoside analogs (NRTIs) for future use and decreases toxicity. We hypothesized that the level of immune activation (IA) and recovery of lymphocyte populations could influence virologic outcomes after regimen simplification. Thirty-four participants with virologic suppression ≥ 48 weeks on antiretroviral therapy (2 NRTIs plus a protease inhibitor) were switched to ATV/r alone in the context of the ACTG 5201 clinical trial. Flow cytometric analyses were performed on PBMC isolated from 25 patients with available samples, of which 24 had lymphocyte recovery sufficient for this study. Assessments included enumeration of T-cells (CD4/CD8), natural killer (NK; CD3+CD56+CD16+) cells and cell-associated markers (HLA-DR, CDs 38/69/94/95/158/279). Eight of the 24 patients had at least one plasma HIV-1 RNA level (VL) >50 copies/mL during the study. NK cell levels below the group median of 7.1% at study entry were associated with development of VL >50 copies/mL following simplification by regression and survival analyses (p = 0.043 and 0.023), with an odds ratio of 10.3 (95% CI: 1.92-55.3). Simplification was associated with transient increases in naïve and CD25+ CD4+ T-cells, and had no impact on IA levels. Lower NK cell levels prior to regimen simplification were predictive of virologic rebound after discontinuation of nucleoside analogs. Regimen simplification did not have a sustained impact on markers of IA or T lymphocyte populations in 48 weeks of clinical monitoring. ClinicalTrials.gov NCT00084019.

  19. Integrating Tax Preparation with FAFSA Completion: Three Case Models

    Science.gov (United States)

    Daun-Barnett, Nathan; Mabry, Beth

    2012-01-01

    This research compares three different models implemented in four cities. The models integrated free tax-preparation services to assist low-income families with their completion of the Free Application for Federal Student Aid (FAFSA). There has been an increased focus on simplifying the FAFSA process. However, simplification is not the only…

  20. Computer modelling for better diagnosis and therapy of patients by cardiac resynchronisation therapy

    NARCIS (Netherlands)

    Pluijmert, Marieke; Lumens, Joost; Potse, Mark; Delhaas, Tammo; Auricchio, Angelo; Prinzen, Frits W

    2015-01-01

    Mathematical or computer models have become increasingly popular in biomedical science. Although they are a simplification of reality, computer models are able to link a multitude of processes to each other. In the fields of cardiac physiology and cardiology, models can be used to describe the

  1. Heat and water transport in soils and across the soil-atmosphere interface: 1. Theory and different model concepts

    DEFF Research Database (Denmark)

    Vanderborght, Jan; Fetzer, Thomas; Mosthaf, Klaus

    2017-01-01

    Evaporation is an important component of the soil water balance. It is composed of water flow and transport processes in a porous medium that are coupled with heat fluxes and free air flow. This work provides a comprehensive review of model concepts used in different research fields to describe...... on a theoretical level by identifying the underlying simplifications that are made for the different compartments of the system: porous medium, free flow and their interface, and by discussing how processes not explicitly considered are parameterized. Simplifications can be grouped into three sets depending......

  2. Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation.

    Science.gov (United States)

    Langhans, Simone D; Lienert, Judit

    2016-01-01

    River rehabilitation aims at alleviating the negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assess the current ecological state and prioritize river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments for river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific, depending on data and resource
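
    The contrast between the common simplifications and the experts' recommendations is easy to show numerically. A minimal sketch with hypothetical attribute scores and weights, an exponential value function, and a weighted-geometric-mean multiplicative aggregation (one simple non-additive choice, not necessarily the aggregation model used in the study):

```python
import numpy as np

# Two attributes scored on [0, 1]; weights sum to 1 (values invented).
weights = np.array([0.6, 0.4])
attributes = np.array([0.9, 0.1])     # good hydrology, poor morphology

def value_linear(a):
    return a                          # the common simplification

def value_exponential(a, c=3.0):
    # Non-linear value function with v(0) = 0 and v(1) = 1.
    return (1.0 - np.exp(-c * a)) / (1.0 - np.exp(-c))

def additive(v, w):
    return float(np.dot(w, v))

def multiplicative(v, w):
    # Weighted geometric mean: a very poor attribute drags the total down
    # (no full compensation between objectives).
    return float(np.prod(np.maximum(v, 1e-12) ** w))

v_lin, v_exp = value_linear(attributes), value_exponential(attributes)
print(f"linear + additive:            {additive(v_lin, weights):.3f}")
print(f"non-linear + additive:        {additive(v_exp, weights):.3f}")
print(f"non-linear + multiplicative:  {multiplicative(v_exp, weights):.3f}")
```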

  3. Modeling and simulation of pressurizer dynamic process in PWR nuclear power plant

    International Nuclear Information System (INIS)

    Ma Jin; Liu Changliang; Li Shu'na

    2010-01-01

    By analyzing the actual operating characteristics of the pressurizer in a pressurized water reactor (PWR) nuclear power plant, and based on some reasonable simplifications and basic assumptions, mass and energy conservation equations for the pressurizer's steam region and liquid region are set up. The purpose of this paper is to build a two-region non-equilibrium pressurizer model. The water level and pressure control system of the pressurizer is formed through model encapsulation. Dynamic simulation curves of the main parameters are also shown. Finally, comparisons between the theoretical analysis and the simulation results show that the two-region non-equilibrium model is reasonable. (authors)
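
    A generic two-region balance set of the kind the abstract describes might read as follows; the exchange terms (flashing, condensation, surge, spray, heaters) are written schematically and are assumptions of this sketch rather than the paper's exact closures:

```latex
% Two-region non-equilibrium pressurizer balances (illustrative form):
% subscripts s, l: steam and liquid regions; W: interphase/boundary mass
% flows; h: enthalpies; u: internal energies; Q: heater/spray heat terms.
\begin{aligned}
\frac{dM_s}{dt} &= W_{\text{flash}} - W_{\text{cond}}, &
\frac{d(M_s u_s)}{dt} &= W_{\text{flash}} h_g - W_{\text{cond}} h_f
  - p\,\frac{dV_s}{dt} + Q_s,\\
\frac{dM_l}{dt} &= W_{\text{surge}} + W_{\text{spray}}
  - W_{\text{flash}} + W_{\text{cond}}, &
\frac{d(M_l u_l)}{dt} &= W_{\text{surge}} h_{\text{surge}}
  + W_{\text{spray}} h_{\text{spray}} - W_{\text{flash}} h_g
  + W_{\text{cond}} h_f - p\,\frac{dV_l}{dt} + Q_l.
\end{aligned}
```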

  4. Model reduction and physical understanding of slowly oscillating processes : the circadian cycle.

    Energy Technology Data Exchange (ETDEWEB)

    Goussis, Dimitris A. (Ploutonos 7, Palaio Faliro, Greece); Najm, Habib N.

    2006-01-01

    A differential system that models the circadian rhythm in Drosophila is analyzed with the computational singular perturbation (CSP) algorithm. Reduced nonstiff models of prespecified accuracy are constructed, the form and size of which are time-dependent. When compared with conventional asymptotic analysis, CSP exhibits superior performance in constructing reduced models, since it can algorithmically identify and apply all the required order of magnitude estimates and algebraic manipulations. A similar performance is demonstrated by CSP in generating data that allow for the acquisition of physical understanding. It is shown that the processes driving the circadian cycle are (i) mRNA translation into monomer protein, and monomer protein destruction by phosphorylation and degradation (along the largest portion of the cycle); and (ii) mRNA synthesis (along a short portion of the cycle). These are slow processes. Their action in driving the cycle is allowed by the equilibration of the fastest processes: (1) the monomer dimerization with the dimer dissociation (along the largest portion of the cycle); and (2) the net production of monomer+dimer proteins with that of mRNA (along the short portion of the cycle). Additional results (regarding the time scales of the established equilibria, their origin, the rate limiting steps, the couplings among the variables, etc.) highlight the utility of CSP for automated identification of the important underlying dynamical features, otherwise accessible only for simple systems whose various suitable simplifications can easily be recognized.

  5. Simplification of Process Integration Studies in Intermediate Size Industries

    DEFF Research Database (Denmark)

    Dalsgård, Henrik; Petersen, P. M.; Qvale, Einar Bjørn

    2002-01-01

    It can be argued that the largest potential for energy savings based on process integration is in the intermediate size industry. But this is also the industrial scale in which it is most difficult to make the introduction of energy saving measures economically interesting. The reasons ... Four steps that may be used ... associated with a given process integration study in an intermediate size industry. This is based on the observation that the systems that eventually result from a process integration project and that are economically and operationally most interesting are also quite simple. ... and therefore lead to non-optimal economic solutions, which may be right. But the objective of the optimisation is not to reach the best economic solution, but to relatively quickly develop the design of a simple and operationally friendly network without losing too much energy saving potential. (C) 2002

  6. Development of a global aerosol model using a two-dimensional sectional method: 1. Model design

    Science.gov (United States)

    Matsui, H.

    2017-08-01

    This study develops an aerosol module, the Aerosol Two-dimensional bin module for foRmation and Aging Simulation version 2 (ATRAS2), and implements the module into a global climate model, Community Atmosphere Model. The ATRAS2 module uses a two-dimensional (2-D) sectional representation with 12 size bins for particles from 1 nm to 10 μm in dry diameter and 8 black carbon (BC) mixing state bins. The module can explicitly calculate the enhancement of absorption and cloud condensation nuclei activity of BC-containing particles by aging processes. The ATRAS2 module is an extension of the 2-D sectional aerosol module ATRAS used in our previous studies within the framework of a regional three-dimensional model. Compared with ATRAS, the computational cost of the aerosol module is reduced by more than a factor of 10 by simplifying the treatment of aerosol processes and the 2-D sectional representation, while maintaining good accuracy of aerosol parameters in the simulations. Aerosol processes are simplified for condensation of sulfate, ammonium, and nitrate, organic aerosol formation, coagulation, and new particle formation processes, and box model simulations show that these simplifications do not substantially change the predicted aerosol number and mass concentrations and their mixing states. The 2-D sectional representation is simplified (the number of advected species is reduced) primarily by the treatment of chemical compositions using two interactive bin representations. The simplifications do not change the accuracy of global aerosol simulations. In part 2, comparisons with measurements and the results focused on aerosol processes such as BC aging processes are shown.

  7. Modelling of Spherical Gas Bubble Oscillations and Sonoluminescence

    Science.gov (United States)

    Prosperetti, A.; Hao, Y.

    1999-01-01

    The discovery of single-bubble sonoluminescence has led to a renewed interest in the forced radial oscillations of gas bubbles. Many of the more recent studies devoted to this topic have used several simplifications in the modelling, in particular in accounting for liquid compressibility and thermal processes in the bubble. In this paper the significance of these simplifications is explored by contrasting the results of Lohse and co-workers with those of a more detailed model. It is found that, even though there may be little apparent difference between the radius-versus-time behaviour of the bubble as predicted by the two models, quantities such as the spherical stability boundary and the threshold for rectified diffusion are affected in a quantitatively significant way. These effects are a manifestation of the subtle dependence upon dissipative processes of the phase of the radial motion with respect to the driving sound field. The parameter-space region where, according to the theory of Lohse and co-workers, sonoluminescence should be observable is recalculated with the new model and is found to be enlarged with respect to the earlier estimate. The dependence of this parameter region on sound frequency is also illustrated.
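
    For orientation, the sketch below integrates the kind of simplified radial-dynamics model under scrutiny here - an incompressible Rayleigh-Plesset equation with a polytropic gas law and viscous and surface-tension terms. The parameter values are illustrative, and this is not the detailed model of the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Water at room temperature (SI units); all values are illustrative.
    rho, sigma, mu = 998.0, 0.0725, 1.0e-3   # density, surface tension, viscosity
    p0, R0 = 101325.0, 5.0e-6                # ambient pressure, equilibrium radius
    kappa = 1.4                              # polytropic exponent of the gas
    pa, f = 1.2 * p0, 26.5e3                 # acoustic amplitude and frequency
    pg0 = p0 + 2.0 * sigma / R0              # equilibrium gas pressure

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_inf = p0 - pa * np.sin(2.0 * np.pi * f * t)           # driving field
        p_gas = pg0 * (R0 / R) ** (3.0 * kappa)                 # polytropic gas
        p_wall = p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R  # liquid-side pressure
        Rddot = ((p_wall - p_inf) / rho - 1.5 * Rdot**2) / R
        return [Rdot, Rddot]

    sol = solve_ivp(rayleigh_plesset, (0.0, 5.0 / f), [R0, 0.0],
                    method="LSODA", rtol=1e-8, atol=1e-12)
    print("max expansion ratio R/R0:", sol.y[0].max() / R0)
    ```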

  8. Radiative processes in gauge theories

    International Nuclear Information System (INIS)

    Berends, F.A.; Kleiss, R.; Danckaert, D.; Causmaecker, P. De; Gastmans, R.; Troost, W.; Tai Tsun Wu

    1982-01-01

    It is shown how the introduction of explicit polarization vectors of the radiated gauge particles leads to great simplifications in the calculation of bremsstrahlung processes at high energies. (author)

  9. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    Science.gov (United States)

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    To balance calculation precision against calculation speed, the original crankshaft model is simplified to an appropriate degree, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration for the crankshaft of an air compressor, this paper compares the calculated and experimental modal frequencies of the crankshaft before and after simplification, calculates the vibration response at a reference point under the constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between calculated and experimental modal frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated accordingly during manufacture.

  10. Reference Models for Multi-Layer Tissue Structures

    Science.gov (United States)

    2016-09-01

    function of multi-layer tissues (etiology and management of pressure ulcers). What was the impact on other disciplines? As part of the project, a data ... simplification to develop cost-effective models of surface manipulation of multi-layer tissues. Deliverables. Specimen- (or subject-) and region-specific ... simplification to develop cost-effective models of surgical manipulation. Deliverables. Specimen-specific surrogate models of upper legs confirmed against data

  11. Simplification of arboreal marsupial assemblages in response to increasing urbanization.

    Science.gov (United States)

    Isaac, Bronwyn; White, John; Ierodiaconou, Daniel; Cooke, Raylene

    2014-01-01

    Arboreal marsupials play an essential role in ecosystem function including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.

  12. Treatment simplification in HIV-infected adults as a strategy to prevent toxicity, improve adherence, quality of life and decrease healthcare costs

    Directory of Open Access Journals (Sweden)

    Vitória M

    2011-07-01

    Jean B Nachega,1–3 Michael J Mugavero,4 Michele Zeier,2 Marco Vitória,5 Joel E Gallant3,6 (1Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; 2Department of Medicine and Centre for Infectious Diseases (CID), Stellenbosch University, Faculty of Health Sciences, Cape Town, South Africa; 3Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; 4Division of Infectious Diseases, Department of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA; 5HIV Department, World Health Organization, Geneva, Switzerland; 6Department of Medicine, Division of Infectious Diseases, Johns Hopkins University School of Medicine, Baltimore, MD, USA). Abstract: Since the advent of highly active antiretroviral therapy (HAART), the treatment of human immunodeficiency virus (HIV) infection has become more potent and better tolerated. While the current treatment regimens still have limitations, they are more effective, more convenient, and less toxic than regimens used in the early HAART era, and new agents, formulations and strategies continue to be developed. Simplification of therapy is an option for many patients currently being treated with antiretroviral therapy (ART). The main goals are to reduce pill burden, improve quality of life and enhance medication adherence, while minimizing short- and long-term toxicities, reducing the risk of virologic failure and maximizing cost-effectiveness. ART simplification strategies that are currently used or are under study include the use of once-daily regimens, less toxic drugs, fixed-dose coformulations and induction-maintenance approaches. Improved adherence and persistence have been observed with the adoption of some of these strategies. The role of regimen simplification has implications not only for individual patients, but also for health care policy. With increased interest in ART regimen simplification, it is critical to

  13. Computer control system synthesis for nuclear power plants through simplification and partitioning of the complex system model into a set of simple subsystems

    International Nuclear Information System (INIS)

    Zobor, E.

    1978-12-01

    The approach chosen is based on hierarchical control systems theory; however, the fundamentals of other approaches, such as systems simplification and systems partitioning, are briefly summarized to introduce the problems associated with the control of large-scale systems. The concept of a hierarchical control system acting in a broad variety of operating conditions is developed, and some practical extensions to the hierarchical control system approach, e.g. subsystems measured and controlled at different rates, control of the partial state vector, and coordination for autoregressive models, are given. Throughout the work the WWR-SM research reactor of the Institute has been taken as a guiding example, and simple methods for the identification of the model parameters from a reactor start-up are discussed. Using the PROHYS digital simulation program, elaborated in the course of the present research, detailed simulation studies were carried out to investigate the performance of a control system based on the concept and algorithms developed. In order to give real application evidence, a short description is finally given of the closed-loop computer control system installed - in the framework of a project supported by the Hungarian State Office for Technical Development - at the WWR-SM research reactor, where the results obtained in the present IAEA Research Contract were successfully applied and furnished the expected high performance.

  14. Exploring the spatial distribution of light interception and photosynthesis of canopies by means of a functional-structural plant model

    NARCIS (Netherlands)

    Sarlikioti, V.; Visser, de P.H.B.; Marcelis, L.F.M.

    2011-01-01

    Background and Aims - At present most process-based models and the majority of three-dimensional models include simplifications of plant architecture that can compromise the accuracy of light interception simulations and, accordingly, canopy photosynthesis. The aim of this paper is to analyse canopy

  15. One-dimensional models for mountain-river morphology

    NARCIS (Netherlands)

    Sieben, A.

    1996-01-01

    In this report, some classical and new simplifications in mathematical and numerical models for river morphology are compared for conditions representing rivers in mountainous areas (high values of Froude numbers and relatively large values of sediment transport rates). Options for simplification

  16. Subthalamic stimulation: toward a simplification of the electrophysiological procedure.

    Science.gov (United States)

    Fetter, Damien; Derrey, Stephane; Lefaucheur, Romain; Borden, Alaina; Wallon, David; Chastan, Nathalie; Maltete, David

    2016-06-01

    The aim of the present study was to assess the consequences of a simplification of the electrophysiological procedure on the post-operative clinical outcome after subthalamic nucleus implantation in Parkinson disease. Microelectrode recordings were performed on 5 parallel trajectories in group 1 and on fewer than 5 trajectories in group 2. Clinical evaluations were performed 1 month before and 6 months after surgery. After surgery, the UPDRS III score in the off-drug/on-stimulation and on-drug/on-stimulation conditions significantly improved by 66.9% and 82%, respectively, in group 1, and by 65.8% and 82.3% in group 2 (P<0.05). Meanwhile, the total number of words in fluency tasks significantly decreased in both groups (P<0.05). Motor disability improvement and medication reduction were similar in both groups. Our results suggest that the electrophysiological procedure can be simplified as the team's experience increases.

  17. Is Dysfunctional Use of the Mobile Phone a Behavioural Addiction? Confronting Symptom-Based Versus Process-Based Approaches.

    Science.gov (United States)

    Billieux, Joël; Philippot, Pierre; Schmid, Cécile; Maurage, Pierre; De Mol, Jan; Van der Linden, Martial

    2015-01-01

    Dysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance. The addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Generalizing on best practices in image processing: a model for promoting research integrity: Commentary on: Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images.

    Science.gov (United States)

    Benos, Dale J; Vollmer, Sara H

    2010-12-01

    Modifying images for scientific publication is now quick and easy due to changes in technology. This has created a need for new image processing guidelines and attitudes, such as those offered to the research community by Doug Cromey (Cromey 2010). We suggest that related changes in technology have simplified the task of detecting misconduct for journal editors as well as researchers, and that this simplification has caused a shift in the responsibility for reporting misconduct. We also argue that the concept of best practices in image processing can serve as a general model for education in best practices in research.

  19. Simplification of arboreal marsupial assemblages in response to increasing urbanization.

    Directory of Open Access Journals (Sweden)

    Bronwyn Isaac

    Arboreal marsupials play an essential role in ecosystem function including regulating insect and plant populations, facilitating pollen and seed dispersal and acting as a prey source for higher-order carnivores in Australian environments. Primarily, research has focused on their biology, ecology and response to disturbance in forested and urban environments. We used presence-only species distribution modelling to understand the relationship between occurrences of arboreal marsupials and eco-geographical variables, and to infer habitat suitability across an urban gradient. We used post-proportional analysis to determine whether increasing urbanization affected potential habitat for arboreal marsupials. The key eco-geographical variables that influenced disturbance intolerant species and those with moderate tolerance to disturbance were natural features such as tree cover and proximity to rivers and to riparian vegetation, whereas variables for disturbance tolerant species were anthropogenic-based (e.g., road density) but also included some natural characteristics such as proximity to riparian vegetation, elevation and tree cover. Arboreal marsupial diversity was subject to substantial change along the gradient, with potential habitat for disturbance-tolerant marsupials distributed across the complete gradient and potential habitat for less tolerant species being restricted to the natural portion of the gradient. This resulted in highly-urbanized environments being inhabited by a few generalist arboreal marsupial species. Increasing urbanization therefore leads to functional simplification of arboreal marsupial assemblages, thus impacting on the ecosystem services they provide.

  20. Using subdivision surfaces and adaptive surface simplification algorithms for modeling chemical heterogeneities in geophysical flows

    Science.gov (United States)

    Schmalzl, JöRg; Loddoch, Alexander

    2003-09-01

    We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the lengths of the triangle edges and on the sum of the angles the triangle forms with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady-state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
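
    The abstract does not give the exact functional form of the proposed weight, so the sketch below only illustrates how a refinement weight combining edge length with the bending angles to neighbouring triangles might look; alpha and beta are hypothetical tuning parameters:

    ```python
    import numpy as np

    def dihedral_angle(n1, n2):
        """Angle between two unit triangle normals (radians)."""
        return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

    def refinement_weight(edge_lengths, own_normal, neighbor_normals,
                          alpha=1.0, beta=1.0):
        """Illustrative weight: refine large, strongly bent triangles first."""
        bending = sum(dihedral_angle(own_normal, n) for n in neighbor_normals)
        return alpha * max(edge_lengths) + beta * bending

    n = np.array([0.0, 0.1, 0.995])
    n /= np.linalg.norm(n)
    w = refinement_weight(edge_lengths=[0.02, 0.013, 0.018],
                          own_normal=np.array([0.0, 0.0, 1.0]),
                          neighbor_normals=[n, n, n])
    print("refinement priority:", w)
    ```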

  1. On the role of model structure in hydrological modeling : Understanding models

    NARCIS (Netherlands)

    Gharari, S.

    2016-01-01

    Modeling is an essential part of the science of hydrology. Models enable us to formulate what we know and perceive from the real world into a neat package. Rainfall-runoff models are abstract simplifications of how a catchment works. Within the research field of scientific rainfall-runoff modeling,

  2. Modelling long-term redox processes and oxygen scavenging in fractured crystalline rocks

    International Nuclear Information System (INIS)

    Sidborn, Magnus

    2007-10-01

    Advanced plans for the construction of a deep geological repository for highly radioactive wastes from nuclear power plants have evolved during the past decades in many countries, including Sweden. As part of the Swedish concept, the waste is to be encapsulated in canisters surrounded by low-permeability backfill material. The copper canisters will be deposited at around 500 metres depth in granitic rock, which acts as a natural barrier for the transport of radionuclides to the ground surface. These natural and engineered barriers are chosen and designed to ensure the safety of the repository over hundreds of thousands of years. One issue of interest for the safety assessment of such a repository is the redox evolution over long times. An oxidising environment would enhance the corrosion of the copper canisters and increase the mobility of any released radionuclides. In the first part of the present thesis, the ability of the host rock to ensure a reducing environment at repository depth over long times was studied. A model framework was developed with the aim of capturing all processes that are deemed to be important for the scavenging of intruding oxygen from the ground surface over long times. Simplifications allowing for analytical solutions were introduced for transparency, so that evaluation of results is straightforward and uncertain parameter values can easily be adjusted. More complex systems were solved numerically for cases where the analytical simplifications are not applicable, and to validate the simplifications underlying the analytical solutions. Results were presented for prevailing present-day conditions as well as for conditions deemed likely during the melting phase of a period of glaciation. It was shown that the hydraulic properties have a great influence on the oxygen intrusion length downstream along flow paths in the rock. An important parameter that determines the extent of interaction between the dissolved oxygen and
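
    As a generic illustration of the kind of analytical estimate described (not the thesis model itself), steady advection of dissolved oxygen along a flow path with first-order scavenging yields an exponential decay characterized by a simple penetration length:

    ```python
    import numpy as np

    # Illustrative parameters (not from the thesis): advection velocity of
    # groundwater and an effective first-order O2 scavenging rate along a
    # fracture flow path in the rock.
    u = 1.0    # advection velocity [m/yr]
    k = 0.05   # effective consumption rate [1/yr]
    c0 = 0.3   # dissolved O2 at the inflow boundary [mol/m^3]

    # Steady advection-reaction balance u dc/dx = -k c gives c(x) = c0 exp(-x/L)
    L = u / k  # oxygen penetration length [m]
    x = np.linspace(0.0, 500.0, 6)
    print("penetration length L =", L, "m")
    print("c(x)/c0 =", np.round(np.exp(-x / L), 4))
    ```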

  3. Modeling of hydrogenation reactor of soya oil

    International Nuclear Information System (INIS)

    Sotudeh-Gharebagh, R.; Niknam, L.; Mostoufi, N.

    2008-01-01

    In this paper, the performance of a batch hydrogenation reactor was modeled using hydrodynamic and reaction sub-models. The reaction expressions were obtained from information reported in the literature. Experimental studies were conducted in order to generate the experimental data needed to validate the model. The agreement between the experimental data and the model predictions is quite satisfactory considering the hydrodynamic limitations and the simplifications made to the reaction scheme. The results of this study could be considered a framework for developing new process equipment and also for soya oil product design for new applications.

  4. Elements of complexity in subsurface modeling, exemplified with three case studies

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  5. Numerical Simulation of Cyclic Thermodynamic Processes

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård

    2006-01-01

    This thesis is on numerical simulation of cyclic thermodynamic processes. A modelling approach and a method for finding periodic steady state solutions are described. Examples of applications are given in the form of four research papers. Stirling machines and pulse tube coolers are introduced ... and a brief overview of the current state of the art in methods for simulating such machines is presented. It was found that different simulation approaches, which model the machines with different levels of detail, currently coexist. Methods using many simplifications can be easy to use and can provide ... models flexible and easy to modify, and to make simulations fast. A high level of accuracy was achieved for integrations of a model created using the modelling approach; the accuracy depended on the settings for the numerical solvers in a very predictable way. Selection of fast numerical algorithms

  6. Investigation on the optimal simplified model of BIW structure using FEM

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Shojaeefard

    Abstract: At conceptual phases of designing a vehicle, engineers need simplified models to examine the structural and functional characteristics and apply custom modifications for achieving the best vehicle design. Using a detailed finite-element (FE) model of the vehicle at early steps can be very conducive; however, it has the drawbacks of being excessively time-consuming and expensive. This leads engineers to utilize trade-off simplified models of the body-in-white (BIW), composed of only the most decisive structural elements, that do not require extensive prior knowledge of the vehicle dimensions and constitutive materials. However, the extent and type of simplification remain ambiguous. In fact, during the procedure of simplification one will be in a quandary over which kind of approach and which body elements should be considered for simplification to optimize costs and time while providing acceptable accuracy. Although different approaches for optimization of the timeframe and for achieving optimal designs of the BIW are proposed in the literature, a comparison between different simplification methods, and accordingly the introduction of the best models, which is the main focus of this research, has not yet been done. In this paper, an industrial sedan vehicle has been simplified through four different simplified FE models, each of which examines the validity of the extent of simplification from a different point of view. Bending and torsional stiffness are obtained for all models considering boundary conditions similar to the experimental tests. The acquired values are then compared to the target values from experimental tests for validation of the FE modeling. Finally, the results are examined and, taking efficacy and accuracy into account, the best trade-off simplified model is presented.

  7. Simplified model for determining local heat flux boundary conditions for slagging wall

    Energy Technology Data Exchange (ETDEWEB)

    Bingzhi Li; Anders Brink; Mikko Hupa [Aabo Akademi University, Turku (Finland). Process Chemistry Centre

    2009-07-15

    In this work, two models for calculating heat transfer through a cooled vertical wall covered with a running slag layer are investigated. The first relies on a discretization of the velocity equation, and the second on an analytical solution. The aim is to find a model that can be used for calculating local heat flux boundary conditions in computational fluid dynamics (CFD) analysis of such processes. Two different cases where molten deposits exist are investigated: the black liquor recovery boiler and the coal gasifier. The results show that a model relying on discretization of the velocity equation is more flexible in handling different temperature-viscosity relations. Nevertheless, the model relying on an analytical solution is the one fast enough for potential use as a CFD sub-model. Furthermore, the influence of simplifications to the heat balance in the model is investigated. It is found that simplification of the heat balance can be applied when the radiative heat flux is dominant in the balance. 9 refs., 7 figs., 10 tabs.
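
    In the simplified limit in which conduction across the slag layer and the wall dominates the heat balance, the local heat flux reduces to a series-resistance expression. The sketch below assumes that limit and illustrative property values; the models compared in the paper additionally couple the running slag thickness to its temperature-dependent viscosity:

    ```python
    def wall_heat_flux(T_gas, T_coolant, h_gas, layers):
        """1-D series-resistance heat flux through a slag-covered cooled wall.

        layers: list of (thickness [m], conductivity [W/m/K]) pairs, e.g.
        slag layer and metal wall. This is only the simplified conduction
        limit; it ignores the coupling between slag flow and temperature.
        """
        R = 1.0 / h_gas + sum(t / k for t, k in layers)  # total resistance
        return (T_gas - T_coolant) / R                   # heat flux [W/m^2]

    q = wall_heat_flux(T_gas=1600.0, T_coolant=300.0, h_gas=150.0,
                       layers=[(0.005, 1.2),    # slag layer
                               (0.02, 40.0)])   # steel wall
    print("heat flux:", round(q), "W/m^2")
    ```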

  8. Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.

    Science.gov (United States)

    Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J

    2016-08-01

    To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.

  9. Process-based modelling of NH3 exchange with grazed grasslands

    Science.gov (United States)

    Móring, Andrea; Vieno, Massimo; Doherty, Ruth M.; Milford, Celia; Nemitz, Eiko; Twigg, Marsailidh M.; Horváth, László; Sutton, Mark A.

    2017-09-01

    In this study the GAG model, a process-based ammonia (NH3) emission model for urine patches, was extended and applied at the field scale. The new model (GAG_field) was tested over two modelling periods, for which micrometeorological NH3 flux data were available. Acknowledging uncertainties in the measurements, the model was able to simulate the main features of the observed fluxes. The temporal evolution of the simulated NH3 exchange flux was found to be dominated by NH3 emission from the urine patches, offset by simultaneous NH3 deposition to areas of the field not affected by urine. The simulations show how NH3 fluxes over a grazed field in a given day can be affected by urine patches deposited several days earlier, linked to the interaction of volatilization processes with soil pH dynamics. Sensitivity analysis showed that GAG_field was more sensitive to soil buffering capacity (β), field capacity (θfc) and permanent wilting point (θpwp) than the patch-scale model. The reason for these different sensitivities is twofold. Firstly, the difference originates from the different scales. Secondly, the difference can be explained by the different initial soil pH and physical properties, which determine the maximum volume of urine that can be stored in the NH3 source layer. It was found that in the case of urine patches with a higher initial soil pH and higher initial soil water content, the sensitivity of NH3 exchange to β was stronger. Also, in the case of a higher initial soil water content, NH3 exchange was more sensitive to changes in θfc and θpwp. The sensitivity analysis showed that the nitrogen content of urine (cN) is associated with high uncertainty in the simulated fluxes. However, model experiments based on cN values randomized from an estimated statistical distribution indicated that this uncertainty is considerably smaller in practice. Finally, GAG_field was tested with a constant soil pH of 7.5. The variation of NH3 fluxes simulated in this way

  10. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints are considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows the implementation of an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  11. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    Science.gov (United States)

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.

  12. Effects of model layer simplification using composite hydraulic properties

    Science.gov (United States)

    Kuniansky, Eve L.; Sepulveda, Nicasio; Elango, Lakshmanan

    2011-01-01

    Groundwater provides much of the fresh drinking water to more than 1.5 billion people in the world (Clarke et al., 1996), and in the United States more than 50 percent of citizens rely on groundwater for drinking water (Solley et al., 1998). As aquifer systems are developed for water supply, the hydrologic system is changed. Water pumped from the aquifer system initially can come from some combination of induced recharge, water permanently removed from storage, and decreased groundwater discharge. Once a new equilibrium is achieved, all of the pumpage must come from induced recharge and decreased discharge (Alley et al., 1999). Further development of groundwater resources may result in reductions of surface water runoff and base flows. Competing demands for groundwater resources require good management. Adequate data to characterize the aquifers and confining units of the system, such as hydrologic boundaries, groundwater levels, streamflow, groundwater pumping, and climatic data for recharge estimation, must be collected in order to quantify the effects of groundwater withdrawals on wetlands, streams, and lakes. Once collected, three-dimensional (3D) groundwater flow models can be developed, calibrated, and used as a tool for groundwater management. The main hydraulic parameters that comprise a regional or subregional model of an aquifer system are the hydraulic conductivity and storage properties of the aquifers and confining units (hydrogeologic units) of the system. Many 3D groundwater flow models used to help assess groundwater/surface-water interactions require calculating 'effective' or composite hydraulic properties of multilayered lithologic units within a hydrogeologic unit. The calculation of composite hydraulic properties stems from the need to characterize groundwater flow using coarse model layering in order to reduce simulation times while still representing the flow through the system accurately. The accuracy of flow models with
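
    The 'effective' properties referred to here are classically computed as thickness-weighted means: arithmetic for flow parallel to the layering and harmonic for flow across it. A minimal sketch with illustrative layer values:

    ```python
    def composite_conductivities(thicknesses, k_values):
        """Composite hydraulic conductivity of a stack of lithologic layers.

        Flow parallel to layering (horizontal): thickness-weighted arithmetic mean.
        Flow across layering (vertical): thickness-weighted harmonic mean.
        These are the standard formulas used when several lithologic units
        are lumped into one coarse model layer.
        """
        b_total = sum(thicknesses)
        kh = sum(b * k for b, k in zip(thicknesses, k_values)) / b_total
        kv = b_total / sum(b / k for b, k in zip(thicknesses, k_values))
        return kh, kv

    # Sand (10 m, 30 m/d), clay (2 m, 1e-4 m/d), sand (8 m, 25 m/d)
    kh, kv = composite_conductivities([10.0, 2.0, 8.0], [30.0, 1e-4, 25.0])
    print(f"Kh = {kh:.2f} m/d, Kv = {kv:.2e} m/d, anisotropy = {kh/kv:.0f}")
    ```

    The thin clay layer barely changes the horizontal mean but dominates the vertical harmonic mean, which is exactly why lumping layers demands composite, strongly anisotropic properties.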

  13. The limitations of mathematical modeling in high school physics education

    Science.gov (United States)

    Forjan, Matej

    The theme of this doctoral dissertation falls within the scope of didactics of physics. A theoretical analysis is presented of the key constraints that occur in the transfer of mathematical modeling of dynamical systems into the field of physics education in secondary schools. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on the representation of the various stages of modeling in the solved tasks in textbooks and on the presentation of certain simplifications and idealizations which are frequently used in high school physics. We show that one of the textbooks in most cases presents the simplifications fairly and reasonably, while the other two leave half of the analyzed simplifications unexplained. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state model assumptions, from which we can conclude that high school physics does not sufficiently develop students' sense for simplifications and idealizations, which is a key part of the conceptual phase of modeling. The students' prior knowledge is also important for the introduction of modeling of dynamical systems; we therefore performed an empirical study of the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The research results show that students have a very weak understanding of the dynamics of systems in which feedbacks are present, independent of their year or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations resulting from students' lack of mathematical knowledge, since they do not know how to solve differential equations analytically. We show that when dealing with one-dimensional dynamical systems

  14. A new model for the simplification of particle counting data

    Directory of Open Access Journals (Sweden)

    M. F. Fadal

    2012-06-01

    This paper proposes a three-parameter mathematical model to describe the particle size distribution in a water sample. The proposed model offers some conceptual advantages over two other models previously reported, and also provides a better fit to the particle counting data obtained from 321 water samples taken over three years at a large South African drinking water supplier. Using the data from raw water samples taken from a moderately turbid, large surface impoundment, as well as samples from the same water after treatment, typical ranges of the model parameters are presented for both raw and treated water. Once calibrated, the model allows the calculation and comparison of total particle numbers and volumes over any randomly selected size interval of interest.
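
    The abstract does not state the functional form of the three-parameter model; purely to illustrate the calibrate-then-integrate workflow, the sketch below assumes a hypothetical power law with an exponential cut-off, fitted to synthetic channel counts:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import quad

    def psd_model(d, A, beta, lam):
        """Hypothetical three-parameter size distribution: power law with an
        exponential cut-off. The paper's actual functional form is not given
        in the abstract; this is just one common choice."""
        return A * d**(-beta) * np.exp(-lam * d)

    # Synthetic particle counts per size channel (diameter in micrometres)
    d = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0, 35.0])
    n = np.array([5200.0, 2100.0, 640.0, 190.0, 70.0, 15.0, 2.0])

    (A, beta, lam), _ = curve_fit(psd_model, d, n, p0=(1e4, 2.0, 0.01))
    print(f"A={A:.3g}, beta={beta:.2f}, lambda={lam:.3g}")

    # Once calibrated, counts over any size interval follow by integration
    total_5_20, _ = quad(psd_model, 5.0, 20.0, args=(A, beta, lam))
    print("particles between 5 and 20 um:", round(total_5_20))
    ```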

  15. Methodologies for Systematic Assessment of Design Simplification. Annex II

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-12-15

    Nuclear power plants are sophisticated engineered systems. To achieve a commercial nuclear power plant, its functions, systems and components need to be elaborated from design ideas to technical solutions and to the appropriate hardware over a long period of time. On the way, several design alternatives usually compete for implementation in the final plant. Engineering teams perform assessments, comparing different proposed engineering options in order to select an appropriate solution for the specific plant aimed at specific customers. This is a common process in design evolution. During such assessments, the trade-offs associated with different options are not always as simple as they appear at very early design stages. Any requirement (e.g. relevant to safety, availability or competitiveness) usually has several dimensions; therefore, a change in the design aimed at producing the targeted effect (e.g. simplification of passive safety systems) as a rule produces other effects not directly related to the original idea. This means that the assessment needs to be carried out in iterations, so as not to bypass any meaningful feedback. The assessment then becomes a challenge for those designers who are interested in exploring innovative approaches and simplified systems. Unlike in several developed countries, nuclear energy has so far been only marginally used in small and medium sized developing countries. One of the important reasons for this has been the lack of competitive commercial nuclear options with small and medium sized reactors (SMRs). The challenge for SMR designers has thus been to design simpler plants in order to counterbalance the well known penalties of economy of scale. The lack of experience with SMRs in small and medium sized developing countries could be viewed as practical proof of the lack of commercial success of such reactors. Fossil fuelled gas turbine technologies offer very competitive energy options available from tens to hundreds of MW(e), with

  16. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to bring human insight into the problem, to examine evolved models and to pick the best-performing programs for further analysis.
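
    The moving-average pre-processing ingredient is easy to sketch; the window length and streamflow values below are illustrative:

    ```python
    import numpy as np

    def moving_average(x, window=3):
        """Trailing moving-average filter used to smooth the streamflow
        series before model induction; smoothing reduces the tendency of
        data-driven models to issue lagged 'naive' predictions."""
        x = np.asarray(x, dtype=float)
        kernel = np.ones(window) / window
        # 'valid' keeps only fully overlapped windows (no edge artefacts)
        return np.convolve(x, kernel, mode="valid")

    q = [12.0, 14.0, 13.0, 20.0, 35.0, 28.0, 22.0, 18.0]  # daily flows [m^3/s]
    print(moving_average(q, window=3))
    # The smoothed series (and its lags) then feed the MGGP model, and the
    # Pareto front over (complexity, error) is used to pick a parsimonious tree.
    ```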

  17. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed to account for these constraints. The Matrix Reduction method allows the reduced model to reproduce the reference values of the differential equation exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
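
    As a generic illustration of one classical linear reduction (not the OHB/ESATAN-TMS implementation), internal nodes of a steady-state linear thermal network can be eliminated exactly with a Schur complement (static condensation):

    ```python
    import numpy as np

    def condense(K, Q, keep):
        """Static condensation of a linear thermal network K T = Q.

        Internal nodes are eliminated with a Schur complement, leaving an
        exactly equivalent steady-state model on the retained nodes. This
        is a sketch of one classical linear technique; transient reduction
        must additionally handle the capacitance matrix.
        """
        keep = np.asarray(keep)
        drop = np.setdiff1d(np.arange(K.shape[0]), keep)
        Krr = K[np.ix_(keep, keep)]; Kri = K[np.ix_(keep, drop)]
        Kir = K[np.ix_(drop, keep)]; Kii = K[np.ix_(drop, drop)]
        K_red = Krr - Kri @ np.linalg.solve(Kii, Kir)
        Q_red = Q[keep] - Kri @ np.linalg.solve(Kii, Q[drop])
        return K_red, Q_red

    # 4-node conduction chain (G = 1 W/K between neighbours, end node sunk)
    G = 1.0
    K = np.array([[ G,  -G,  0.,  0.],
                  [-G, 2*G,  -G,  0.],
                  [0.,  -G, 2*G,  -G],
                  [0.,  0.,  -G, 2*G]])
    Q = np.array([1.0, 0.0, 0.0, 0.0])     # 1 W dissipated at node 0
    K_red, Q_red = condense(K, Q, keep=[0, 3])
    print(np.linalg.solve(K_red, Q_red))   # matches nodes 0 and 3 below
    print(np.linalg.solve(K, Q)[[0, 3]])
    ```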

  18. Estimating Diurnal Courses of Gross Primary Production for Maize: A Comparison of Sun-Induced Chlorophyll Fluorescence, Light-Use Efficiency and Process-Based Models

    Directory of Open Access Journals (Sweden)

    Tianxiang Cui

    2017-12-01

    Accurately quantifying gross primary production (GPP) is of vital importance to understanding the global carbon cycle. Light-use efficiency (LUE) models and process-based models have been widely used to estimate GPP at different spatial and temporal scales. However, large uncertainties remain in quantifying GPP, especially for croplands. Recently, remote measurements of solar-induced chlorophyll fluorescence (SIF) have provided a new perspective to assess actual levels of plant photosynthesis. In the presented study, we evaluated the performance of three approaches, including the LUE-based multi-source data synergized quantitative (MuSyQ) GPP algorithm, the process-based boreal ecosystem productivity simulator (BEPS) model, and the SIF-based statistical model, in estimating the diurnal courses of GPP at a maize site in Zhangye, China. A field campaign was conducted to acquire synchronous far-red SIF (SIF760) observations and flux tower-based GPP measurements. Our results showed that both SIF760 and GPP were linearly correlated with APAR, and the SIF760-GPP relationship was adequately characterized using a linear function. The evaluation of the modeled GPP against the GPP measured from the tower demonstrated that all three approaches provided reasonable estimates, with R2 values of 0.702, 0.867, and 0.667 and RMSE values of 0.247, 0.153, and 0.236 mg m−2 s−1 for the MuSyQ-GPP, BEPS and SIF models, respectively. This study indicated that the BEPS model simulated the GPP best due to its efficiency in describing the underlying physiological processes of sunlit and shaded leaves. The MuSyQ-GPP model was limited by its simplification of some critical ecological processes and its weakness in characterizing the contribution of shaded leaves. The SIF760-based model demonstrated a relatively limited accuracy but showed its potential in modeling GPP without dependency on climate inputs in short-term studies.

  19. Dynamic experiments with high bisphenol-A concentrations modelled with an ASM model extended to include a separate XOC degrading microorganism

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Press-Kristensen, Kåre; Vanrolleghem, P.A.

    2009-01-01

    The perspective of this work is to develop a model, which can be used to better understand and optimize wastewater treatment plants that are able to remove xenobiotic organic compounds (XOCs) in combination with removal of traditional pollutants. Results from dynamic experiments conducted ... with the endocrine disrupting XOC bisphenol-A (BPA) in an activated sludge process with real wastewater were used to hypothesize an ASM-based process model including aerobic growth of a specific BPA-degrading microorganism and sorption of BPA to sludge. A parameter estimation method was developed, which ... simultaneously utilizes steady-state background concentrations and dynamic step response data, as well as conceptual simplifications of the plant configuration. Validation results show that biodegradation of BPA is sensitive to operational conditions before and during the experiment and that the proposed model

  20. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  1. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  2. An Analysis of Simplification Strategies in a Reading Textbook of Japanese as a Foreign Language

    Directory of Open Access Journals (Sweden)

    Kristina HMELJAK SANGAWA

    2016-06-01

    Reading is one of the bases of second language learning, and it can be most effective when the linguistic difficulty of the text matches the reader's level of language proficiency. The present paper reviews previous research on the readability and simplification of Japanese texts, and presents an analysis of a collection of simplified texts for learners of Japanese as a foreign language. The simplified texts are compared to their original versions to uncover different strategies used to make the texts more accessible to learners. The list of strategies thus obtained can serve as useful guidelines for assessing, selecting, and devising texts for learners of Japanese as a foreign language.

  3. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  4. Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?

    Science.gov (United States)

    Sprintsin, M.; Chen, J. M.

    2009-05-01

    The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have brought into question the assumptions behind this approach showing that big leaf approximation is inadequate for simulating canopy photosynthesis because of the additional leaf internal control on carbon assimilation and because of the non-linear response of photosynthesis on leaf nitrogen and absorbed light, and changes in leaf microenvironment with canopy depth. To avoid this problem a sunlit/shaded leaf separation approach, within which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy, for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); Lost Creek shrubland site (Wisconsin) and Mer Bleue petland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to canopy level with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf level parameters scaled up to canopy level by means of Leaf Area Index. The influence of sunlit

  5. Use of simplified models in the performance assessment of a high-level waste repository system in Japan

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mohanty, Sitakanta; Kanno, Takeshi; Tochigi, Yoshikatsu

    2005-01-01

    This paper explores simplifications to the H12 performance assessment model to enhance performance in Monte Carlo analyses. It is shown that reference-case results similar to those of the H12 model can be derived by describing the buffer material surrounding a waste package as a planar body. Other possible simplifications to the performance assessment model, in areas related to the stratification of the host rock transmissivity domain and solubility constraints in the buffer material, are explored. (author)

  6. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ

  7. Ecosystem models are by definition simplifications of the real ...

    African Journals Online (AJOL)

    spamer

    to calculate changes in total phytoplankton vegetative biomass with time ... into account when modelling phytoplankton population dynamics. ... Then, the means whereby the magnitude of ..... There was increased heat input and slight stratification from mid to ... conditions must be optimal and the water should be extremely ...

  8. a Geometric Processing Workflow for Transforming Reality-Based 3d Models in Volumetric Meshes Suitable for Fea

    Science.gov (United States)

    Gonizzi Barsanti, S.; Guidi, G.

    2017-02-01

    Conservation of Cultural Heritage is a key issue, and structural changes and damages can influence the mechanical behaviour of artefacts and buildings. Finite Element Methods (FEM) are widely used for mechanical analysis in modelling stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-spline (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the models are not suitable for direct use in FEA: the mesh has to be converted to a volumetric one, and its density has to be reduced, since the computational complexity of an FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The proposed approach is based on a wise use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces, suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency that the retopology step makes possible is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, creating in the meantime a topology that is more favourable for the automatic NURBS conversion.
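
    For contrast with the careful retopology advocated above, the sketch below shows the kind of generic vertex-clustering decimation that reduces node count but does not preserve a clean topology; it is a self-contained illustration in Python/NumPy, not the authors' workflow, and the grid size is an arbitrary assumption.

        import numpy as np

        def cluster_decimate(vertices, faces, cell=0.01):
            """Vertex-clustering simplification: merge all vertices that fall
            into the same grid cell and drop degenerate triangles.

            vertices: (n, 3) float array; faces: (m, 3) int array."""
            keys = np.floor(vertices / cell).astype(np.int64)
            _, inverse = np.unique(keys, axis=0, return_inverse=True)
            counts = np.bincount(inverse)
            new_vertices = np.zeros((counts.size, 3))
            for dim in range(3):   # each new vertex is the centroid of its cluster
                new_vertices[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
            new_faces = inverse[faces]
            keep = ((new_faces[:, 0] != new_faces[:, 1])
                    & (new_faces[:, 1] != new_faces[:, 2])
                    & (new_faces[:, 2] != new_faces[:, 0]))
            return new_vertices, new_faces[keep]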

  9. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    Science.gov (United States)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
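
    The core idea of deriving an interface process can be illustrated with a deliberately toy projection: keep only the interactions a given enterprise participates in, and turn them into send/receive steps. The `Interaction` record and role names below are invented for the example; the actual method transforms full collaborative process models, including control flow, through model-driven (MDA) transformations.

        from dataclasses import dataclass

        @dataclass
        class Interaction:
            sender: str
            receiver: str
            message: str

        def interface_process(collaborative_process, enterprise):
            """Project a collaborative process onto one enterprise's interface process."""
            steps = []
            for i in collaborative_process:
                if i.sender == enterprise:
                    steps.append(("send", i.message, i.receiver))
                elif i.receiver == enterprise:
                    steps.append(("receive", i.message, i.sender))
                # interactions not involving this enterprise are invisible to it
            return steps

        collab = [Interaction("Buyer", "Seller", "PurchaseOrder"),
                  Interaction("Seller", "Buyer", "Invoice")]
        print(interface_process(collab, "Seller"))
        # [('receive', 'PurchaseOrder', 'Buyer'), ('send', 'Invoice', 'Buyer')]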

  10. Simplification of the processing of milled aluminium powder and mechanical evaluation properties

    International Nuclear Information System (INIS)

    Cintas, J.; Rodriguez, J. A.; Gallardo, J. M.; Herrera, E. J.

    2001-01-01

    An alternative powder-metallurgy consolidation method for milled aluminium (M Al) powder, consisting of a double cycle of cold pressing and vacuum sintering, has been developed. The aim of the present investigation is to simplify this consolidation method from the original five steps to only three. This is possible because milled powders soften during degassing at high temperature. The mechanical properties of compacts (hardness at room and high temperature, ultimate tensile strength and elongation) obtained by the three-step and the five-step processing are comparable. The process could be of special interest for the manufacturing of large series of small parts, such as are used in the automotive industry. (Author) 10 refs

  11. Improvement of the low frequency oscillation model for Hall thrusters

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chunsheng, E-mail: wangcs@hit.edu.cn; Wang, Huashan [Yanshan University, College of Vehicles and Energy, Qinhuangdao 066004, Hebei (China)

    2016-08-15

    The low frequency oscillation of the discharge current in Hall thrusters is a major aspect of these devices that requires further study. While the existing model captures the ionization mechanism of the low frequency oscillation, it unfortunately fails to express the dynamic characteristics of the ion acceleration. The analysis in this paper shows this is because of the simplification of the electron equation, which affects both the electric field distribution and the ion acceleration process. Additionally, the electron density equation is revised and a new model that is based on the physical properties of ion movement is proposed.

  12. Content of nitrates in potato tubers depending on the organic matter, soil fertilizer, cultivation simplifications applied and storage

    Directory of Open Access Journals (Sweden)

    Jaroslaw Pobereżny

    2015-03-01

    Nitrates naturally occur in plant-based food. The nitrate content in consumable plant organs is small and should not raise concern provided that the recommended fertilization and harvest terms of the original plants are observed. The aim was to determine the effect of applying various forms of organic matter, a soil fertilizer, and cultivation simplifications in growing potato (Solanum tuberosum L.) on the content of nitrates in the tubers of the mid-early cultivar 'Satina' after harvest and after 6-mo of storage. The cultivation simplifications involved limiting mineral fertilization by 50% as well as limiting chemical protection. The soil fertilizer was applied at 0.6 (autumn), 0.3 (spring), and 0.3 L ha-1 (during the vegetation period). The content of nitrates was determined with the use of the ion-selective method (multi-purpose computer device CX-721, Elmetron). The lowest amount of nitrates was recorded in the tubers from the plots without the application of organic matter, with a 50% rate of mineral fertilization and with soil fertilizer (120.5 mg kg-1 FW). The use of varied organic matter resulted in a significant increase in the content of nitrates in tubers, and the lowest effect on their accumulation was reported for straw. The soil fertilizer used significantly decreased the content of nitrates in tubers, by 15% for 100% NPK and 10.4% for 50% NPK. After 6-mo storage, irrespective of the experiment factors, the content of nitrates decreased by 26% in the fertilization experiment and by 19.9% in the experiment with limited protection.

  13. ERUPTION TO DOSE: COUPLING A TEPHRA DISPERSAL MODEL WITHIN A PERFORMANCE ASSESSMENT FRAMEWORK

    International Nuclear Information System (INIS)

    G. N. Keating, J. Pelletier

    2005-01-01

    The tephra dispersal model used by the Yucca Mountain Project (YMP) to evaluate the potential consequences of a volcanic eruption through the waste repository must incorporate simplifications in order to function within a large Monte-Carlo-style performance assessment framework. That is, the explicit physics of the conduit, vent, and eruption column processes are abstracted to a 2-D, steady-state advection-dispersion model (ASHPLUME) that can be run quickly over thousands of realizations of the overall system model. Given the continuous development of tephra dispersal modeling techniques in the last few years, we evaluated the adequacy of this simplified model for its intended purpose within the YMP total system performance assessment (TSPA) model. We evaluated uncertainties inherent in the model simplifications, including (1) instantaneous, steady-state vs. unsteady eruption, which affects column height, (2) constant wind conditions, and (3) the power-law distribution of the tephra blanket; comparisons were made to other models and published ash distributions. Spatial statistics are useful for evaluating differences between this model's output and results using more complex wind, column height, and tephra deposition patterns. However, in order to assess the adequacy of the model for its intended use in TSPA, we evaluated the propagation of these uncertainties through FAR, the YMP ash redistribution model, which utilizes ASHPLUME tephra deposition results to calculate the concentration of nuclear-waste-contaminated tephra at a dose-receptor population as a result of sedimentary transport and mixing processes on the landscape. Questions we sought to answer include: (1) what conditions of unsteadiness, wind variability, or departure from the simplified tephra distribution result in significant effects on waste concentration (related to the dose calculated for the receptor population)? (2) What criteria can be established for the adequacy of a tephra dispersal model within the TSPA
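
    To make the flavour of such a simplified dispersal kernel concrete, the sketch below implements a crude steady-state Gaussian deposition model: particles released at column height fall at a constant settling velocity while being advected by a constant wind and diffusing horizontally. It is illustrative only, not ASHPLUME, and every parameter value is an assumption.

        import numpy as np

        def tephra_deposit(x, y, q=1e9, u=10.0, h=5000.0, v_s=1.0, k_diff=2000.0):
            """Deposited mass per unit area (kg m-2) at ground position (x, y).

            q: erupted mass (kg), u: wind speed (m/s), h: column height (m),
            v_s: particle settling velocity (m/s), k_diff: horizontal
            diffusivity (m2/s)."""
            t_fall = h / v_s                   # time spent falling
            sigma2 = 2.0 * k_diff * t_fall     # horizontal spread accumulated while falling
            r2 = (x - u * t_fall) ** 2 + y ** 2
            return q / (2.0 * np.pi * sigma2) * np.exp(-r2 / (2.0 * sigma2))

        print(tephra_deposit(50_000.0, 0.0))   # deposit on the plume axis, 50 km downwind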

  14. Injury Based on Its Study in Experimental Models

    Directory of Open Access Journals (Sweden)

    M. Mendes-Braz

    2012-01-01

    The present review focuses on the numerous experimental models used to study the complexity of hepatic ischemia/reperfusion (I/R) injury. Although experimental models of hepatic I/R injury represent a compromise between the clinical reality and experimental simplification, the clinical transfer of experimental results is problematic because of anatomical and physiological differences and the inevitable simplification of experimental work. In this review, the strengths and limitations of the various models of hepatic I/R are discussed. Several strategies to protect the liver from I/R injury have been developed in animal models, and some of these might find their way into clinical practice. We also attempt to highlight the fact that the mechanisms responsible for hepatic I/R injury depend on the experimental model used, and therefore the therapeutic strategies also differ according to the model used. Thus, the choice of model must be adapted to the clinical question being answered.

  15. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    -constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology...... of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration...... that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights...

  16. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  17. Assessing the Impact of Canopy Structure Simplification in Common Multilayer Models on Irradiance Absorption Estimates of Measured and Virtually Created Fagus sylvatica (L. Stands

    Directory of Open Access Journals (Sweden)

    Pol Coppin

    2009-11-01

    of leaves differed significantly between a multilayer representation and a 3D architecture canopy of the same LAI. The deviations in irradiance absorbance were caused by canopy structure, clumping and positioning of leaves. Although the use of canopy simplifications for modelling purposes in closed canopies was demonstrated to be a valid option, special care should be taken when considering irradiance simulation for sparse forest stands, particularly at higher sun zenith angles, where the surrounding trees strongly affect the absorbed irradiance and results can deviate greatly from the multilayer assumptions.

  18. Visualizing the process of process modeling with PPMCharts

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.; La Rosa, M.; Soffer, P.

    2013-01-01

    In the quest for knowledge about how to make good process models, recent research focus is shifting from studying the quality of process models to studying the process of process modeling (often abbreviated as PPM) itself. This paper reports on our efforts to visualize this specific process in such

  19. A Model of Political Violence

    Science.gov (United States)

    2015-01-01

    drug addiction, crime). Additionally, the focus on expectations is an over-simplification of the decision making process that determines who is...and informal organizations rely upon social and professional networks.13 The rise of social media such as Twitter, Facebook, and LinkedIn greatly

  20. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge

    2016-01-01

    . It is computed by solving a Poisson equation which includes flow turbulence cross correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models...

  1. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge

    2017-01-01

    . It is computed by solving a Poisson equation which includes flow turbulence cross correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models...

  2. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    Science.gov (United States)

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictability. This paper proposes a method employing a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process by handling a high number of input features with a low sampling data set; the method is called SVMs-DS. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, the basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by the Dempster rules; the decision threshold to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
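
    A minimal sketch of the fusion step is given below: each SVM posterior is discounted into a basic probability assignment over {good, bad, unknown}, and the three BPAs are combined with Dempster's rule. The two-class frame, the discount factor and the example posteriors are assumptions for illustration; the paper's sensor feature extraction and decision thresholding are omitted.

        def bpa_from_svm(p_good, discount=0.1):
            # Reserve `discount` of the mass as ignorance ("unknown").
            return {"good": (1 - discount) * p_good,
                    "bad": (1 - discount) * (1 - p_good),
                    "unknown": discount}

        def dempster_combine(m1, m2):
            """Dempster's rule on the frame {good, bad} with an 'unknown' mass."""
            combined = {"good": 0.0, "bad": 0.0, "unknown": 0.0}
            conflict = 0.0
            for a, wa in m1.items():
                for b, wb in m2.items():
                    w = wa * wb
                    if a == b:
                        combined[a] += w
                    elif "unknown" in (a, b):          # unknown intersects anything
                        combined[a if b == "unknown" else b] += w
                    else:                              # {good} and {bad} conflict
                        conflict += w
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        m = bpa_from_svm(0.80)
        for p in (0.70, 0.90):                         # posteriors from the other two SVMs
            m = dempster_combine(m, bpa_from_svm(p))
        print(m)                                       # fused belief in good/bad/unknown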

  3. Operation room tool handling and miscommunication scenarios: an object-process methodology conceptual model.

    Science.gov (United States)

    Wachs, Juan P; Frenkel, Boaz; Dori, Dov

    2014-11-01

    Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of miscommunication in the operating room and their outcomes. The model takes advantage of the compact ontology of OPM, which is comprised of stateful objects - things that exist physically or informatically, and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using the OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at

  4. Business process model repositories : efficient process retrieval

    NARCIS (Netherlands)

    Yan, Z.

    2012-01-01

    As organizations increasingly work in process-oriented manner, the number of business process models that they develop and have to maintain increases. As a consequence, it has become common for organizations to have collections of hundreds or even thousands of business process models. When a

  5. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Full text: Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and identification of error sources in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months), compatibility of the scintillation cocktail/absorption solution (with particular regard to photoluminescence and chemiluminescence), reduction in chemical quenching (moisture reduction), understanding of the counting hardware and its relevance, and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially

  6. Modeling Aspects of Activated Sludge Processes Part l l: Mathematical Process Modeling and Biokinetics of Activated Sludge Processes

    Energy Technology Data Exchange (ETDEWEB)

    AbdElHaleem, H S [Cairo Univ. - Civil Eng. Dept., Giza (Egypt); EI-Ahwany, A H [Cairo University - Faculty of Engineering - Chemical Engineering Department, Giza (Egypt); Ibrahim, H I [Helwan University - Faculty of Engineering - Biomedical Engineering Department, Helwan (Egypt); Ibrahim, G [Menofia University - Faculty of Engineering Shebin El Kom - Basic Eng. Sc. Dept., Menofia (Egypt)]

    2004-07-01

    Mathematical process modeling and biokinetics of the activated sludge process were reviewed considering different types of models. The task group models ASM1, ASM2, and ASM3, developed by Henze et al., were evaluated considering the conditions and the constituent processes of each model. It is revealed that ASM1 contains some defects that are avoided in ASM3. Based on homogeneity, models can be classified into homogeneous models, which treat the activated sludge process as one phase; in this type of model, internal mass transfer inside the flocs is neglected, so the kinetic parameters produced can be considered inaccurate. The other type is the heterogeneous model, which considers mass transfer operations in addition to the biochemical reaction processes; hence, the resulting kinetic parameters can be considered more accurate than those of the homogeneous type.

  7. Modeling Aspects of Activated Sludge Processes Part l l: Mathematical Process Modeling and Biokinetics of Activated Sludge Processes

    International Nuclear Information System (INIS)

    AbdElHaleem, H.S.; EI-Ahwany, A. H.; Ibrahim, H.I.; Ibrahim, G.

    2004-01-01

    Mathematical process modeling and biokinetics of the activated sludge process were reviewed considering different types of models. The task group models ASM1, ASM2, and ASM3, developed by Henze et al., were evaluated considering the conditions and the constituent processes of each model. It is revealed that ASM1 contains some defects that are avoided in ASM3. Based on homogeneity, models can be classified into homogeneous models, which treat the activated sludge process as one phase; in this type of model, internal mass transfer inside the flocs is neglected, so the kinetic parameters produced can be considered inaccurate. The other type is the heterogeneous model, which considers mass transfer operations in addition to the biochemical reaction processes; hence, the resulting kinetic parameters can be considered more accurate than those of the homogeneous type.

  8. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about the ...

  9. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  10. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling....... the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension...

  11. Angular overlap model in actinides

    International Nuclear Information System (INIS)

    Gajek, Z.; Mulak, J.

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed

  12. Angular overlap model in actinides

    Energy Technology Data Exchange (ETDEWEB)

    Gajek, Z.; Mulak, J. (Polska Akademia Nauk, Wroclaw (PL). Inst. Niskich Temperatur i Badan Strukturalnych)

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed.

  13. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    . These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety......This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models...... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  14. Modeling assumptions influence on stress and strain state in 450 t cranes hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

    This work investigates the FEM simulation of the stress and strain state of a selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The model of the trolley was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications) in selected construction nodes on the values of maximum stress and strain and their areas of occurrence was analyzed. The aim of this study was to determine whether the simplifications, which reduce the time required to prepare the model and perform calculations (e.g., a rigid connection instead of contact), substantially change the characteristics of the model.

  15. Advertising in the Sznajd Marketing Model

    Science.gov (United States)

    Schulze, Christian

    The traditional Sznajd model, as well as its Ochrombel simplification for opinion spreading, is applied to marketing with the help of advertising. The larger the lattice, the smaller the amount of advertising needed to convince the whole market.
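
    A minimal sketch of this setup, assuming the Ochrombel simplification on a one-dimensional ring and modelling advertising as a small per-step probability that a random agent adopts the advertised opinion (+1), might look as follows; the lattice size and probabilities are illustrative.

        import random

        def ochrombel_with_ads(n=50, p_ad=0.01, max_sweeps=10000, seed=1):
            """Return the number of sweeps until full consensus, or None."""
            random.seed(seed)
            s = [random.choice([-1, 1]) for _ in range(n)]
            for sweep in range(max_sweeps):
                for _ in range(n):
                    i = random.randrange(n)
                    # Ochrombel rule: one agent convinces both neighbours.
                    s[(i - 1) % n] = s[(i + 1) % n] = s[i]
                    if random.random() < p_ad:
                        s[random.randrange(n)] = 1   # advertising pushes opinion +1
                if abs(sum(s)) == n:
                    return sweep
            return None

        print(ochrombel_with_ads())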

  16. Fatigue crack growth spectrum simplification: Facilitation of on-board damage prognosis systems

    Science.gov (United States)

    Adler, Matthew Adam

    2009-12-01

    monitoring and management of aircraft. A spectrum reduction method was proposed and experimentally validated that reduces a variable-amplitude spectrum to a constant-amplitude equivalent. The reduction from a variable-amplitude (VA) spectrum to a constant-amplitude equivalent (CAE) was proposed as a two-part process. Preliminary spectrum reduction is first performed by eliminating those loading events shown to be too negligible to contribute significantly to fatigue crack growth; this is accomplished by rainflow counting. The next step is to calculate the appropriate equivalent maximum and minimum loads by means of a root-mean-square average. This reduced spectrum defines the CAE and replaces the original spectrum. The simplified model was experimentally shown to produce approximately the same fatigue crack growth as the original spectrum. Fatigue crack growth experiments for two dissimilar aircraft spectra across a wide range of stress-intensity levels validated the proposed spectrum reduction procedure. Irrespective of the initial K-level, the constant-amplitude equivalent spectra were always conservative in crack growth rate, by an average of 50% over the full range tested. This corresponds to a maximum 15% overestimation of the driving force Delta K. Given other typical sources of scatter that occur during fatigue crack growth, a consistently 50% conservative prediction of crack growth rate is very satisfying. This is especially attractive given the reduction in cost gained by the simplification. We now have a seamless system that gives an acceptably good approximation of the damage occurring in the aircraft. This contribution is significant because it provides a simple path to bypass the current infrastructure and ground-support requirements. The decision-making is now much simpler. In managing an entire fleet we now have a workable system whose strength is that it requires no massive, isolated computational center. The fidelity of the model
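
    The two-part reduction lends itself to a very small implementation. The sketch below assumes rainflow counting has already produced (max, min) load pairs and that all loads are non-negative (as in a tension-dominated wing spectrum); the 10% relative-range threshold for discarding negligible cycles is an illustrative assumption, not the validated criterion from the experiments.

        import numpy as np

        def constant_amplitude_equivalent(cycles, threshold=0.1):
            """Collapse a rainflow-counted VA spectrum to a CAE (s_max, s_min).

            cycles: iterable of (load_max, load_min) pairs from rainflow counting.
            Cycles whose range is below `threshold` times the largest range are
            treated as non-damaging and removed; the survivors are reduced to
            root-mean-square average maximum and minimum loads."""
            cycles = np.asarray(cycles, dtype=float)
            ranges = cycles[:, 0] - cycles[:, 1]
            keep = cycles[ranges >= threshold * ranges.max()]
            s_max = np.sqrt(np.mean(keep[:, 0] ** 2))
            s_min = np.sqrt(np.mean(keep[:, 1] ** 2))
            return s_max, s_min

        print(constant_amplitude_equivalent([(100, 10), (80, 5), (12, 9), (90, 20)]))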

  17. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret...

  18. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    Science.gov (United States)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  19. Advertising effects in Sznajd marketing model

    OpenAIRE

    Christian Schulze

    2002-01-01

    The traditional Sznajd model, as well as its Ochrombel simplification for opinion spreading, is applied to marketing with the help of advertising. The larger the lattice, the smaller the amount of advertising needed to convince the whole market.

  20. Applying the Business Process and Practice Alignment Meta-model: Daily Practices and Process Modelling

    Directory of Open Access Journals (Sweden)

    Ventura Martins Paula

    2017-03-01

    Background: Business Process Modelling (BPM) is one of the most important phases of information system design. Business Process (BP) meta-models allow capturing informational and behavioural aspects of business processes. Unfortunately, standard BP meta-modelling approaches focus just on process description, providing different BP models. It is not possible to compare and identify related daily practices in order to improve BP models. This lack of information implies that further research in BP meta-models is needed to reflect the evolution/change in BP. Considering this limitation, this paper introduces a new BP meta-model, the Business Process and Practice Alignment Meta-model (BPPAMeta-model). Our intention is to present a meta-model that addresses features related to the alignment between daily work practices and BP descriptions. Objectives: This paper intends to present a meta-model which integrates daily work information into coherent and sound process definitions. Methods/Approach: The methodology employed in the research follows a design-science approach. Results: The results of the case study are related to the application of the proposed meta-model to align the specification of a BP model with work practices models. Conclusions: This meta-model can be used within the BPPAM methodology to specify or improve business process models based on work practice descriptions.

  1. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes a building facade piece extraction and simplification algorithm based on morphological filtering of point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm using high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering of the point cloud projection that can obtain building facade features in image space. Third, this study designs an inverse transformation of the point cloud projection to convert building facade features from image space to 3D space. A building facade feature extraction algorithm with restricted facade plane detection is implemented to reconstruct façade pieces for street view services. The results of building facade extraction experiments with large volumes of point cloud data from the MLS show that the proposed approach is suitable for various types of building facade extraction. The geometric accuracy of building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is the same level as the spatial resolution (0.5 m) of the point cloud.
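
    A toy version of the projection and morphological-filtering stages, assuming the points are already georeferenced using the POS orientation parameters, can be sketched in a few lines of Python; the cell size and structuring-element size are illustrative.

        import numpy as np
        from scipy import ndimage

        def project_to_raster(points, cell=0.5):
            """Rasterise MLS points (n, 3) into a 2-D occupancy image (plan view)."""
            xy = points[:, :2]
            origin = xy.min(axis=0)
            ij = np.floor((xy - origin) / cell).astype(int)
            img = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
            img[ij[:, 0], ij[:, 1]] = True
            return img, origin

        def clean_facade_mask(img, size=3):
            """Morphological opening then closing to remove speckle and fill gaps."""
            kernel = np.ones((size, size), dtype=bool)
            opened = ndimage.binary_opening(img, structure=kernel)
            return ndimage.binary_closing(opened, structure=kernel)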

  2. Criticism of technology in a state of antagonism. Against simplification, prejudice and ideologies. Technikkritik im Widerstreit. Gegen Vereinfachungen, Vorurteile und Ideologien

    Energy Technology Data Exchange (ETDEWEB)

    Detzer, K A

    1987-01-01

    The book is a compilation of public lectures, review articles, and statements of opinion from public debates, all referring to topical socio-political problems connected with technology and industry. It is intended to reveal structural interdependencies in order to counter the frequently observed simplifications, prejudices and ideologies, and to point out sound arguments that can be used in a fair discussion, based on pluralistic principles, about the decisions to be taken. Technology and its impacts on industry, politics, education and ethics. (HSCH).

  3. Birth/birth-death processes and their computable transition probabilities with biological applications.

    Science.gov (United States)

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
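
    For orientation, the sketch below shows the matrix-exponentiation baseline that the continued-fraction method is designed to avoid, applied to a univariate birth-death process on a truncated state space; the linear rates and truncation level are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm

        def bd_transition_probabilities(n_max, birth, death, t):
            """P(t) for a birth-death chain truncated at n_max states.

            birth(k), death(k): rates out of state k. Exact on the truncated
            space but O(n_max^3), which is what makes it costly for large or
            bivariate systems."""
            q = np.zeros((n_max + 1, n_max + 1))
            for k in range(n_max + 1):
                if k < n_max:
                    q[k, k + 1] = birth(k)
                if k > 0:
                    q[k, k - 1] = death(k)
                q[k, k] = -q[k].sum()          # rows of a generator sum to zero
            return expm(q * t)

        # Linear birth-death process: rates 0.5*k up and 0.3*k down.
        p = bd_transition_probabilities(100, lambda k: 0.5 * k, lambda k: 0.3 * k, t=1.0)
        print(p[5, :8])   # distribution after t = 1 starting from 5 individuals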

  4. Synroc processing options

    International Nuclear Information System (INIS)

    Rozsa, R.B.; Hoenig, C.L.

    1981-01-01

    Synroc is a titanate-based ceramic material currently being developed for immobilizing high-level nuclear reactor wastes in solid form. Synroc D is a unique variation of Synroc. It can contain the high-level defense wastes, particularly those in storage at the Savannah River Plant. In this report, we review the early development of the initial Synroc process, discuss modifications and other options that simplify it overall, and recommend the future direction of research and development in the processing area. A reference Synroc process is described briefly and contrasted with the Savannah River Laboratory glass-based reference case. Preliminary engineering layouts show Synroc to be a more complex processing operation and, thus, more expensive than the glass-based process. However, we believe that simplifications, which will significantly reduce the cost difference, are possible. Further research and development will continue in the areas of slurry processing, fluidized bed calcination, and mineralization. The last will use sintering, hot uniaxial pressing, or hot isostatic pressing

  5. Modeling multiphase materials processes

    CERN Document Server

    Iguchi, Manabu

    2010-01-01

    ""Modeling Multiphase Materials Processes: Gas-Liquid Systems"" describes the methodology and application of physical and mathematical modeling to multi-phase flow phenomena in materials processing. The book focuses on systems involving gas-liquid interaction, the most prevalent in current metallurgical processes. The performance characteristics of these processes are largely dependent on transport phenomena. This volume covers the inherent characteristics that complicate the modeling of transport phenomena in such systems, including complex multiphase structure, intense turbulence, opacity of

  6. Numerical model describing the heat transfer between combustion products and ventilation-system duct walls

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    A package of physical models simulating the heat transfer processes occurring between combustion gases and ducts in ventilation systems is described. The purpose of the numerical model is to predict how the combustion gas in a system heats up or cools down as it flows through the ducts in a ventilation system under fire conditions. The model treats a duct with (forced convection) combustion gases flowing on the inside and stagnant ambient air on the outside. The model is composed of five submodels of heat transfer processes along with a numerical solution procedure to evaluate them. Each of these quantities is evaluated independently using standard correlations based on experimental data. The details of the physical assumptions, simplifications, and ranges of applicability of the correlations are described. A typical application of this model to a full-scale fire test is discussed, and model predictions are compared with selected experimental data
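
    One of the sub-models, the forced-convection heat exchange between the gas and the duct wall, reduces to a one-dimensional energy balance along the duct. A minimal sketch under the simplifying assumption of a fixed wall temperature (the full model also evolves the wall and the outside film) is given below; all numbers are illustrative.

        def duct_gas_temperature(t_in, t_wall, length, h=15.0, perimeter=2.0,
                                 m_dot=1.2, cp=1100.0, n=200):
            """March dT/dx = -h*P*(T - T_wall)/(m_dot*cp) down the duct.

            h: film coefficient (W/m2 K), perimeter: duct perimeter (m),
            m_dot: gas mass flow (kg/s), cp: gas heat capacity (J/kg K)."""
            dx = length / n
            t = t_in
            for _ in range(n):
                t -= h * perimeter * (t - t_wall) / (m_dot * cp) * dx
            return t

        # Hot combustion gas entering a 30 m duct with cool walls.
        print(duct_gas_temperature(t_in=600.0, t_wall=300.0, length=30.0))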

  7. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    Directory of Open Access Journals (Sweden)

    Feng-Que Pei

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictability. This paper proposes a method employing a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process by handling a high number of input features with a low sampling data set; the method is called SVMs-DS. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, the basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by the Dempster rules; the decision threshold to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.

  8. Horizontal bioreactor for ethanol production by immobilized cells. Pt. 3. Reactor modeling and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Woehrer, W

    1989-04-05

    A mathematical model which describes ethanol formation in a horizontal tank reactor containing Saccharomyces cerevisiae immobilized in small beads of calcium alginate has been developed. The design equations combine the flow dynamics of the reactor with the product formation kinetics. The model was verified for 11 continuous experiments, in which dilution rate, feed glucose concentration and bead volume fraction were varied. The model predicts effluent ethanol concentration and CO/sub 2/ production rate within the experimental error. A simplification of the model is possible when the feed glucose concentration does not exceed 150 kg/m/sup 3/. The simplification results in an analytical solution of the design equation and hence can easily be applied for design purposes as well as for optimization studies.

  9. Semantic Business Process Modeling

    OpenAIRE

    Markovic, Ivan

    2010-01-01

    This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. Quality of the proposed modeling framework is evaluated based on the modeling content of SAP Solution Composer and several real-world business scenarios.

  10. Maraviroc/raltegravir simplification strategy following 6 months of quadruple therapy with tenofovir/emtricitabine/maraviroc/raltegravir in treatment-naive HIV patients.

    Science.gov (United States)

    Pradat, Pierre; Durant, Jacques; Brochier, Corinne; Trabaud, Mary-Anne; Cottalorda-Dufayard, Jacqueline; Izopet, Jacques; Raffi, François; Lucht, Frédéric; Gagnieu, Marie-Claude; Gatey, Caroline; Jacomet, Christine; Vassallo, Matteo; Dellamonica, Pierre; Cotte, Laurent

    2016-11-01

    We assessed the virological efficacy of a 6 month maraviroc/raltegravir simplification strategy following 6 months of quadruple therapy combining tenofovir disoproxil fumarate/emtricitabine with maraviroc/raltegravir. HIV-1-infected naive patients were enrolled in an open label, single-arm, Phase 2 trial. All patients received maraviroc 300 mg twice daily, raltegravir 400 mg twice daily and tenofovir/emtricitabine for 24 weeks. Patients with stable HIV-RNA below 50 copies/mL then continued on maraviroc/raltegravir alone. Median baseline HIV-RNA was 4.3 log copies/mL. All patients had CCR5-tropic viruses by genotropism and phenotropism assays. All but one patient had an HIV-RNA < 50 copies/mL at W24 and entered the simplification phase. Virological success was maintained at W48 in 88% (90% CI 79%-97%) of patients. The N155H mutation was detected at failure in one patient. No tropism switch was observed. Raltegravir and maraviroc plasma exposures were satisfactory in 92% and 79%, respectively, of 41 samples from 21 patients. Five severe adverse events (SAEs) were observed up to W48; none was related to the study drugs. Four patients presented grade 3 AEs; none was related to the study. No grade 4 AE was observed. No patient died. Maraviroc/raltegravir maintenance therapy following a 6 month induction phase with maraviroc/raltegravir/tenofovir/emtricitabine was well tolerated and maintained virological efficacy in these carefully selected patients.

  11. Characterization of the silicon/hydrofluoric acid interface: electrochemical processes under weak potential disturbance

    International Nuclear Information System (INIS)

    Bertagna, Valerie

    1996-01-01

    In the context of increasing integrated-circuit density, simplification of cleaning processes, and improved control of surface reactions (for better control of defect elimination and contamination risks), this research thesis first gives a broad overview of previous work in the fields of silicon electrochemistry in hydrofluoric media, the chemical condition of silicon after treatment with diluted hydrofluoric acid, metallic contamination of silicon during cleaning with diluted hydrofluoric acid, and theoretical models of interpretation. The author then reports the development of a new electrochemical cell and a detailed study of mono-crystalline silicon in a diluted hydrofluoric medium (electrochemical investigation, modelling of charge transfer at the interface, studies by atomic force microscopy, contamination of silicon by copper).

  12. Physical models for high burnup fuel

    International Nuclear Information System (INIS)

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper some models of processes in high-burnup fuel developed at the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of the fuel heat conductivity, radial profiles of the burnup and the plutonium accumulation, restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high-burnup fuel can be described rather accurately by simplified models within a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis to serve as a test for the correct choice of allowable simplifications. This approach was applied at the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in high reliability of predictions when simulating high-burnup fuel.

  13. String model of black hole microstates

    International Nuclear Information System (INIS)

    Larsen, F.

    1997-01-01

    The statistical mechanics of black holes arbitrarily far from extremality is modeled by a gas of weakly interacting strings. As an effective low-energy description of black holes the string model provides several highly nontrivial consistency checks and predictions. Speculations on a fundamental origin of the model suggest surprising simplifications in nonperturbative string theory, even in the absence of supersymmetry. copyright 1997 The American Physical Society

  14. Business Process Modeling: Perceived Benefits

    Science.gov (United States)

    Indulska, Marta; Green, Peter; Recker, Jan; Rosemann, Michael

    The process-centered design of organizations and information systems is globally seen as an appropriate response to the increased economic pressure on organizations. At the methodological core of process-centered management is process modeling. However, business process modeling in large initiatives can be a time-consuming and costly exercise, making it potentially difficult to convince executive management of its benefits. To date, and despite substantial interest and research in the area of process modeling, the understanding of the actual benefits of process modeling in academia and practice is limited. To address this gap, this paper explores the perception of benefits derived from process modeling initiatives, as reported through a global Delphi study. The study incorporates the views of three groups of stakeholders - academics, practitioners and vendors. Our findings lead to the first identification and ranking of 19 unique benefits associated with process modeling. The study in particular found that process modeling benefits vary significantly between practitioners and academics. We argue that the variations may point to a disconnect between research projects and practical demands.

  15. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute the relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn the structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images with respect to the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run-time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm with a trained convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting to see which areas of the images are simplified and which parts are taken as relevant.
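
    A minimal sketch of the idea, under assumptions not taken from the paper: a (1+1)-style evolutionary loop that greedily blanks out pixel blocks as long as a trained classifier's prediction for the image stays (approximately) unchanged. The classifier interface, block size, and acceptance test are illustrative only.

      import numpy as np

      def simplify_image(image, predict, tol=1e-3, block=4, steps=5000, seed=0):
          """Evolve an image in which irrelevant pixel blocks are blanked out.

          image   : 2-D numpy array holding the lung-nodule image
          predict : callable mapping an image to class probabilities; stands in
                    for the trained CNN (interface assumed, not from the paper)
          """
          rng = np.random.default_rng(seed)
          base = predict(image)                     # reference prediction
          current = image.copy()
          fill = image.mean()                       # neutral replacement value
          for _ in range(steps):
              cand = current.copy()
              # mutation: blank one randomly chosen block
              i = rng.integers(0, image.shape[0] - block)
              j = rng.integers(0, image.shape[1] - block)
              cand[i:i + block, j:j + block] = fill
              # selection: keep the mutant only if the CNN output is unchanged
              if np.max(np.abs(predict(cand) - base)) < tol:
                  current = cand
          return current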

  16. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of the SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises: how should the laser source be described appropriately? Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, a semi-analytical thermal modelling approach is first described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the model. The present work provides guidelines on the appropriate representation of the laser source in the thermal modelling of the SLM process.
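
    For context, the simplest of the three idealizations, a moving point source on a semi-infinite solid, admits the classical quasi-steady Rosenthal solution. This is standard heat-conduction theory, not a result taken from the paper:

      T(\xi, y, z) = T_0 + \frac{Q}{2 \pi k R} \exp\!\left( -\frac{v\,(R + \xi)}{2 \alpha} \right),
      \qquad R = \sqrt{\xi^2 + y^2 + z^2}, \qquad \xi = x - v t,

    where Q is the absorbed laser power, k the thermal conductivity, alpha the thermal diffusivity, and v the scan speed; surface and volumetric sources distribute Q over a finite region instead of concentrating it at R = 0.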

  17. WWTP Process Tank Modelling

    DEFF Research Database (Denmark)

    Laursen, Jesper

    The present thesis considers numerical modeling of activated sludge tanks on municipal wastewater treatment plants. The focus is on integrated modeling, where the detailed microbiological model, the Activated Sludge Model 3 (ASM3), is combined with a detailed hydrodynamic model based on a numerical...... solution of the Navier-Stokes equations in a multiphase scheme. After a general introduction to the activated sludge tank as a system, the activated sludge tank model is gradually set up in separate stages. The individual sub-processes that often occur in activated sludge tanks are initially...... hydrofoil-shaped propellers. These two sub-processes deliver the main part of the supplied energy to the activated sludge tank, and for this reason they are important for the mixing conditions in the tank. For other important processes occurring in the activated sludge tank, existing models and measurements...

  18. A model to predict element redistribution in unsaturated soil: Its simplification and validation

    International Nuclear Information System (INIS)

    Sheppard, M.I.; Stephens, M.E.; Davis, P.A.; Wojciechowski, L.

    1991-01-01

    A research model has been developed to predict the long-term fate of contaminants entering unsaturated soil at the surface, through irrigation or atmospheric deposition, and/or at the water table through groundwater. The model, called SCEMR1 (Soil Chemical Exchange and Migration of Radionuclides, Version 1), uses Darcy's law to model water movement and the soil solid/liquid partition coefficient, Kd, to model chemical exchange. SCEMR1 has been validated extensively against controlled field experiments with several soils and aeration statuses, and including the effects of plants. These validation results show that the model is robust and performs well. Sensitivity analyses identified soil Kd, annual effective precipitation, soil type and soil depth as the four most important model parameters. SCEMR1 consumes too much computer time for incorporation into a probabilistic assessment code. Therefore, we have used SCEMR1 output to derive a simple assessment model. The assessment model reflects the complexity of its parent code and provides a more realistic description of contaminant transport in soils than would a compartment model. Comparison of the performance of the SCEMR1 research model, the simple SCEMR1 assessment model and the TERRA compartment model on a four-year soil-core experiment shows that the SCEMR1 assessment model generally provides conservative soil concentrations. (15 refs., 3 figs.)
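
    A minimal sketch of the two building blocks named above (Darcy flow plus Kd-based chemical exchange), reduced to the standard retardation-factor form; the numbers are illustrative and are not SCEMR1 parameters:

      def contaminant_velocity(q, theta, rho_b, kd):
          """Velocity of a sorbing contaminant front in unsaturated soil.

          q     : Darcy flux / effective precipitation (m/yr)
          theta : volumetric water content (-)
          rho_b : dry bulk density (kg/L)
          kd    : solid/liquid partition coefficient Kd (L/kg)
          """
          v_water = q / theta                      # pore-water velocity from the Darcy flux
          retardation = 1.0 + rho_b * kd / theta   # slowdown due to solid/liquid exchange
          return v_water / retardation

      # e.g. 0.3 m/yr effective precipitation, loam, Kd = 10 L/kg -> ~0.02 m/yr
      print(contaminant_velocity(q=0.3, theta=0.25, rho_b=1.5, kd=10.0))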

  19. Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes.

    Science.gov (United States)

    Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Oscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín

    2008-07-15

    Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. No previous experience with the use of this notation for process modelling within Pathology, in Spain or in other countries, is known. We present our experience in elaborating conceptual models of Pathology processes, as part of a global programmed surgical patient process, using BPMN. With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Department of Technologies and Information Systems of the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The collaborative work was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process description by hospital experts, and process modelling. The modelling of Anatomic Pathology processes is presented using BPMN. The subprocesses presented are those corresponding to the surgical pathology examination of samples coming from the operating theatre, including the planning and realization of frozen studies. The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, in which management and improvements are more easily implemented by health professionals.

  20. Modeling grinding processes as micro processes

    African Journals Online (AJOL)

    eobe

    Industrial precision grinding processes are cylindrical, centerless and ... Several models have been proposed and used to study grinding ..... grinding forces for the two cases were 9.07237 N/mm ..... International Journal of Machine Tools &.

  1. Integrative device and process of oxidization, degassing, acidity adjustment of 1BP from APOR process

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Chen; Zheng, Weifang, E-mail: wfazh@ciae.ac.cn; Yan, Taihong; He, Hui; Li, Gaoliang; Chang, Shangwen; Li, Chuanbo; Yuan, Zhongwei

    2016-02-15

    Graphical abstract: Previous (left) and present (right) device for oxidation, degassing and acidity adjustment of 1BP. - Highlights: • We designed an integrative device and process. • The utilization efficiency of N2O4 is increased significantly. • Our work results in a considerable simplification of the device. • Process parameters are determined by experiments. - Abstract: The device and process for oxidization, degassing and acidity adjustment of 1BP (the Pu production feed from the U/Pu separation section) of the APOR process (Advanced Purex Process based on Organic Reductants) were improved through rational design and experiments. The device was simplified, and the process parameters, such as feed position and flow ratio, were determined by experiments. Based on this new device and process, the reductants N,N-dimethylhydroxylamine (DMHAN) and methylhydrazine (MMH) in the 1BP solution could be oxidized with much less N2O4 consumption.

  2. Theoretical studies on the free-blowing process of the vent pipes in the pressure suppression system

    International Nuclear Information System (INIS)

    Aust, E.

    1979-01-01

    This report deals with the free-blowing period of the vent pipes in a pressure suppression system, focusing on the analytical simulation of this period. To this end, the equations currently laid down in the computer codes to describe the free-blowing process are presented and their essential model simplifications are worked out. Subsequently, the fundamental mechanism of the free-blowing process is studied for the simple system of a water-filled U-tube, and the extent to which the results can be transferred to the pressure suppression system is discussed. On this basis, calculations with the individual free-blowing models are performed using plant-related parameters as input data, in order to verify, in a benchmark comparison, the information and the calculation results of the individual model concepts relative to one another. Finally, the validity of the individual model formulations and the quality of their predictions are checked against experimental results from the pressure suppression experiments on the PSS test stand. (orig.) [de]

  3. Processing facility for metal waste

    International Nuclear Information System (INIS)

    Awano, Toshihiko; Kataoka, Yoshitsune.

    1998-01-01

    The steps of temporarily storing the materials to be volume-reduced in a storage vessel, transferring them by conveyor to a weighing machine, weighing them, drying them by a drying means, packing them into containing canisters, sealing and welding the canisters, and carrying out the sealed canisters are conducted independently or, optionally, simultaneously in parallel. Accordingly, isolation from the surrounding environment is ensured, and improved working efficiency, assured safety, and a simplified structure of the processing devices can be attained. (T.M.)

  4. Characterizing and modeling the pressure- and rate-dependent elastic-plastic-damage behaviors of polypropylene-based polymers

    KAUST Repository

    Pulungan, Ditho Ardiansyah; Yudhanto, Arief; Goutham, Shiva; Lubineau, Gilles; Yaldiz, Recep; Schijve, Warden

    2018-01-01

    Polymers in general exhibit pressure- and rate-dependent behavior. Modeling such behavior requires extensive, costly and time-consuming experimental work. Common simplifications may lead to severe inaccuracy when using the model for predicting

  5. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.

  6. Investigating the Process of Process Modeling with Eye Movement Analysis

    OpenAIRE

    Pinggera, Jakob; Furtner, Marco; Martini, Markus; Sachse, Pierre; Reiter, Katharina; Zugal, Stefan; Weber, Barbara

    2015-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models by analyzing the modeler's interactions with the modeling environment. In this paper we aim to complement previous insights on the modeler's modeling behavior with data gathered by tracking the modeler's eye movements when engaged in the act of modeling. We present preliminary results and outline directions for future research to triangulate toward a more comprehensive un...

  7. What makes process models understandable?

    NARCIS (Netherlands)

    Mendling, J.; Reijers, H.A.; Cardoso, J.; Alonso, G.; Dadam, P.; Rosemann, M.

    2007-01-01

    Although formal and informal quality aspects are of significant importance to business process modeling, little empirical work has been reported on process model quality and its impact factors. In this paper we investigate understandability as a proxy for the quality of process models and focus

  8. Modeling the Object-Oriented Software Process: OPEN and the Unified Process

    NARCIS (Netherlands)

    van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.

    A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is

  9. Modeling the Object-Oriented Software Process: OPEN and the Unified Process

    OpenAIRE

    van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.

    1999-01-01

    A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is reviewed.

  10. Microscopic modelling of doped manganites

    International Nuclear Information System (INIS)

    Weisse, Alexander; Fehske, Holger

    2004-01-01

    Colossal magneto-resistance manganites are characterized by a complex interplay of charge, spin, orbital and lattice degrees of freedom. Formulating microscopic models for these compounds aims at meeting two conflicting objectives: sufficient simplification without excessive restrictions on the phase space. We give a detailed introduction to the electronic structure of manganites and derive a microscopic model for their low-energy physics. Focusing on short-range electron-lattice and spin-orbital correlations we supplement the modelling with numerical simulations

  11. Novel process windows for enabling, accelerating, and uplifting flow chemistry.

    Science.gov (United States)

    Hessel, Volker; Kralisch, Dana; Kockmann, Norbert; Noël, Timothy; Wang, Qi

    2013-05-01

    Novel Process Windows make use of process conditions that are far from conventional practices. This involves the use of high temperatures, high pressures, high concentrations (solvent-free), new chemical transformations, explosive conditions, and process simplification and integration to boost synthetic chemistry on both the laboratory and production scale. Such harsh reaction conditions can be safely reached in microstructured reactors due to their excellent transport intensification properties. This Review discusses the different routes towards Novel Process Windows and provides several examples for each route grouped into different classes of chemical and process-design intensification. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Adding value to the decision-making process of mega projects : Fostering strategic ambiguity, redundancy, and resilience

    NARCIS (Netherlands)

    Giezen, Mendel; Salet, Willem; Bertolini, Luca

    2015-01-01

    Current practice in decision-making about mega projects seems to be aimed at reducing complexity by simplification. However, this is often detrimental to the resilience and added value of these projects. This article uses the concept of strategic capacity for analyzing the decision-making process on

  13. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    are affected (in a positive or negative way) by the presence of the other enzymes and compounds in the media. In this thesis the term multi-enzyme in-pot is adopted for processes that are carried out by a combination of enzymes in a single reactor and implemented at pilot or industrial scale...... features of the process and provides the information required to structure the process model by using a step-by-step procedure with the required tools and methods. In this way, this framework increases the efficiency of the model development process with respect to the time and resources needed (fast and effective...... In this way the model parameters that drive the main dynamic behavior can be identified, and thus a better understanding of this type of process gained. In order to develop, test and verify the methodology, three case studies were selected, specifically the bi-enzyme process for the production of lactobionic acid...

  14. Process model repositories and PNML

    NARCIS (Netherlands)

    Hee, van K.M.; Post, R.D.J.; Somers, L.J.A.M.; Werf, van der J.M.E.M.; Kindler, E.

    2004-01-01

    Bringing system and process models together in repositories facilitates the interchange of model information between modelling tools, and allows the combination and interlinking of complementary models. Petriweb is a web application for managing such repositories. It supports hierarchical process

  15. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five-stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  16. Simplification of neural network model for predicting local power distributions of BWR fuel bundle using learning algorithm with forgetting

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinfuku, Kimihiro; Nakamae, Takuji; Nishide, Fusayo.

    1995-01-01

    Previously, a two-layered neural network model was developed to predict the relation between the fissile enrichment of each fuel rod and the local power distribution in a BWR fuel bundle. This model was obtained intuitively, based on 33 patterns of training signals, after an intensive survey of candidate models. Recently, a learning algorithm with forgetting was reported to simplify neural network models. It is an interesting question what kind of model is obtained when this algorithm is applied to a more complex three-layered model that learns the same training signals. A three-layered model, expanded to have direct connections between the 1st- and 3rd-layer elements, was constructed, and the normal back propagation learning method was first applied to it. The forgetting algorithm was then added to this learning process. The connections involving the 2nd-layer elements disappeared, and the 2nd layer became unnecessary. Learning the same training signals took an order of magnitude more computing time than simple back propagation, but the two-layered model was obtained autonomously from the expanded three-layered model. (author)
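
    A minimal sketch of the kind of learning rule involved, using the common "structural learning with forgetting" formulation (an assumption here, not a detail taken from the paper): ordinary back propagation plus a constant decay that pulls weights toward zero, so connections the error gradient does not actively sustain fade away.

      import numpy as np

      def update_weights(w, grad, lr=0.1, eps=1e-4):
          """One step of back propagation with forgetting.

          w    : weight array
          grad : dE/dw from ordinary back propagation
          The eps * sign(w) term steadily erodes weights that the error
          gradient does not sustain, so redundant connections (here, an
          entire superfluous hidden layer) decay away autonomously.
          """
          return w - lr * grad - eps * np.sign(w)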

  17. Simulating Mercury And Methyl Mercury Stream Concentrations At Multiple Scales in a Wetland Influenced Coastal Plain Watershed (McTier Creek, SC, USA)

    Science.gov (United States)

    Use of mechanistic models to improve understanding: differential, mass-balance, process-based; spatial and temporal resolution; necessary simplifications of system complexity; combining field monitoring and modeling efforts; balance between capturing complexity and maintaining...

  18. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification.

    Science.gov (United States)

    Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop comprised five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms, using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and
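
    As an illustration only (the endpoint URL, file name and headers below are hypothetical, not CTD's actual interface), the Web-service pattern described above amounts to POSTing a BioC-encoded document to a remote NER service over HTTP and reading back the annotated BioC XML:

      import requests

      # Hypothetical REST endpoint for a BioC-compliant NER service
      URL = "https://example.org/ner/annotate"   # placeholder, not a real CTD URL

      with open("document.bioc.xml", "rb") as f:   # article encoded in BioC XML
          resp = requests.post(
              URL,
              data=f.read(),
              headers={"Content-Type": "application/xml"},
              timeout=60,
          )
      resp.raise_for_status()
      annotated = resp.text   # BioC XML with entity annotations added by the service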

  19. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    Science.gov (United States)

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding of, systematically appraise, and identify areas of improvement of a business process. The Unified Modelling Language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  20. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about...... of the temperature on the mechanical resonance frequency is considered and thereby integrated in the final model for long term operations....

  1. Emergency planning simplification: Why ALWR designs shall support this goal

    International Nuclear Information System (INIS)

    Tripputi, I.

    2004-01-01

    Emergency plan simplification can be achieved only if it can be proved, in a context of balanced national health protection policies, that there is reduced or no technical need for some of its elements and that public protection is assured in all considered situations regardless of protective actions outside the plant. These objectives may be technically supported if one or more of the following conditions are met: 1. Accidents potentially releasing large amounts of fission products can be ruled out by characteristics of the designs. 2. Plant engineered features (and the containment system in particular) are able to drastically mitigate the radioactive releases under all conceivable scenarios. 3. A realistic approach to the consequence evaluation can reduce the expected consequences to effects below any concern. Unfortunately, no single approach is either technically feasible or justified in a defence-in-depth perspective, and only a mix of them may provide the necessary conditions. It appears that most or all proposed ALWR designs address the technical issues whose solutions form the basis for eliminating the need for a number of protective actions (evacuation, relocation, sheltering, iodine tablet administration, etc.) even in the case of a severe accident. Some designs are mainly oriented towards preventing the need for short-term protective actions; they credit simplified emergency plans or the capabilities of existing civil protection organizations for public relocation in the long term, if needed. Others also take into account the overall releases, to exclude or minimize public relocation and land contamination. Design targets for population individual doses and for land contamination proposed in Italy are discussed in the paper. It is also shown that these limits, while challenging, appear to be within the reach of the next-generation designs currently studied in Italy. (author)

  2. Developing engineering processes through integrated modelling of product and process

    DEFF Research Database (Denmark)

    Nielsen, Jeppe Bjerrum; Hvam, Lars

    2012-01-01

    This article aims at developing an operational tool for integrated modelling of product assortments and engineering processes in companies making customer-specific products. Integrating a product model in the design of engineering processes will provide a deeper understanding of the engineering...... activities as well as insight into how product features affect the engineering processes. The article suggests possible ways of integrating models of products with models of engineering processes. The models have been tested and further developed in an action research study carried out in collaboration...... with a major international engineering company.

  3. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on a proper understanding of the functionality of information systems that are to support the activity of the organization. A number of business process modeling notations have been popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all stakeholders. After the introduction, the methodology of the research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  4. Automatic extraction of process categories from process model collections

    NARCIS (Netherlands)

    Malinova, M.; Dijkman, R.M.; Mendling, J.; Lohmann, N.; Song, M.; Wohed, P.

    2014-01-01

    Many organizations build up their business process management activities in an incremental way. As a result, there is no overarching structure defined at the beginning. However, as business process modeling initiatives often yield hundreds to thousands of process models, there is a growing need for

  5. Biosphere Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.

  6. Biosphere Process Model Report

    International Nuclear Information System (INIS)

    Schmitt, J.

    2000-01-01

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.

  7. Conceptual models of information processing

    Science.gov (United States)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  8. Simplifications of Einstein supergravity

    International Nuclear Information System (INIS)

    Ferrara, S.; van Nieuwenhuizen, P.

    1979-01-01

    Using a new symmetry of the Einstein supergravity action and defining a new spin connection, the axial-vector auxiliary field cancels in the gauge action and in the gauge algebra. This explains why in some models a first-order formalism with minimal coupling of the spin connection and the tensor calculus agree, while in other models only the tensor calculus gives the correct result and torsion does not.

  9. Effects of simplifying fracture network representation on inert chemical migration in fracture-controlled aquifers

    Science.gov (United States)

    Wellman, Tristan; Shapiro, Allen M.; Hill, Mary C.

    2009-01-01

    While it is widely recognized that highly permeable 'large-scale' fractures dominate chemical migration in many fractured aquifers, recent studies suggest that the pervasive 'small-scale' fracturing once considered of less significance can be equally important for characterizing the spatial extent and residence time associated with transport processes. A detailed examination of chemical migration through fracture-controlled aquifers is used to advance this conceptual understanding. The influence of fracture structure is evaluated by quantifying the effects to transport caused by a systematic removal of fractures from three-dimensional discrete fracture models whose attributes are derived from geologic and hydrologic conditions at multiple field sites. Results indicate that the effects to transport caused by network simplification are sensitive to the fracture network characteristics, degree of network simplification, and plume travel distance, but primarily in an indirect sense since correlation to individual attributes is limited. Transport processes can be 'enhanced' or 'restricted' from network simplification meaning that the elimination of fractures may increase or decrease mass migration, mean travel time, dispersion, and tailing of the concentration plume. The results demonstrate why, for instance, chemical migration may not follow the classic advection-dispersion equation where dispersion approximates the effect of the ignored geologic structure as a strictly additive process to the mean flow. The analyses further reveal that the prediction error caused by fracture network simplification is reduced by at least 50% using the median estimate from an ensemble of simplified fracture network models, and that the error from network simplification is at least 70% less than the stochastic variability from multiple realizations. Copyright 2009 by the American Geophysical Union.

  10. Improved model management with aggregated business process models

    NARCIS (Netherlands)

    Reijers, H.A.; Mans, R.S.; Toorn, van der R.A.

    2009-01-01

    Contemporary organizations invest much efforts in creating models of their business processes. This raises the issue of how to deal with large sets of process models that become available over time. This paper proposes an extension of Event-driven Process Chains, called the aggregate EPC (aEPC),

  11. Computerized models : tools for assessing the future of complex systems?

    NARCIS (Netherlands)

    Ittersum, van M.K.; Sterk, B.

    2015-01-01

    Models are commonly used to make decisions. At some point all of us will have employed a mental model, that is, a simplification of reality, in an everyday situation. For instance, when we want to make the best decision for the environment and consider whether to buy our vegetables in a large

  12. Business process modeling in healthcare.

    Science.gov (United States)

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  13. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  14. Integrating textual and model-based process descriptions for comprehensive process search

    NARCIS (Netherlands)

    Leopold, Henrik; van der Aa, Han; Pittke, Fabian; Raffel, Manuel; Mendling, Jan; Reijers, Hajo A.

    2016-01-01

    Documenting business processes using process models is common practice in many organizations. However, not all process information is best captured in process models. Hence, many organizations complement these models with textual descriptions that specify additional details. The problem with this

  15. Process and system - A dual definition, revisited with consequences in metrology

    Science.gov (United States)

    Ruhm, K. H.

    2010-07-01

    Let us assert that life in metrology could be easier, scientifically as well as technologically, if we intentionally made an explicit distinction between two outstanding domains: the given, really existing domain of processes, and the merely virtually existing domain of systems, the latter of which is designed and used by the human mind. The abstract domain of models, by which we map the manifold reality of processes, is itself part of the domain of systems. Models support comprehension and communication, although they are normally extreme simplifications of the properties and behaviour of a concrete reality. Thus, systems and signals represent processes and quantities, which are described by means of signal and system theory as well as by stochastics and statistics. The following presentation of this new, demanding and somewhat irritating definition of the terms process and system as a dual pair is unusual indeed, but it opens the door widely to a better and more consistent discussion and understanding of manifold scientific tools in many areas. Metrology [4] is one of the important fields of concern, for many reasons: one group of the soft and hard links between the domain of processes and the domain of systems is realised by the concepts of measurement science on the one hand and by the instrumental tools of measurement technology on the other.

  16. Karst Aquifer Recharge: A Case History of over Simplification from the Uley South Basin, South Australia

    Directory of Open Access Journals (Sweden)

    Nara Somaratne

    2015-02-01

    The article “Karst aquifer recharge: Comments on ‘Characteristics of Point Recharge in Karst Aquifers’, by Adrian D. Werner, 2014, Water 6, doi:10.3390/w6123727” misrepresents some parts of Somaratne [1]. The description of the Uley South Quaternary Limestone (QL) as unconsolidated or poorly consolidated aeolianite sediments with the presence of well-mixed groundwater in Uley South [2] appears unsubstantiated. Examination of 98 lithological descriptions with corresponding drillers' logs shows only two wells containing bands of unconsolidated sediments. In the Uley South basin, about 70% of the salinity profiles obtained by electrical conductivity (EC) logging of monitoring wells show stratification. The central and north-central areas of the basin receive leakage from the Tertiary Sand (TS) aquifer, thereby influencing QL groundwater characteristics such as chemistry, age and isotope composition. The presence of conduit pathways is evident in salinity profiles taken away from TS-water-affected areas. Aquifer parameters derived from pumping tests show strong heterogeneity, a typical characteristic of karst aquifers. Uley South QL aquifer recharge is derived from three sources: diffuse recharge, point recharge from sinkholes, and continuous leakage of TS water. This limits the application of recharge estimation methods such as the conventional chloride mass balance (CMB), as the basic premise of the CMB is violated. The conventional CMB is not suitable for accounting for the chloride mass balance in groundwater systems displaying an extreme range of chloride concentrations and complex mixing [3]. Oversimplification of karst aquifer systems to suit the application of the conventional CMB or 1-D unsaturated modelling, as described in Werner [2], is not a suitable use of these recharge estimation methods.
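
    For context, the "basic premise" referred to is the steady-state, single-source balance underlying the conventional CMB (standard hydrology, not specific to either article): all chloride in recharge is assumed to come from precipitation, so

      R = \frac{P \cdot \mathrm{Cl}_{P}}{\mathrm{Cl}_{gw}},

    where R is recharge, P precipitation, Cl_P the chloride concentration in (effective) precipitation, and Cl_gw that in groundwater. Point recharge through sinkholes and inter-aquifer leakage introduce additional chloride and water fluxes that this single-source balance cannot represent.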

  17. Process modeling for Humanities: tracing and analyzing scientific processes

    OpenAIRE

    Hug , Charlotte; Salinesi , Camille; Deneckere , Rebecca; Lamasse , Stéphane

    2011-01-01

    This paper concerns epistemology and the understanding of research processes in Humanities, such as Archaeology. We believe that to properly understand research processes, it is essential to trace them. The collected traces depend on the process model established, which has to be as accurate as possible to exhaustively record the traces. In this paper, we briefly explain why the existing process models for Humanities are not sufficient to represent traces. We then pres...

  18. Modeling nuclear processes by Simulink

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my [Faculty of Engineering, International Islamic University Malaysia, Jalan Gombak, Selangor (Malaysia)

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic system behaviours. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve these equations using analytical or numerical methods consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes, including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the effect of delayed neutrons, reactor point kinetics equations with delayed-neutron groups, and the effect of temperature feedback are used as examples.

  19. Modeling nuclear processes by Simulink

    International Nuclear Information System (INIS)

    Rashid, Nahrul Khair Alang Md

    2015-01-01

    Modelling and simulation are essential parts of the study of dynamic system behaviours. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve these equations using analytical or numerical methods consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes, including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the effect of delayed neutrons, reactor point kinetics equations with delayed-neutron groups, and the effect of temperature feedback are used as examples.
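
    A minimal sketch of one of the describing equations mentioned above, reactor point kinetics with a single delayed-neutron group, integrated with a plain explicit Euler loop in Python rather than Simulink blocks; the parameter values are illustrative assumptions, not taken from the paper:

      # Point kinetics with one delayed-neutron group:
      #   dn/dt = ((rho - beta)/Lambda) * n + lam * C
      #   dC/dt = (beta/Lambda) * n - lam * C
      beta, Lambda, lam = 0.0065, 1.0e-4, 0.08    # illustrative kinetics parameters
      rho = 0.001                                 # step reactivity insertion

      n, C = 1.0, beta / (Lambda * lam)           # start from the equilibrium precursor level
      dt, t_end = 1.0e-4, 10.0
      for _ in range(int(t_end / dt)):
          dn = ((rho - beta) / Lambda) * n + lam * C
          dC = (beta / Lambda) * n - lam * C
          n, C = n + dt * dn, C + dt * dC

      print(f"relative power after {t_end} s: {n:.2f}")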

  20. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    Science.gov (United States)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-02-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ɛ , the length scale of energy-containing eddies L , a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η _θ ≡ ɛ _θ /ɛ . Here, the latter two parameters are collapsed into a single atmospheric stability parameter z / L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri,η _θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale ˜ 1 km. The model also overestimates coherences for vertical separations, but is less severe in unstable than in stable cases.

  1. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard, defined as the integrated intensity process. Instead, recent literature has shown a tendency towards specifying the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore, we also analyze specifications obtained via a simple deterministic time-change of a homogeneous Levy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on the single names included in the iTraxx Europe index. The performances are compared...
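
    In symbols (standard reduced-form credit risk notation, consistent with the abstract but not quoted from the paper): if \tau is the default time and \Lambda the cumulative hazard, then

      P(\tau > t) = \mathbb{E}\left[ e^{-\Lambda_t} \right], \qquad \Lambda_t = \int_0^t \lambda_s \, ds \quad \text{(classical intensity-based case)},

    whereas the Sato-process specification models \Lambda_t directly as a self-similar additive process, without positing an intensity \lambda_s.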

  2. Numerical Coupling of the Particulate Phase to the Plasma Phase in Modeling of Multi-Arc Plasma Spraying

    International Nuclear Information System (INIS)

    Bobzin, K.; Öte, M.

    2017-01-01

    In the Euler-Lagrange formulation, which can be used to describe particle behavior in plasma spraying, particle in-flight characteristics are determined by calculating the momentum, heat and mass transfer between the plasma jet and individual powder particles. Based on the assumption that the influence of the particulate phase on the fluid phase is insignificant, momentum, heat and mass transfer from the particles to the plasma jet can be neglected, using the so-called numerical approach of “one-way coupling”. In contrast, so-called “two-way coupling” considers the two-sided transfer between both phases. The former is a common simplification used in the literature to describe the plasma-particle interaction in thermal spraying. This study focuses on the significance of this simplification for the calculated results and shows that its use leads to significant errors in the calculated plasma and particle in-flight characteristics in the three-cathode plasma spraying process. (paper)
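
    A minimal sketch of the distinction, with made-up properties and a drag-only force balance (the actual multi-arc model is far richer): in two-way coupling, the momentum the particle gains is removed from the local gas cell; in one-way coupling that feedback is dropped.

      def advance_particle(vp, vg_cell, mp, m_gas_cell, dt, tau_p, two_way=True):
          """One explicit step of particle-gas momentum exchange.

          vp, vg_cell    : particle / local gas velocity (m/s)
          mp, m_gas_cell : particle mass / gas mass in the cell (kg)
          tau_p          : particle momentum response time (s); Stokes drag assumed
          """
          dv = (vg_cell - vp) * (dt / tau_p)   # drag accelerates the particle
          vp_new = vp + dv
          if two_way:
              # equal and opposite momentum source applied to the gas phase
              vg_cell -= dv * mp / m_gas_cell
          return vp_new, vg_cell

      vp, vg = 20.0, 800.0                     # illustrative plasma-spray velocities
      for _ in range(100):
          vp, vg = advance_particle(vp, vg, mp=1e-10, m_gas_cell=1e-8,
                                    dt=1e-6, tau_p=1e-4, two_way=True)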

  3. Multiphysics modelling of manufacturing processes: A review

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Baran, Ismet; Mohanty, Sankhya

    2018-01-01

    Numerical modelling is increasingly supporting the analysis and optimization of manufacturing processes in the production industry. Even though it is mostly applied to multistep processes, single process steps may be so complex by nature that the models needed to describe them must include multiphysics...... the diversity in the field of modelling of manufacturing processes as regards processes, materials, generic disciplines and length scales: (1) modelling of tape casting for thin ceramic layers, (2) modelling the flow of polymers in extrusion, (3) modelling the deformation process of flexible stamps...... for nanoimprint lithography, (4) modelling manufacturing of composite parts and (5) modelling the selective laser melting process. For all five examples, the emphasis is on modelling results as well as describing the models in brief mathematical detail. Along with relevant references to the original work

  4. Modelling and analysis of solar cell efficiency distributions

    Science.gov (United States)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines, based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
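
    A minimal sketch of the metamodel-plus-Monte-Carlo step: a cheap surrogate replaces the device simulation, and process scatter of the inputs is propagated through it by sampling. The surrogate form, the two input parameters and all numbers below are invented for illustration; the paper's metamodel is fitted to detailed cell simulations.

      import numpy as np

      rng = np.random.default_rng(1)

      def metamodel(j0, r_s):
          """Toy polynomial surrogate: cell efficiency (%) vs. two process inputs."""
          return 18.5 - 0.8 * (j0 - 1.0) ** 2 - 0.5 * (r_s - 0.5) ** 2

      # process scatter of the inputs (illustrative means / standard deviations)
      j0  = rng.normal(1.0, 0.15, 100_000)   # e.g. normalized saturation current
      r_s = rng.normal(0.5, 0.10, 100_000)   # e.g. normalized series resistance

      eta = metamodel(j0, r_s)               # Monte Carlo efficiency distribution
      print(f"mean efficiency {eta.mean():.2f}% +/- {eta.std():.2f}%")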

  5. Configurable multi-perspective business process models

    NARCIS (Netherlands)

    La Rosa, M.; Dumas, M.; Hofstede, ter A.H.M.; Mendling, J.

    2011-01-01

    A configurable process model provides a consolidated view of a family of business processes. It promotes the reuse of proven practices by providing analysts with a generic modeling artifact from which to derive individual process models. Unfortunately, the scope of existing notations for

  6. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  7. Perspective of the waste management research program at NRC on modeling phenomena related to the disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Randall, J.D.; Costanzi, F.A.

    1985-01-01

    Modeling the geologic disposal of high-level radioactive waste falls short of ideal for a variety of reasons. The understanding of the physical processes involved may be incomplete or incorrect. It may not be possible to specify mathematically all relationships among the processes involved. The initial or boundary conditions may not be known or directly measurable. Further, it is often impossible to obtain exact solutions to the mathematical relationships that constitute the mathematical model. Finally, many simplifications, approximations, and assumptions are needed to make the models both understandable and calculationally tractable. Yet modeling is the only means available by which any quantitative estimate of the expected long-term performance of a geologic repository can be made. If modeling estimates of the performance of a geologic repository are to provide effective support for an NRC finding of reasonable assurance of no unreasonable risk to the public health and safety, then the strengths and limitations of the modeling process, the models themselves, and the use of the models must be understood and explored fully.

  8. A model for steady-state HNF combustion

    Energy Technology Data Exchange (ETDEWEB)

    Louwers, J.; Gadiot, G.M.H.J.L. [TNO Prins Maurits Lab., Rijswijk (Netherlands); Brewster, M.Q. [Univ. of Illinois, Urbana, IL (United States); Son, S.F. [Los Alamos National Lab., NM (United States)

    1997-09-01

    A simple model for the combustion of solid monopropellants is presented. The condensed phase is treated by high activation energy asymptotics. The gas phase is treated by two limit cases: high activation energy, and low activation energy. This results in simplification of the gas phase energy equation, making an (approximate) analytical solution possible. The results of the model are compared with experimental results of Hydrazinium Nitroformate (HNF) combustion.
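
    For orientation, a typical high-activation-energy treatment of the condensed phase yields an Arrhenius-type pyrolysis law linking the mass burning rate to the surface temperature; a generic form (an illustrative assumption, not necessarily the exact expression used by the authors) is

        \[ m'' = A_c \exp\!\left(-\frac{E_c}{R\,T_s}\right), \]

    where \(m''\) is the mass burning rate, \(T_s\) the surface temperature, and \(A_c\), \(E_c\) the condensed-phase prefactor and activation energy. The two gas-phase limits then differ in how the flame heat flux that fixes \(T_s\) is approximated.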

  9. Syntax highlighting in business process models

    NARCIS (Netherlands)

    Reijers, H.A.; Freytag, T.; Mendling, J.; Eckleder, A.

    2011-01-01

    Sense-making of process models is an important task in various phases of business process management initiatives. Despite this, there is currently hardly any support in business process modeling tools to adequately support model comprehension. In this paper we adapt the concept of syntax

  10. Genetic Process Mining: Alignment-based Process Model Mutation

    NARCIS (Netherlands)

    Eck, van M.L.; Buijs, J.C.A.M.; Dongen, van B.F.; Fournier, F.; Mendling, J.

    2015-01-01

    The Evolutionary Tree Miner (ETM) is a genetic process discovery algorithm that enables the user to guide the discovery process based on preferences with respect to four process model quality dimensions: replay fitness, precision, generalization and simplicity. Traditionally, the ETM algorithm uses

  11. Cluster model of s- and p-shell ΛΛ hypernuclei

    Indian Academy of Sciences (India)

    simplifications the use of the cluster model for S = −2 systems has given ..... constructed from the Nijmegen soft-core NSC97e potential and denoted as V^{e1}_{ΛΛ} ..... This convergence of results reinforces the confidence in the methodology of all the...

  12. The triconnected abstraction of process models

    OpenAIRE

    Polyvyanyy, Artem; Smirnov, Sergey; Weske, Mathias

    2009-01-01

    Contents: Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske, The Triconnected Abstraction of Process Models. 1 Introduction; 2 Business Process Model Abstraction; 3 Preliminaries; 4 Triconnected Decomposition (4.1 Basic Approach for Process Component Discovery; 4.2 SPQR-Tree Decomposition; 4.3 SPQR-Tree Fragments in the Context of Process Models); 5 Triconnected Abstraction (5.1 Abstraction Rules; 5.2 Abstraction Algorithm); 6 Related Work and Conclusions

  13. Modeling Suspension and Continuation of a Process

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2012-04-01

    This work focuses on the difficulties an analyst encounters when modeling the suspension and continuation of a process in contemporary process modeling languages. As a basis, a general lifecycle of an activity is introduced and compared to the activity lifecycles supported by individual process modeling languages. The comparison shows that contemporary process modeling languages cover the defined general lifecycle of an activity only partially. Two popular process modeling languages are then selected and a real example is modeled, reviewing how these languages cope with their lack of native support for suspension and continuation of an activity. Given the unsatisfying results of the contemporary process modeling languages in the modeled example, a new process modeling language is presented which, as demonstrated, is capable of capturing suspension and continuation of an activity in a much simpler and more precise way.

  14. Modeling urban fire growth

    International Nuclear Information System (INIS)

    Waterman, T.E.; Takata, A.N.

    1983-01-01

    The IITRI Urban Fire Spread Model, as well as others of similar vintage, was constrained by computer size and running costs, such that many approximations and generalizations were introduced to reduce program complexity and data storage requirements. Simplifications were introduced both in the input data and in the fire growth and spread calculations. Modern computational capabilities offer the means to introduce greater detail and to examine its practical significance for urban fire predictions. Selected portions of the model are described as presently configured, and potential modifications are discussed. A single-tract model is hypothesized which permits the importance of various model details to be assessed, and other model applications are identified.

  15. Declarative modeling for process supervision

    International Nuclear Information System (INIS)

    Leyval, L.

    1989-01-01

    Our work is a contribution to the computer-aided supervision of continuous processes. It is inspired by an area of Artificial Intelligence: qualitative physics. Here, supervision is based on a model which continuously provides operators with a synthetic view of the process; this model is founded on general principles of control theory rather than on physics. It involves concepts such as high gain or small time response, and it helps in temporally linking the evolution of various variables. Moreover, the model provides predictions of the future behaviour of the process, which allows action advice and alarm filtering. This should greatly reduce the well-known cognitive overload associated with any complex and dangerous evolution of the process.

  16. Numerical modelling of the CHEMREC black liquor gasification process. Conceptual design study of the burner in a pilot gasification reactor

    Energy Technology Data Exchange (ETDEWEB)

    Marklund, Magnus

    2001-02-01

    The work presented in this report was done in order to develop a simplified CFD model for Chemrec's pressurised black liquor gasification process. This process is presently under development and will have a number of advantages compared to conventional processes for black liquor recovery. The main goal of this work has been to obtain qualitative information on the influence of burner design on the gas flow in the gasification reactor. Gasification of black liquor is a very complex process. The liquor is composed of a number of different substances, and the composition may vary considerably between liquors originating from different mills and even for black liquor from a single process. When a black liquor droplet is gasified it loses its organic material, producing combustible gases in three stages of conversion: drying, pyrolysis and char gasification. At the end of the conversion only an inorganic smelt remains (ideally). The aim is to make this smelt form a protective layer, against corrosion and heat, on the reactor walls. Due to the complexity of black liquor gasification, some simplifications had to be made in order to develop a CFD model for the preliminary design of the gasification reactor. Instead of modelling the droplets in detail, generating gas by gasification, sources were placed in a prescribed volume where gasification (mainly drying and pyrolysis) of the black liquor droplets was assumed to occur. Source terms for the energy and momentum equations, consistent with the mass source distribution, were derived from the corresponding control volume equations by assuming a symmetric outflow of gas from the droplets and a uniform degree of conversion of reactive components in the droplets. A particle transport model was also used in order to study the trajectories of droplets entering the reactor. The resulting model has been implemented in a commercial finite volume code (AEA-CFX) through customised Fortran subroutines. The advantages of this simple

  17. ECONOMIC MODELING PROCESSES USING MATLAB

    Directory of Open Access Journals (Sweden)

    Anamaria G. MACOVEI

    2008-06-01

    To study economic phenomena and processes using mathematical modeling, and to determine the approximate solution to a problem, we need to choose a method of calculation and a numerical computer program, namely the MatLab package of programs. Any economic process or phenomenon is given a mathematical description of its behavior, from which an economic-mathematical model is drawn up that has the following stages: formulation of the problem, analysis of the process model, production of the model and its design, and verification, validation and implementation of the model. This article presents an economic model whose modeling uses mathematical equations and the MatLab software package, which helps us to approximate the effective solution. As input data we consider the net cost, the direct and total cost, and the link between them. The basic formula for determining the total cost is presented. The economic model calculations were made with the MatLab software package, and a graphic representation supports the interpretation of the results achieved in terms of our specific problem.

  18. Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics

    Science.gov (United States)

    Marcé, R.; Armengol, J.

    2009-07-01

    One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a deterministic, watershed-scale biogeochemical model. Once the model was calibrated, the fitted phosphorus retention metrics were put in the context of global patterns of phosphorus retention variability. For this purpose, we calculated power regressions between phosphorus retention metrics, streamflow, and phosphorus concentration in water using published data from 66 streams worldwide, including both pristine and nutrient-enriched streams. Performance of the calibrated model confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Thus, this approach may be helpful even for customary deterministic applications working at short time steps. The calibrated phosphorus retention metrics were comparable to field estimates from the study watershed, and showed high coherence with global patterns of retention metrics from streams of the world. In this sense, the fitted phosphorus retention metrics were similar to field values measured in other nutrient-enriched streams. Analysis of the bibliographical data supports the view that nutrient-enriched streams have lower phosphorus retention efficiency than pristine streams, and that this efficiency loss is maintained in a wide
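
    The global pattern analysis described amounts to fitting power laws of the form S_w = a·Q^b between retention metrics and streamflow, which is conveniently done as a linear regression in log-log space. A minimal Python sketch with made-up observations (variable names and data are illustrative only):

        import numpy as np

        # Illustrative (made-up) uptake lengths S_w [m] and streamflows Q [m3/s]
        Q = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
        Sw = np.array([120.0, 210.0, 800.0, 1500.0, 2600.0, 7000.0])

        # Fit S_w = a * Q**b  <=>  log S_w = log a + b * log Q
        b, log_a = np.polyfit(np.log(Q), np.log(Sw), 1)
        print(f"S_w ~ {np.exp(log_a):.0f} * Q^{b:.2f}")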

  19. Thermophysical modeling for high-resolution digital terrain models

    Science.gov (United States)

    Pelivan, I.

    2018-04-01

    A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions necessary to reach model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavorable illumination conditions, such as little to no sunlight for periods of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environment and lander shadowing. In combination with an appropriate roughness model, the method is well suited for application to disk-integrated and disk-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
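
    The per-facet computation described is, at its core, a one-dimensional heat-conduction problem driven by a surface energy balance. The Python sketch below shows an explicit finite-difference version for a single facet; the grid resolution, thermal properties and insolation function are assumed placeholders, not values from the paper.

        import numpy as np

        def facet_temperatures(flux, k=0.01, rho=500.0, cp=600.0, eps=0.95,
                               nz=40, dz=0.002, dt=1.0, steps=50_000, T0=150.0):
            """Explicit 1D conduction with the surface balance
            eps*sigma*T^4 = absorbed insolation + heat conducted from below."""
            sigma = 5.670e-8
            kappa = k / (rho * cp)              # thermal diffusivity
            assert kappa * dt / dz**2 < 0.5     # explicit stability criterion
            T = np.full(nz, T0)                 # initial guess; a pre-computed
                                                # estimate shortens convergence
            for n in range(steps):
                T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
                q_cond = k * (T[1] - T[0]) / dz                 # conduction into surface node
                T[0] += dt / (rho * cp * dz) * (flux(n * dt) - eps * sigma * T[0]**4 + q_cond)
                T[-1] = T[-2]                   # insulated lower boundary
            return T

        # Usage: diurnally varying insolation on one facet (illustrative)
        day = 12.4 * 3600.0
        T = facet_temperatures(lambda t: max(0.0, 300.0 * np.sin(2.0 * np.pi * t / day)))
        print(f"surface temperature ~ {T[0]:.1f} K")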

  20. Styles in business process modeling: an exploration and a model

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Fahland, D.; Weidlich, M.; Zugal, S.; Weber, B.; Reijers, H.A.; Mendling, J.

    2015-01-01

    Business process models are an important means to design, analyze, implement, and control business processes. As with every type of conceptual model, a business process model has to meet certain syntactic, semantic, and pragmatic quality requirements to be of value. For many years, such quality

  1. Composing Models of Geographic Physical Processes

    Science.gov (United States)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central to geographic information science; yet geographic information systems (GIS) lack the capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method for describing geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, as shown by means of an example.

  2. The effects of modeling simplifications on craniofacial finite element models: the alveoli (tooth sockets) and periodontal ligaments.

    Science.gov (United States)

    Wood, Sarah A; Strait, David S; Dumont, Elizabeth R; Ross, Callum F; Grosse, Ian R

    2011-07-07

    Several finite element models of a primate cranium were used to investigate the biomechanical effects of the tooth sockets and the material behavior of the periodontal ligament (PDL) on stress and strain patterns associated with feeding. For examining the effect of tooth sockets, the unloaded sockets were modeled as devoid of teeth and PDL, filled with teeth and PDLs, or simply filled with cortical bone. The third premolar on the left side of the cranium was loaded and the PDL was treated as an isotropic, linear elastic material using published values for Young's modulus and Poisson's ratio. The remaining models, along with one of the socket models, were used to determine the effect of the PDL's material behavior on stress and strain distributions under static premolar biting and dynamic tooth loading conditions. Two models (one static and the other dynamic) treated the PDL as cortical bone. The other two models treated it as a ligament with isotropic, linear elastic material properties. Two models treated the PDL as a ligament with hyperelastic properties, and the other two as a ligament with viscoelastic properties. Both behaviors were defined using published stress-strain data obtained from in vitro experiments on porcine ligament specimens. Von Mises stress and strain contour plots indicate that the effects of the sockets and PDL material behavior are local. Results from this study suggest that modeling the sockets and the PDL in finite element analyses of skulls is project dependent and can be ignored if values of stress and strain within the alveolar region are not required.

  3. A New Approach in the Simplification of a Multiple-Beam Forming Network Based on CORPS Using Compressive Arrays

    Directory of Open Access Journals (Sweden)

    Armando Arce

    2012-01-01

    This research paper deals with an innovative way to simplify the design of beam-forming networks (BFNs) for multibeam steerable antenna arrays based on coherently radiating periodic structures (CORPS) technology, using the noniterative matrix pencil method (MPM). This design approach is based on the application of the MPM to linear arrays fed by CORPS-BFN configurations to further reduce the complexity of the beam-forming network. Two 2-beam design configurations of a CORPS-BFN for a steerable linear array are analyzed and compared using this compressive method. Simulation results show the effectiveness and advantages of applying the MPM to BFNs based on CORPS, exploiting the nonuniformity of the antenna elements. Furthermore, the final results show that the integration of CORPS-BFN and MPM reduces the entire antenna system, including the antenna array and the beam-forming network subsystem, resulting in a substantial simplification of such systems.

  4. Development of a simplified fuel-cladding gap conductance model for nuclear feedback calculation in 16x16 FA

    International Nuclear Information System (INIS)

    Yoo, Jong Sung; Park, Chan Oh; Park, Yong Soo

    1995-01-01

    The accurate determination of the fuel-cladding gap conductance as a function of rod burnup and power level may be a key to the design and safety analysis of a reactor. The incorporation of a sophisticated gap conductance model into a nuclear design code for computing the thermal hydraulic feedback effect has not been implemented, mainly because of the computational inefficiency caused by the complicated behavior of gap conductance. To avoid the time-consuming iteration scheme, the gap conductance model is simplified relative to the current design model. The simplified model considers only the heat conduction contribution to the gap conductance. The simplification is made possible by directly considering the gap conductivity, which depends on the composition of the constituent gases in the gap and on the fuel-cladding gap size, obtained from computer simulation of representative power histories. The simplified gap conductance model is applied to various fuel power histories, and the predicted gap conductances are found to agree well with the results of the design model.
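
    A conduction-only gap conductance reduces to the mixture gas conductivity divided by an effective gap width. The Python sketch below illustrates that structure; the mixing rule, jump-distance term, and property correlations are generic textbook-style assumptions, not the paper's design model.

        def gap_conductance(gap_m, x_he, T_K, g_jump=1e-6):
            """Conduction-only gap conductance: h = k_mix / (gap + jump distance).
            Helium/xenon conductivity correlations and the geometric-mean mixing
            rule are illustrative assumptions."""
            k_he = 3.366e-3 * T_K ** 0.668          # W/(m K), helium (assumed)
            k_xe = 4.351e-5 * T_K ** 0.872          # W/(m K), xenon (assumed)
            x_xe = 1.0 - x_he
            k_mix = (k_he ** x_he) * (k_xe ** x_xe) # simple mixing rule
            return k_mix / (gap_m + g_jump)

        # Fresh fuel (pure He, open gap) vs. irradiated fuel (Xe released, gap closing)
        print(gap_conductance(80e-6, 1.00, 800.0))  # W/(m2 K), fresh fuel
        print(gap_conductance(20e-6, 0.60, 800.0))  # W/(m2 K), high burnup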

  5. Mathematical modeling of a biogenous filter cake and identification of oilseed material parameters

    Directory of Open Access Journals (Sweden)

    Očenášek J.

    2009-12-01

    Mathematical modeling of the filtration and extrusion process inside a linear compression chamber has gained a lot of attention during the past several decades. This subject was originally related to the mechanical and hydraulic properties of soils (in particular the work of Terzaghi), and later this approach was adopted for the modeling of various technological processes in the chemical industry (the work of Shirato). The mathematical models developed for the continuum mechanics of porous materials with interstitial fluid were then applied to the problem of oilseed expression as well. In this case, various simplifications and partial linearizations are introduced into the models for the sake of analytical or numerical solubility; otherwise it is not possible to generalize the model formulation to the fully 3D problem of oil expression extrusion with a complex geometry, such as that of a screw press extruder. We propose a modified model for the oilseed expression process in a linear compression chamber. The model accounts for the rheological properties of the deformable solid matrix of the compressed seed, while the permeability of the porous solid is described by Darcy's law. A methodology for the experimental work necessary for material parameter identification is presented, together with numerical simulation examples.
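
    Darcy's law, which the model uses for the interstitial oil flow, relates the superficial velocity to the pressure gradient through the cake. A one-function Python sketch with illustrative permeability and viscosity values:

        def darcy_flux(k_m2, mu_pa_s, dp_pa, thickness_m):
            """Superficial velocity q = (k/mu) * (pressure drop / thickness),
            i.e. Darcy's law q = -(k/mu) dP/dx for a uniform cake."""
            return (k_m2 / mu_pa_s) * (dp_pa / thickness_m)

        # Illustrative values: k = 1e-14 m^2, oil viscosity 0.06 Pa.s,
        # 2 MPa pressure drop across a 10 mm cake
        print(darcy_flux(1e-14, 0.06, 2e6, 0.01), "m/s")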

  6. Modelling of radionuclide transport in forests: Review and future perspectives

    International Nuclear Information System (INIS)

    Shaw, G.; Schell, W.; Linkov, I.

    1997-01-01

    Ecological modeling is a powerful tool which can be used to synthesize information on the dynamic processes that occur in ecosystems. Models of radionuclide transport in forests were first constructed in the mid-1960s, when the consequences of global fallout from nuclear weapons tests and of waste disposal in the environment were of great concern. Such models were developed from site-specific experimental data and were designed to address local needs; they had limited applicability for evaluating distinct ecosystems and deposition scenarios. Given the scarcity of information, the same experimental data sets were often used both for model calibration and for validation, an approach which clearly constitutes a methodological error. Even though the early modeling attempts were far from faultless, they established a useful conceptual approach in that they tried to capture general processes in ecosystems and thus had a holistic nature. Later, radioecological modeling attempted to reveal ecosystem properties by separating the component parts from the whole system, as an approach to simplification. This method worked well for radionuclide transport in agricultural ecosystems, in which the biogeochemistry of radionuclide cycling is relatively well understood and can be influenced by fertilization. Several models have been successfully developed and applied to human dose evaluation and emergency response to contaminating events in agricultural lands

  7. The Structured Process Modeling Theory (SPMT) : a cognitive view on why and how modelers benefit from structuring the process of process modeling

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Gailly, F.; Grefen, P.W.P.J.; Poels, G.

    2015-01-01

    After observing various inexperienced modelers constructing a business process model based on the same textual case description, it was noted that great differences existed in the quality of the produced models. The impression arose that certain quality issues originated from cognitive failures

  8. Integrated modelling in materials and process technology

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri

    2008-01-01

    Integrated modelling of entire process sequences and the subsequent in-service conditions, and multiphysics modelling of the single process steps are areas that increasingly support optimisation of manufactured parts. In the present paper, three different examples of modelling manufacturing...... processes from the viewpoint of combined materials and process modelling are presented: solidification of thin walled ductile cast iron, integrated modelling of spray forming and multiphysics modelling of friction stir welding. The fourth example describes integrated modelling applied to a failure analysis...

  9. Potential of Laboratory Execution Systems (LESs) to Simplify the Application of Business Process Management Systems (BPMSs) in Laboratory Automation.

    Science.gov (United States)

    Neubert, Sebastian; Göde, Bernd; Gu, Xiangyu; Stoll, Norbert; Thurow, Kerstin

    2017-04-01

    Modern business process management (BPM) is increasingly interesting for laboratory automation. End-to-end workflow automation and improved top-level systems integration for information technology (IT) and automation systems are especially prominent objectives. With the ISO standard Business Process Model and Notation (BPMN) 2.X, a system-independent and interdisciplinarily accepted graphical process control notation is provided, allowing process analysis while also being executable. The transfer of BPM solutions to structured laboratory automation places novel demands, for example, concerning real-time-critical process and systems integration. The article discusses the potential of laboratory execution systems (LESs) for an easier implementation of a business process management system (BPMS) in hierarchical laboratory automation. In particular, complex application scenarios, including long process chains based on, for example, several distributed automation islands and mobile laboratory robots for material transport, are difficult to handle in BPMSs. The presented approach deals with the displacement of workflow control tasks into life-science-specialized LESs, the reduction of the numerous different interfaces between BPMSs and subsystems, and the simplification of complex process models. Thus, the integration effort for complex laboratory workflows can be significantly reduced for strictly structured automation solutions. An example application, consisting of a mixture of manual and automated subprocesses, is demonstrated by the presented BPMS-LES approach.

  10. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    Science.gov (United States)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling. PMID:27157858

  11. A Nonlinear Ship Manoeuvering Model: Identification and adaptive control with experiments for a model ship

    Directory of Open Access Journals (Sweden)

    Roger Skjetne

    2004-01-01

    Complete nonlinear dynamic manoeuvering models of ships, with numerical values, are hard to find in the literature. This paper presents a modeling, identification, and control design effort where the objective is to manoeuver a ship along desired paths at different velocities. Material from a variety of references has been used to describe the ship model, its difficulties, limitations, and possible simplifications for the purpose of automatic control design. The numerical values of the parameters in the model are identified in towing tests and adaptive manoeuvering experiments for a small ship in a marine control laboratory.

  12. Detecting Difference between Process Models Based on the Refined Process Structure Tree

    Directory of Open Access Journals (Sweden)

    Jing Fan

    2017-01-01

    The development of mobile workflow management systems (mWfMS) leads to a large number of business process models. In the meantime, the location restriction embedded in mWfMS may result in different process models for a single business process. In order to help users quickly locate the difference and rebuild the process model, detecting the difference between different process models is needed. Existing detection methods either provide a dissimilarity value to represent the difference or use a predefined difference template to generate the result, which cannot reflect the entire composition of the difference. Hence, in this paper, we present a new approach to solve this problem. Firstly, we parse the process models into their corresponding refined process structure trees (PSTs), that is, we decompose a process model into a hierarchy of subprocess models. Then we design a method to convert the PST to its corresponding task-based process structure tree (TPST). As a consequence, the problem of detecting the difference between two process models is transformed into detecting the difference between their corresponding TPSTs. Finally, we obtain the difference between two TPSTs based on a divide-and-conquer strategy, where the difference is described by an edit script whose cost we keep close to the minimum. The extensive experimental evaluation shows that our method can meet real requirements in terms of precision and efficiency.
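
    The core idea, producing an edit script between two task trees by divide and conquer, can be illustrated with a toy recursive differ in Python. The tree encoding (label plus child list) and the three edit operations are illustrative simplifications of the TPST machinery in the paper.

        def diff(a, b, path="root"):
            """Toy edit script between trees given as (label, [children])."""
            ops = []
            if a[0] != b[0]:
                ops.append(("relabel", path, a[0], b[0]))
            ca, cb = a[1], b[1]
            for i in range(max(len(ca), len(cb))):
                if i >= len(ca):
                    ops.append(("insert", f"{path}/{i}", cb[i][0]))
                elif i >= len(cb):
                    ops.append(("delete", f"{path}/{i}", ca[i][0]))
                else:
                    ops.extend(diff(ca[i], cb[i], f"{path}/{i}"))  # divide and conquer
            return ops

        seq = ("SEQ", [("A", []), ("B", []), ("C", [])])
        alt = ("SEQ", [("A", []), ("X", [])])
        print(diff(seq, alt))
        # [('relabel', 'root/1', 'B', 'X'), ('delete', 'root/2', 'C')]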

  13. Strategies to Automatically Derive a Process Model from a Configurable Process Model Based on Event Data

    Directory of Open Access Journals (Sweden)

    Mauricio Arriagada-Benítez

    2017-10-01

    Configurable process models are frequently used to represent business workflows and other discrete event systems across different branches of large organizations: they unify the commonalities shared by all branches and, at the same time, describe their differences. The configuration of such models is usually done manually, which is challenging. On the one hand, when the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at the same time, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms and a greedy heuristic) that use event data to automatically derive, from a configurable process model, a process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested on both business-like event logs, as recorded in a higher-education enterprise resource planning system, and a real case scenario involving a set of Dutch municipalities.
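
    Of the three strategies, the greedy heuristic is the simplest to sketch: configure one node at a time, each time keeping the option whose resulting model best matches the event log under some fitness score. In the Python toy below, the node names, options, and frequency-based score are illustrative stand-ins for the framework's actual components.

        def greedy_configure(nodes, options, score, log):
            """Greedy strategy: fix one configurable node at a time, keeping the
            option that maximizes a log-based fitness of the partial configuration."""
            config = {}
            for node in nodes:
                best = max(options[node],
                           key=lambda opt: score({**config, node: opt}, log))
                config[node] = best
            return config

        # Toy example: score = how often the kept activities appear in the log
        log = ["register", "check", "pay", "register", "pay"]
        nodes = ["gateway1", "gateway2"]
        options = {"gateway1": ["check", "skip"], "gateway2": ["pay", "invoice"]}
        score = lambda cfg, log: sum(log.count(v) for v in cfg.values())
        print(greedy_configure(nodes, options, score, log))
        # {'gateway1': 'check', 'gateway2': 'pay'}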

  14. Repairing process models to reflect reality

    NARCIS (Netherlands)

    Fahland, D.; Aalst, van der W.M.P.; Barros, A.; Gal, A.; Kindler, E.

    2012-01-01

    Process mining techniques relate observed behavior (i.e., event logs) to modeled behavior (e.g., a BPMN model or a Petri net). Processes models can be discovered from event logs and conformance checking techniques can be used to detect and diagnose differences between observed and modeled behavior.

  15. Performance characterization of hydrogen isotope exchange and recombination catalysts for tritium processing

    International Nuclear Information System (INIS)

    Suppiah, S.; Ryland, D.; Marcinkowska, K.; Boniface, H.; Everatt, A.

    2010-01-01

    AECL's hydrogen isotope exchange and recombination catalysts have been successfully applied to a wide range of industrial tritium-removal applications. The catalysts are used for Liquid Phase Catalytic Exchange (LPCE) and for gas-phase and trickle-bed recombination of hydrogen isotopes, and have led to process simplification, improved safety and operational advantages. Catalyst performance design equations derived from laboratory testing of these catalysts have been validated against performance under industrial conditions. In a Combined Electrolysis and Catalytic Exchange (CECE) demonstration plant, analyses of LPCE and recombiner efficiency were carried out as a function of catalyst activity over a wide range of operation. A steady-state process simulation used to model and design hydrogen-water isotopic exchange processes, such as the CECE detritiation plant, was validated using the results of this demonstration. Catalyst development for isotope-exchange and recombination applications has continued over the last decade. As a result, significant improvements in catalyst performance have been achieved for these applications. This paper outlines the uniqueness of AECL's specialized catalysts and process designs for these applications, with examples from laboratory and industrial case studies.

  16. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    The article outlines the application of the process approach to the functional description of a designed IT system supporting the operations of a secret office, which processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented manually (“as is”) and the target processes (“to be”) using RFID technology for the purpose of their automation. Additionally, examples of applying methods of structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency are presented. An extension of the process analysis method is the possibility of applying a warehouse of processes and process mining methods.

  17. Simulation of a combustion process of a billet reheating furnace; Simulacao do processo de combustao de um forno de reaquecimento de tarugos

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Eduardo Sergio da Silva; Barros, Jose Eduardo Mautone [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Mecanica; Ribeiro, Vicente Aleixo Pinheiro [ArcelorMittal Monlevade, Serra, ES (Brazil); Moura Junior, Jose dos Reis Vieira de [ArcelorMittal Long Carbon Americas (Luxembourg); Belisario, Leandro Pego [Universidade Federal de Ouro Preto (UFOP), MG (Brazil)

    2010-07-01

    Energy balances based on real data with few simplifications are a powerful tool for evaluating the energy performance of furnaces, helping technical staff to guide efforts on energy consumption issues and, consequently, on final product cost reduction. This paper presents a methodology to simulate the combustion process under several operational conditions of a walking-hearth reheating furnace for billets in a rolling mill facility. The computational model consists, basically, of a dynamic solution in which measured input variables are supplied from the furnace supervisory system and compared to measurements from instruments in the system. Finally, a variability analysis of the furnace and heat exchanger efficiencies is made.

  18. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as

  19. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling

    Science.gov (United States)

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2010-01-01

    This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome. PMID:20157639

  20. Behavioral conformance of artifact-centric process models

    NARCIS (Netherlands)

    Fahland, D.; Leoni, de M.; Dongen, van B.F.; Aalst, van der W.M.P.; Abramowicz, W.

    2011-01-01

    The use of process models in business information systems for analysis, execution, and improvement of processes assumes that the models describe reality. Conformance checking is a technique to validate how good a given process model describes recorded executions of the actual process. Recently,

  1. Modelling critical degrees of saturation of porous building materials subjected to freezing

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction of SCR based on fracture mechanics and phase geometry of two-phase materials has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Simplifications are made to describe the development of stresses and the pore structure, because a mathematical description of the physical theories explaining the process of freezing of water in porous materials is lacking. Calculations are based on porosity, modulus of elasticity and tensile strength, and parameters characterizing...

  2. Tools for Resilience Management: Multidisciplinary Development of State-and-Transition Models for Northwest Colorado

    Directory of Open Access Journals (Sweden)

    Emily J. Kachergis

    2013-12-01

    Building models is an important way of integrating knowledge. Testing and updating models of social-ecological systems can inform management decisions and, ultimately, improve resilience. We report on the outcomes of a six-year, multidisciplinary model development process in the sagebrush steppe, USA. We focused on creating state-and-transition models (STMs), conceptual models of ecosystem change that represent nonlinear dynamics and are being adopted worldwide as tools for managing ecosystems. STM development occurred in four steps with four distinct sets of models: (1) local knowledge elicitation using semistructured interviews; (2) ecological data collection using an observational study; (3) model integration using participatory workshops; and (4) model simplification upon review of the literature by a multidisciplinary team. We found that different knowledge types are ultimately complementary. Many of the benefits of the STM-building process flowed from the knowledge integration steps, including improved communication, identification of uncertainties, and production of more broadly credible STMs that can be applied in diverse situations. The STM development process also generated hypotheses about sagebrush steppe dynamics that could be tested by future adaptive management and research. We conclude that multidisciplinary development of STMs has great potential for producing credible, useful tools for managing resilience of social-ecological systems. Based on this experience, we outline a streamlined, participatory STM development process that integrates multiple types of knowledge and incorporates adaptive management.

  3. Cost Models for MMC Manufacturing Processes

    Science.gov (United States)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory, and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material, and little attention has been given to the economic feasibility of these laboratory approaches and process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes, and the resulting predicted quality-cost curves are presented and discussed.

  4. Conceptual modelling of human resource evaluation process

    Directory of Open Access Journals (Sweden)

    Negoiţă Doina Olivia

    2017-01-01

    Taking into account the highly diverse tasks which employees have to fulfil due to the complex requirements of today's consumers, the human resource within an enterprise has become a strategic element for developing and exploiting products which meet market expectations. Therefore, organizations encounter difficulties when approaching the human resource evaluation process. Hence, the aim of the current paper is to design a conceptual model of the aforementioned process, which allows enterprises to develop a specific methodology. In order to design the conceptual model, Business Process Modelling instruments were employed: the Adonis Community Edition Business Process Management Toolkit using the ADONIS BPMS notation. The conceptual model was developed based on in-depth secondary research regarding the human resource evaluation process. The proposed conceptual model represents a generic workflow (sequential and/or simultaneous activities) which can be extended considering an enterprise's needs and requirements when conducting a human resource evaluation process. Enterprises can benefit from using software instruments for business process modelling as they enable process analysis and evaluation (predefined/specific queries) and also model optimization (simulations).

  5. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics, and evaluate its performance. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
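
    The key design choice, sampling domain-specific knowledge rather than data, can be illustrated with a toy Python sketch: ensemble members are built by drawing alternative process formulations from a knowledge library, and their simulations are combined. The library entries and the simple Euler integrator are illustrative stand-ins for the process-based modeling machinery.

        import random
        import numpy as np

        # Toy knowledge library: alternative growth-process formulations
        library = [
            lambda x: 0.5 * x * (1.0 - x / 10.0),          # logistic growth
            lambda x: 0.4 * x * (1.0 - (x / 10.0) ** 2),   # alternative saturation
            lambda x: 0.5 * x - 0.03 * x ** 2,             # quadratic loss term
        ]

        def simulate(f, x0=1.0, dt=0.1, steps=100):
            """Explicit Euler integration of dx/dt = f(x)."""
            x, out = x0, []
            for _ in range(steps):
                x += dt * f(x)
                out.append(x)
            return np.array(out)

        # Ensemble built by sampling model structures, not observation data
        random.seed(1)
        members = [simulate(random.choice(library)) for _ in range(25)]
        prediction = np.mean(members, axis=0)  # combined ensemble trajectory
        print(prediction[-1])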

  6. Information-Processing Models and Curriculum Design

    Science.gov (United States)

    Calfee, Robert C.

    1970-01-01

    "This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)

  7. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model that can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications...
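
    As a rough schematic, realized GARCH-type models of this family couple a return equation, a log-variance recursion, and a measurement equation for each realized measure x_t (the exact specification in the paper may differ):

        \[ r_t = \sqrt{h_t}\, z_t, \qquad \log h_t = \omega + \beta \log h_{t-1} + \tau(z_{t-1}) + \gamma u_{t-1}, \qquad x_t = \xi + \varphi \log h_t + \delta(z_t) + u_t, \]

    where \(z_t\) is an i.i.d. innovation, \(u_t\) is the measurement error of the realized measure, and the leverage functions \(\tau(\cdot)\) and \(\delta(\cdot)\) capture the dependence between returns and volatility.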

  8. Download this PDF file

    African Journals Online (AJOL)

    2015-01-07

    Jan 7, 2015 ... 2Hydrology and Water Quality, Agricultural and Biological Engineering Department, ... This general methodology is applied to a reservoir model of the Okavango ... undermines the value of a model for its use in the decision-making process. ... ing of a system, and simplifications inherent to the modelling.

  9. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate a storage without redundancy and an efficient retrieval of relevant process models in model databases it is required to develop a theoretical understanding

  10. Model medication management process in Australian nursing homes using business process modeling.

    Science.gov (United States)

    Qian, Siyu; Yu, Ping

    2013-01-01

    One of the reasons for end users avoiding or rejecting health information systems is poor alignment of the system with the healthcare workflow, likely caused by system designers' lack of a thorough understanding of the healthcare process. Therefore, understanding the healthcare workflow is the essential first step in the design of optimal technologies that will enable care staff to complete the intended tasks faster and better. The frequent use of multiple or "high risk" medicines by older people in nursing homes has the potential to increase the medication error rate. To facilitate the design of information systems with the most potential to improve patient safety, this study aims to understand the medication management process in nursing homes using the business process modeling method. The paper presents the study design and preliminary findings from interviews with two registered nurses, who were team leaders in two nursing homes. Although there were subtle differences in medication management between the two homes, the major medication management activities were similar. Further field observation will be conducted. Based on the data collected from observations, an as-is process model for medication management will be developed.

  11. Text Simplification Using Consumer Health Vocabulary to Generate Patient-Centered Radiology Reporting: Translation and Evaluation.

    Science.gov (United States)

    Qenam, Basel; Kim, Tae Youn; Carroll, Mark J; Hogarth, Michael

    2017-12-18

    Radiology reporting is a clinically oriented form of documentation that reflects critical information for patients about their health care processes. Recognizing its importance, many medical institutions have started providing radiology reports in patient portals. The gain, however, can be limited by medical language barriers, which call for a way of customizing these reports for patients. The open-access, collaborative consumer health vocabulary (CHV) is a terminology system created for such purposes and can be the basis of lexical simplification processes for clinical notes. The aim of this study was to examine the comprehensibility and suitability of CHV for simplifying radiology reports for consumers. This was done by characterizing the content coverage and the lexical similarity between the terms in the reports and the CHV-preferred terms. The overall procedure was divided into two main stages: (1) translation and (2) evaluation. The translation process involved using MetaMap to link terms in the reports to CHV concepts, followed by replacing the terms with CHV-preferred terms using the concept names and sources table (MRCONSO) in the Unified Medical Language System (UMLS) Metathesaurus. In the second stage, medical terms in the reports and general terms used to describe medical phenomena were selected and evaluated by comparing the words in the original reports with those in the translated ones. The evaluation included measuring the content coverage, investigating lexical similarity, and finding trends in missing concepts. Of the 792 terms selected from the radiology reports, 695 could be mapped directly to CHV concepts, indicating a content coverage of 88.5%. A total of 51 of the 97 concepts that could not be mapped (53%) are names of human anatomical structures and regions, followed by 28 anatomical descriptions and pathological variations (29%). In addition, 12 radiology techniques and projections represented
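
    The translation stage is, in essence, a lexical substitution pipeline: detect UMLS concepts in the report text, then replace each with its CHV-preferred name. A toy Python sketch of the replacement step follows; the real pipeline uses MetaMap and the UMLS MRCONSO table, and the mini-dictionary here is an illustrative stand-in.

        import re

        # Illustrative CHV-style mapping: professional term -> consumer term
        chv_preferred = {
            "hepatic steatosis": "fatty liver",
            "renal calculus": "kidney stone",
            "edema": "swelling",
        }

        def simplify(report):
            """Replace each recognized term with its CHV-preferred form."""
            for term in sorted(chv_preferred, key=len, reverse=True):  # longest match first
                report = re.sub(rf"\b{re.escape(term)}\b", chv_preferred[term],
                                report, flags=re.IGNORECASE)
            return report

        text = "Findings: hepatic steatosis; no renal calculus; mild edema."
        print(simplify(text))
        # Findings: fatty liver; no kidney stone; mild swelling.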

  12. Mathematical modeling in biology: A critical assessment

    Energy Technology Data Exchange (ETDEWEB)

    Buiatti, M. [Florence, Univ. (Italy). Dipt. di Biologia Animale e Genetica

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of the mathematisation of biological objects and processes, derived from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  13. Mathematical modeling in biology: A critical assessment

    International Nuclear Information System (INIS)

    Buiatti, M.

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of the mathematisation of biological objects and processes, derived from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples

  14. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and for the sensitivity analysis of such indices to changes in organization parameters and user requirements. The approach uses a timed Petri net and an object-oriented, top-down model specification. Results demonstrate the model's representativeness and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  15. Social software for business process modeling

    NARCIS (Netherlands)

    Koschmider, A.; Song, M.S.; Reijers, H.A.

    2010-01-01

    Formal models of business processes are used for a variety of purposes. But where the elicitation of the characteristics of a business process usually takes place in a collaborative fashion, the building of the final, formal process model is done mostly by a single person. This article presents the

  16. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubinina

    2015-06-01

    The essence of process-oriented enterprise management is examined in the article. The content and types of information technology are analyzed, given the complexity and differentiation of existing methods as well as the specificity of the language and terminology of enterprise business process modeling. The theoretical aspects of business process modeling are reviewed, and modern traditional modeling techniques that have found practical application in a visualization model of retailers' activity are studied. The theoretical analysis of the modeling methods found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes due to its integrated systemology capabilities. A visualized simulation model of the business process "sales" ("as is") of retailers was designed using a combination of UFO-elements, with the aim of further practical formalization and optimization of the given business process.

  17. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated from the potential of such a combination in safety analysis based on models comprising both software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories (author) (ml)

  18. Amplitudes for multiphoton quantum processes in linear optics

    International Nuclear Information System (INIS)

    Urías, Jesús

    2011-01-01

    The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of products of linear operators, in general, is proved. We use it to extract the combinatorics of any multiphoton quantum processes in linear optics. The result is presented as a concise rule to write down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides an intuitive physical insight about quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
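
    As a hedged illustration of the combinatorics involved (a standard result of linear-optics theory in general, not the paper's specific rule): the amplitude for n photons to traverse a linear network described by a unitary U is proportional to the permanent of an n-by-n submatrix of U. The sketch below computes permanents by Ryser's formula and reproduces the Hong-Ou-Mandel suppression for a 50/50 beamsplitter.

```python
from itertools import combinations
import math

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Two photons on a 50/50 beamsplitter: the amplitude for one photon in
# each output port is proportional to the permanent of the full 2x2
# unitary, which vanishes -- the Hong-Ou-Mandel suppression.
s = 1.0 / math.sqrt(2.0)
beamsplitter = [[s, s], [s, -s]]
print(permanent(beamsplitter))   # ~0.0
```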

  19. Amplitudes for multiphoton quantum processes in linear optics

    Science.gov (United States)

    Urías, Jesús

    2011-07-01

    The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of products of linear operators, in general, is proved. We use it to extract the combinatorics of any multiphoton quantum processes in linear optics. The result is presented as a concise rule to write down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides an intuitive physical insight about quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.

  20. Modeling closed nuclear fuel cycles processes

    Energy Technology Data Exchange (ETDEWEB)

    Shmidt, O.V. [A.A. Bochvar All-Russian Scientific Research Institute for Inorganic Materials, Rogova, 5a street, Moscow, 123098 (Russian Federation); Makeeva, I.R. [Zababakhin All-Russian Scientific Research Institute of Technical Physics, Vasiliev street 13, Snezhinsk, Chelyabinsk region, 456770 (Russian Federation); Liventsov, S.N. [Tomsk Polytechnic University, Tomsk, Lenin Avenue, 30, 634050 (Russian Federation)

    2016-07-01

    Computer models of processes are necessary for determining optimal operating conditions for closed nuclear fuel cycle (NFC) processes. Computer models can be quickly changed in accordance with new and fresh data from experimental research. Three kinds of process simulation are necessary. First, the VIZART software package is a balance-model development used for calculating material flow in technological processes; VIZART takes into account equipment capacity, transport lines and storage volumes. Secondly, it is necessary to simulate the physico-chemical processes that are involved in the closure of the NFC. The third kind of simulation is the development of software that allows the optimization, diagnostics and control of the processes, which implies real-time simulation of product flows in the whole plant or in separate lines of the plant. (A.C.)

  1. Using FlowLab, an educational computational fluid dynamics tool, to perform a comparative study of turbulence models

    International Nuclear Information System (INIS)

    Parihar, A.; Kulkarni, A.; Stern, F.; Xing, T.; Moeykens, S.

    2005-01-01

    Flow over an Ahmed body is a key benchmark case for validating the complex turbulent flow field around vehicles. In spite of the simple geometry, the flow field around an Ahmed body retains critical features of real, external vehicular flow. The present study is an attempt to implement such a real-life example into the course curriculum for undergraduate engineers. FlowLab, which is a Computational Fluid Dynamics (CFD) tool developed by Fluent Inc. for use in engineering education, allows students to conduct interactive application studies. This paper presents a synopsis of FlowLab, a description of one FlowLab exercise, and an overview of the educational experience gained by students through using FlowLab, which is understood through student surveys and examinations. FlowLab-based CFD exercises were implemented into 57:020 Mechanics of Fluids and Transport Processes and 58:160 Intermediate Mechanics of Fluids courses at the University of Iowa in the fall of 2004, although this report focuses only on experiences with the Ahmed body exercise, which was used only in the intermediate-level fluids class, 58:160. This exercise was developed under National Science Foundation funding by the authors of this paper. The focus of this study does not include validating the various turbulence models used for the Ahmed body simulation, because a two-dimensional simplification was applied. With the two-dimensional simplification, students may set up, run, and post-process this model in a 50-minute class period using a single-CPU PC, as required for the 58:160 class at the University of Iowa. It is educational for students to understand the implication of a two-dimensional approximation for an essentially three-dimensional flow field, along with the consequent variation in both qualitative and quantitative results. Additionally, through this exercise, students may realize that the choice of turbulence model affects the simulation predictions. (author)

  2. Eye Tracking Meets the Process of Process Modeling: a Visual Analytic Approach

    DEFF Research Database (Denmark)

    Burattin, Andrea; Kaiser, M.; Neurauter, Manuel

    2017-01-01

    Research on the process of process modeling (PPM) studies how process models are created. It typically uses the logs of the interactions with the modeling tool to assess the modeler’s behavior. In this paper we suggest to introduce an additional stream of data (i.e., eye tracking) to improve the ...

  3. Modeling biochemical transformation processes and information processing with Narrator.

    Science.gov (United States)

    Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner

    2007-03-27

    Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is
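
    Since the record names Gillespie's direct method as one of the formalisms Narrator can map models to, a minimal, self-contained sketch of that method may help; the birth-death model below is hypothetical and unrelated to Narrator's own examples.

```python
import math
import random

# Minimal Gillespie direct-method sketch for a hypothetical birth-death
# model (production at rate k1, degradation at rate k2*X); Narrator's own
# mapping machinery is far richer than this.
def gillespie(x0, k1, k2, t_end, seed=1):
    random.seed(seed)
    t, x = 0.0, x0
    trace = [(t, x)]
    while t < t_end:
        a1, a2 = k1, k2 * x           # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
        if random.random() * a0 < a1:                # pick the firing reaction
            x += 1                                   # production event
        else:
            x -= 1                                   # degradation event
        trace.append((t, x))
    return trace

trace = gillespie(x0=0, k1=10.0, k2=0.5, t_end=20.0)
print(f"{len(trace) - 1} reaction events; final X = {trace[-1][1]}")
```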

  4. The Formalization of the Business Process Modeling Goals

    Directory of Open Access Journals (Sweden)

    Ligita Bušinska

    2016-10-01

    Full Text Available In business process modeling, the de facto standard BPMN has emerged. However, applications of this notation use many subsets of its elements and various extensions. BPMN also coexists with many other modeling languages, forming a large set of available options for business process modeling languages and dialects. While the modeler's goal is, in general, a central notion in the choice of modeling languages and notations, most studies that propose guidelines, techniques, and methods for evaluating and/or selecting business process modeling languages do not formalize the business process modeling goal or take it into account transparently. To overcome this gap, and to explicate and help handle the complexity of business process modeling, an approach to formalizing the business process modeling goal, and a supporting three-dimensional business process modeling framework, are proposed.

  5. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Full Text Available Modeling tools and operators help the user/developer identify the processing field at the top of the sequence and send into the computing module only the data related to the requested result. The remaining data are not relevant and would only slow down the processing. The biggest challenge nowadays is to obtain high-quality processing results with reduced computing time and costs. To do so, the processing sequence must be reviewed and several modeling tools added. Existing processing models do not take this aspect into consideration and focus on achieving high calculation performance, which increases computing time and costs. This paper provides a study of the main modeling tools for BigData and a new model based on pre-processing.

  6. Kopernik : modeling business processes for digital customers

    OpenAIRE

    Estañol Lamarca, Montserrat; Castro, Manuel; Díaz-Montenegro, Sylvia; Teniente López, Ernest

    2016-01-01

    This paper presents the Kopernik methodology for modeling business processes for digital customers. These processes require a high degree of flexibility in the execution of their tasks or actions. We achieve this by using the artifact-centric approach to process modeling and the use of condition-action rules. The processes modeled following Kopernik can then be implemented in an existing commercial tool, Balandra.

  7. Specification of e-business process model for PayPal online payment process using Reo

    OpenAIRE

    Xie, M.

    2005-01-01

    E-business process modeling allows business analysts to better understand and analyze the business processes, and eventually to use software systems to automate (parts of) these business processes to achieve higher profit. To support e-business process modeling, many business process modeling languages have been used as tools. However, many existing business process modeling languages lack (a) formal semantics, (b) formal computational model, and (c) an integrated view of the busi...

  8. Risk analysis applied to the development of petroleum fields; Analise de risco aplicada ao desenvolvimento de campos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Ana Paula A. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Shiozer, Denis J. [Universidade Estadual de Campinas, SP (Brazil)

    2004-07-01

    Decision analysis applied to the development phase of petroleum fields must take into account the risk associated with several types of uncertainties. In the transition from the appraisal to the development phase, the importance of risk associated with the recovery factor may increase significantly. The process is complex due to high investments, the large number of uncertain variables, and the strong dependence of the results on the production strategy definition. This complexity may, in several cases, make it difficult to establish reliable techniques to assess risk correctly, or it may demand great computational effort. Therefore, methodologies to quantify the impact of uncertainties are still not well established, because simplifications are necessary and the impact of such simplifications is not well known. The proposed work brings out the main aspects related to the validation of the simplifications necessary for quantifying the impact of uncertainties in the risk analysis process. The adopted techniques are divided into three groups: adoption of an automated process and use of parallel computing; simplification techniques in the treatment of attributes; and techniques for integrating geological uncertainties with the other types of uncertainties (economic, technological, and related to the production strategy). The integration of the geological uncertainties with the other uncertainties is made through the concept of representative models. The results show that the adopted criteria are good indicators of the viability of the methodology, improving the performance and reliability of the risk analysis process. (author)

  9. Process mining using BPMN: relating event logs and process models

    NARCIS (Netherlands)

    Kalenkova, A.A.; van der Aalst, W.M.P.; Lomazova, I.A.; Rubin, V.A.

    2017-01-01

    Process-aware information systems (PAIS) are systems relying on processes, which involve human and software resources to achieve concrete goals. There is a need to develop approaches for modeling, analysis, improvement and monitoring processes within PAIS. These approaches include process mining

  10. Process mining using BPMN : relating event logs and process models

    NARCIS (Netherlands)

    Kalenkova, A.A.; Aalst, van der W.M.P.; Lomazova, I.A.; Rubin, V.A.

    2015-01-01

    Process-aware information systems (PAIS) are systems relying on processes, which involve human and software resources to achieve concrete goals. There is a need to develop approaches for modeling, analysis, improvement and monitoring processes within PAIS. These approaches include process mining

  11. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article investigates the differences between business process modeling techniques; for each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation, and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  12. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  13. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  14. Neuroscientific Model of Motivational Process

    OpenAIRE

    Kim, Sung-il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub-processes: a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub-processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Rewa...

  15. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be described mathematically, and that it should be possible to create a tool to help process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area had already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed scientific publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore, it would be appropriate to add new measures of quality.

  16. Modeling of ETL-Processes and Processed Information in Clinical Data Warehousing.

    Science.gov (United States)

    Tute, Erik; Steiner, Jochen

    2018-01-01

    Literature describes a big potential for the reuse of clinical patient data, and a clinical data warehouse (CDWH) is a means to that end. The objective was to support the management and maintenance of the processes extracting, transforming and loading (ETL) data into CDWHs, and to ease the reuse of metadata between regular IT management, the CDWH and secondary data users, by providing a modeling approach. An expert survey and a literature review were conducted to find requirements and existing modeling techniques. An ETL modeling technique was developed by extending existing modeling techniques, and was evaluated by exemplarily modeling an existing ETL process and by a second expert survey. Nine experts participated in the first survey. The literature review yielded 15 included publications, and six existing modeling techniques were identified. A modeling technique extending 3LGM2 and combining it with openEHR information models was developed and evaluated; seven experts participated in the evaluation. The developed approach can help in the management and maintenance of ETL processes and could serve as an interface between regular IT management, the CDWH and secondary data users.

  17. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    Science.gov (United States)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic set of equations for all relevant physical processes in an earthquake fault system is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework, we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  18. Simulation Models of Human Decision-Making Processes

    Directory of Open Access Journals (Sweden)

    Nina RIZUN

    2014-10-01

    Full Text Available The main purpose of the paper is to present a new concept for modeling the human decision-making process via an analogy with Automatic Control Theory. From the author's point of view, this concept allows the theory of decision-making to be developed and improved in terms of studying and classifying the specificity of human intellectual processes under different conditions. It is argued that the main distinguishing feature between the Heuristic/Intuitive and Rational Decision-Making Models is the presence of a so-called phenomenon of "enrichment" of the input information with human propensities, hobbies, tendencies, expectations, axioms, judgments, presumptions or biases and their justification. In order to obtain additional knowledge about the basic intellectual processes, as well as the possibility of modeling decision results under various parameters characterizing the decision-maker, a complex of simulation models was developed. These models are based on the assumptions that the basic intellectual processes of the Rational Decision-Making Model can be adequately simulated and identified by the transient processes of a proportional-integral-derivative (PID) controller, and that the basic intellectual processes of the Bounded Rationality and Intuitive Models can be adequately simulated and identified by the transient processes of nonlinear elements. A taxonomy of the most typical automatic control theory elements and their correspondence to certain decision-making models, from the point of view of decision-making process specificity and decision-maker behavior over a certain period of professional activity, was obtained.
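
    The PID analogy above is concrete enough to sketch. Below is a hedged toy example (not the authors' simulator, and with illustrative gains only) in which a discrete PID controller drives a first-order plant to a setpoint; the transient response is what the record identifies with the trajectory of a rational decision converging on a goal.

```python
# Toy PID transient (illustrative gains; not the authors' simulator):
# a discrete PID controller drives a first-order plant toward a setpoint,
# the analogue the record draws to a rational decision converging on a goal.
def pid_step_response(kp=2.0, ki=1.0, kd=0.2, tau=1.0, dt=0.01, t_end=10.0):
    setpoint, y, integral, prev_err = 1.0, 0.0, 0.0, 1.0
    trace = []
    for k in range(int(t_end / dt)):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        y += dt * (u - y) / tau                     # first-order plant, Euler step
        prev_err = err
        trace.append((k * dt, y))
    return trace

trace = pid_step_response()
print(f"output after 10 s: {trace[-1][1]:.3f} (setpoint 1.0)")
```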

  19. User-guided discovery of declarative process models

    NARCIS (Netherlands)

    Maggi, F.M.; Mooij, A.J.; Aalst, van der W.M.P.; Chawla, N.; King, I.; Sperduti, A.

    2011-01-01

    Process mining techniques can be used to effectively discover process models from logs with example behaviour. Cross-correlating a discovered model with information in the log can be used to improve the underlying process. However, existing process discovery techniques have two important drawbacks.

  20. The NASA Ames Hypersonic Combustor-Model Inlet CFD Simulations and Experimental Comparisons

    Science.gov (United States)

    Venkatapathy, E.; Tokarcik-Polsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    Computations have been performed on a three-dimensional inlet associated with the NASA Ames combustor model for the hypersonic propulsion experiment in the 16-inch shock tunnel. The three-dimensional inlet was designed to make the combustor inlet flow nearly two-dimensional and of sufficient mass flow for combustion. The 16-inch shock tunnel experiment is a short-duration test with test times of the order of milliseconds. The flow through the inlet is in chemical non-equilibrium. Two test entries have been completed and limited experimental results for the inlet region of the combustor model are available. A number of CFD simulations, with various levels of simplification such as 2-D simulations, 3-D simulations with and without chemical reactions, and simulations with and without turbulent conditions, have been performed. These simulations have helped determine the model inlet flow characteristics, the important factors that affect the combustor inlet flow, and the sensitivity of the flow field to these simplifications. In the proposed paper, CFD modeling of the hypersonic inlet, results from the simulations, and comparisons with available experimental results will be presented.

  1. Deferred Action: Theoretical model of process architecture design for emergent business processes

    Directory of Open Access Journals (Sweden)

    Patel, N.V.

    2007-01-01

    Full Text Available E-business modelling and e-business systems development assume fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect business processes. Drawing on the development of theories of the 'action and design' class, the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered and the model's generic applicability is discussed.

  2. Sustainability of prevention practices at the workplace: safety, simplification, productivity and effectiveness.

    Science.gov (United States)

    Messineo, A; Cattaruzza, M S; Prestigiacomo, C; Giordano, F; Marsella, L T

    2017-01-01

    Traditional full-time employment has evolved into various types of occupational situations and, nowadays, new work-organization strategies have been developed. Previously overlooked risk factors have emerged, such as traffic accidents while commuting or during work hours, poor work organization, and detrimental lifestyles (like alcohol and substance abuse, although recent statistics seem to show a declining trend for the latter). The global scenario shows greater attention to occupational risks, but also a reduced degree of protection. Moreover, the elevated costs, the unacceptably high fatal accident rates in some sectors, the complexity of the prevention systems, the lack of prevention training, the inadequate controls (despite the numerous independent supervisory bodies) and the obsolescence of certain precepts call for a prompt review of the regulatory system. This is especially needed for general simplification, streamlining certification bodies, and minimizing references to other provisions in the legislation that make it difficult for Italian and foreign workers to read and understand the rules "without legal interpreters". "New" occupational diseases and occupational risk factors have also been reported, in addition to pollution. There are concerns about continued economic and social destabilization, unemployment, commuting, and temporary and precarious contracts, all of which contribute to a lack of wellbeing in the working population. Thus, the timing, duration, and types of prevention training should be carefully assessed, making prevention more appealing by evaluating costs and benefits, with widespread use of indicators that make appropriate health-promotion actions "visible" and thus encourage awareness. Although reducing prevention is never justified, it should still be economically "sustainable" in order to avoid wasting resources. It is also essential to have laws which are easily and consistently interpreted and to work on the ethics of

  3. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to handle information from patient requirements to usage, and from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability, and streamlined-process perspectives. Second, the paper translates this information into a business process model and mathematical formalization. The study provides a useful guide to the various relevant technology-related, management and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  4. Regime-switching models to study psychological process

    NARCIS (Netherlands)

    Hamaker, E.L.; Grasman, R.P.P.P.; Kamphuis, J.H.

    2010-01-01

    Many psychological processes are characterized by recurrent shifts between different states. To model these processes at the level of the individual, regime-switching models may prove useful. In this chapter we discuss two of these models: the threshold autoregressive model and the Markov

  5. Modeling biochemical transformation processes and information processing with Narrator

    Directory of Open Access Journals (Sweden)

    Palfreyman Niall M

    2007-03-01

    Full Text Available Abstract Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs, which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion Narrator is a

  6. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    Science.gov (United States)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

    A semi-empirical algorithm for the retrieval of soil moisture, root-mean-square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full, from the initialization of the unknowns to the retrievals. A sensitivity analysis is also performed, in which the initial values of the inversion vary randomly. The results show that the inversion process is not very sensitive to initial values: the majority of the retrievals have a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, a roughness of 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
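
    As a hedged illustration of the estimation step, the sketch below inverts a deliberately simplified, made-up linear three-channel forward model (standing in for the volume, double-bounce and surface components) with SciPy's Levenberg-Marquardt solver; all coefficients and values are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged toy inversion in the spirit of the record: a made-up linear
# three-channel forward model stands in for the volume, double-bounce and
# surface components; all coefficients and values are illustrative.
def forward(params):
    mv, h, b = params                    # soil moisture [%], RMS height [cm],
    return np.array([                    # biomass [Mg/ha]
        0.020 * mv + 0.05 * h + 0.0010 * b,   # sigma_HH
        0.001 * mv + 0.01 * h + 0.0020 * b,   # sigma_HV
        0.030 * mv + 0.04 * h + 0.0008 * b,   # sigma_VV
    ])

truth = np.array([40.0, 3.0, 161.0])     # the record's reference scenario
observed = forward(truth)

fit = least_squares(lambda p: forward(p) - observed,
                    x0=[20.0, 1.0, 50.0], method="lm")   # Levenberg-Marquardt
print("retrieved:", fit.x)               # close to `truth` for this toy case
```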

  7. Probabilistic modeling of discourse-aware sentence processing.

    Science.gov (United States)

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus.

  8. The (Mathematical) Modeling Process in Biosciences.

    Science.gov (United States)

    Torres, Nestor V; Santos, Guido

    2015-01-01

    In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model, and optimization and system management derived from the analysis of the mathematical model. Throughout this work, the main features and shortcomings of the process are analyzed, and a set of rules that could help in the task of modeling any biological system is presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.

  9. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code

  10. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life-cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.

  11. Revisited global drift fluid model for linear devices

    International Nuclear Information System (INIS)

    Reiser, Dirk

    2012-01-01

    The problem of energy-conserving global drift fluid simulations is revisited. It is found that for the case of cylindrical plasmas in a homogeneous magnetic field, a straightforward reformulation is possible that avoids simplifications leading to energetic inconsistencies. The particular new feature is the rigorous treatment of the polarisation drift by a generalization of the vorticity equation. The resulting set of model equations contains previous formulations as limiting cases and is suitable for efficient numerical techniques. Examples of applications to studies of plasma blobs and their impact on plasma-target interaction are presented. The numerical studies focus on the appearance of plasma blobs and intermittent transport and their consequences for the release of sputtered target materials into the plasma. Intermittent expulsion of particles in the radial direction can be observed, and it is found that although the neutrals released from the target show strong fluctuations in their propagation into the plasma column, the overall effect on time-averaged profiles is negligible for the conditions considered. In addition, the numerical simulations are utilised to perform an a posteriori assessment of the magnitude of energetic inconsistencies in previously used simplified models. It is found that certain popular approximations, in particular the use of simplified vorticity equations, do not significantly affect energetics. However, popular model simplifications with respect to parallel advection are found to significantly deteriorate model consistency.

  12. A simple model for predicting soil temperature in snow-covered and seasonally frozen soil: model description and testing

    Directory of Open Access Journals (Sweden)

    K. Rankinen

    2004-01-01

    Full Text Available Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures, so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981–August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R²-values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R²-values for the testing period were between 0.87 and 0.94 at a depth of 20 cm, and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models. Keywords: soil temperature, snow model
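
    A minimal sketch of a first-order scheme consistent with the four parameters listed above (thermal conductivity, soil heat capacity, an extra freezing/thawing capacity, and an empirical snow-damping parameter) is given below; the update form and all numeric values are illustrative assumptions, not the paper's calibrated model.

```python
import math

# Hedged sketch of a first-order soil-temperature update using the four
# parameters listed in the record; the update form and all numbers are
# illustrative assumptions, not the paper's calibrated scheme.
K_T   = 0.8      # soil thermal conductivity [W m-1 K-1]
C_A   = 1.0e6    # specific heat capacity of soil [J m-3 K-1]
C_ICE = 4.0e6    # extra apparent capacity from freezing/thawing [J m-3 K-1]
F_S   = -2.0     # empirical snow parameter [m-1]
DT    = 86400.0  # daily time step [s]

def step(t_soil, t_air, depth_m, snow_m):
    c = C_A + (C_ICE if t_soil <= 0.0 else 0.0)    # latent heat near 0 degC
    t_new = t_soil + K_T * DT / (c * (2.0 * depth_m) ** 2) * (t_air - t_soil)
    return t_new * math.exp(F_S * snow_m)          # snow damps the response

t = 5.0
for day, (t_air, snow) in enumerate([(-10.0, 0.0), (-10.0, 0.0), (-10.0, 0.3)]):
    t = step(t, t_air, depth_m=0.2, snow_m=snow)
    print(f"day {day}: soil T at 20 cm = {t:+.2f} degC")
```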

  13. Metrics for Business Process Models

    Science.gov (United States)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities, and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
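
    As a hedged illustration of what such complexity metrics look like, the sketch below computes two simple candidates on a toy process graph: model size and the coefficient of network connectivity (arcs per node). The graph and the metric choice are illustrative, not taken from the study.

```python
# Toy computation of two simple complexity metrics of the kind the study
# relates to error probability: model size (node count) and the coefficient
# of network connectivity, CNC = |arcs| / |nodes|. The process graph below
# is hypothetical, not one of the models analysed in the study.
process = {                              # node -> list of successor nodes
    "start":   ["check"],
    "check":   ["approve", "reject"],    # XOR split
    "approve": ["archive"],
    "reject":  ["archive"],
    "archive": [],
}

nodes = len(process)
arcs = sum(len(succ) for succ in process.values())
print(f"size = {nodes} nodes, {arcs} arcs, CNC = {arcs / nodes:.2f}")
```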

  14. Revising process models through inductive learning

    NARCIS (Netherlands)

    Maggi, F.M.; Corapi, D.; Russo, A.; Lupu, E.; Visaggio, G.; Muehlen, zur M.; Su, J.

    2011-01-01

    Discovering the Business Process (BP) model underpinning existing practices through analysis of event logs, allows users to understand, analyse and modify the process. But, to be useful, the BP model must be kept in line with practice throughout its lifetime, as changes occur to the business

  15. Diagnosing differences between business process models

    NARCIS (Netherlands)

    Dijkman, R.M.; Dumas, M.; Reichert, M.; Shan, M.-C.

    2008-01-01

    This paper presents a technique to diagnose differences between business process models in the EPC notation. The diagnosis returns the exact position of a difference in the business process models and diagnoses the type of a difference, using a typology of differences developed in previous work.

  16. AMFIBIA: A Meta-Model for the Integration of Business Process Modelling Aspects

    DEFF Research Database (Denmark)

    Axenath, Björn; Kindler, Ekkart; Rubin, Vladimir

    2007-01-01

    AMFIBIA is a meta-model that formalises the essential aspects and concepts of business processes. Though AMFIBIA is not the first approach to formalising the aspects and concepts of business processes, it is more ambitious in the following respects: Firstly, it is independent from particular modelling formalisms of business processes and it is designed in such a way that any formalism for modelling some aspect of a business process can be plugged into AMFIBIA. Therefore, AMFIBIA is formalism-independent. Secondly, it is not biased toward any aspect of business processes; the different aspects can be considered and modelled independently of each other. Moreover, AMFIBIA is not restricted to a fixed set of aspects; new aspects of business processes can be easily integrated. Thirdly, AMFIBIA does not only name and relate the concepts of business process modelling, as it is typically done...

  17. Model based process-product design and analysis

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to the development and application of systematic model-based solution approaches for product-process design are discussed, and the need for a hybrid model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types, forms and complexity, together with their associated parameters. An example of a model-based system for the design of chemicals-based formulated products is also given.

  18. Understanding Quality in Process Modelling: Towards a Holistic Perspective

    Directory of Open Access Journals (Sweden)

    Jan Recker

    2007-09-01

    Full Text Available Quality is one of the main topics in current conceptual modelling research, as is the field of business process modelling. Yet, widely acknowledged academic contributions towards an understanding or measurement of business process model quality are limited at best. In this paper I argue that the development of methodical theories concerning the measurement or establishment of process model quality must be preceded by methodological elaborations on business process modelling. I further argue that existing epistemological foundations of process modelling are insufficient for describing all extrinsic and intrinsic traits of model quality. This in turn has led to a lack of holistic understanding of process modelling. Taking into account the inherent social and purpose-oriented character of process modelling in contemporary organizations I present a socio-pragmatic constructionist methodology of business process modelling and sketch out implications of this perspective towards an understanding of process model quality. I anticipate that, based on this research, theories can be developed that facilitate the evaluation of the ’goodness’ of a business process model.

  19. Animated-simulation modeling facilitates clinical-process costing.

    Science.gov (United States)

    Zelman, W N; Glick, N D; Blackmore, C C

    2001-09-01

    Traditionally, the finance department has assumed responsibility for assessing process costs in healthcare organizations. To enhance process-improvement efforts, however, many healthcare providers need to include clinical staff in process cost analysis. Although clinical staff often use electronic spreadsheets to model the cost of specific processes, PC-based animated-simulation tools offer two major advantages over spreadsheets: they allow clinicians to interact more easily with the costing model so that it more closely represents the process being modeled, and they represent cost output as a cost range rather than as a single cost estimate, thereby providing more useful information for decision making.
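
    A hedged sketch of the record's "cost range" point: even a few stochastic task durations turn a single spreadsheet estimate into a distribution. All task names, durations and rates below are hypothetical.

```python
import random
import statistics

# Tiny Monte Carlo sketch of the record's point: stochastic task durations
# turn a single spreadsheet cost figure into a cost *range*. All durations
# and rates are hypothetical.
random.seed(42)

def one_visit_cost(rate_per_min=1.5):
    triage  = random.triangular(5, 20, 10)    # minutes: (low, high, mode)
    imaging = random.triangular(10, 45, 20)
    reading = random.triangular(5, 30, 12)
    return (triage + imaging + reading) * rate_per_min

costs = sorted(one_visit_cost() for _ in range(10_000))
lo, hi = costs[500], costs[9500]              # empirical 5th/95th percentiles
print(f"mean {statistics.mean(costs):.1f}, 90% range [{lo:.1f}, {hi:.1f}]")
```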

  20. How visual cognition influences process model comprehension

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2017-01-01

    Process analysts and other professionals extensively use process models to analyze business processes and identify performance improvement opportunities. Therefore, it is important that such models can be easily and properly understood. Previous research has mainly focused on two types of factors

  1. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented. The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  2. Calculation of the MSD two-step process with the sudden approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Shiro [Tohoku Univ., Sendai (Japan). Dept. of Physics; Kawano, Toshihiko [Kyushu Univ., Advanced Energy Engineering Science, Kasuga, Fukuoka (Japan)

    2000-03-01

    A calculation of the two-step process with the sudden approximation is described. The Green's function which connects the one-step matrix element to the two-step one is represented in γ-space to avoid the on-energy-shell approximation. Microscopically calculated two-step cross sections are averaged together with an appropriate level density to give a two-step cross section. The calculated cross sections are compared with the experimental data; however, the calculation still contains several simplifications at this moment. (author)

  3. APROMORE : an advanced process model repository

    NARCIS (Netherlands)

    La Rosa, M.; Reijers, H.A.; Aalst, van der W.M.P.; Dijkman, R.M.; Mendling, J.; Dumas, M.; García-Bañuelos, L.

    2011-01-01

    Business process models are becoming available in large numbers due to their widespread use in many industrial applications such as enterprise and quality engineering projects. On the one hand, this raises a challenge as to their proper management: how can it be ensured that the proper process model

  4. The evolution of process-based hydrologic models

    NARCIS (Netherlands)

    Clark, Martyn P.; Bierkens, Marc F.P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R.N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-01-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this

  5. Software Process Validation: Quantitatively Measuring the Correspondence of a Process to a Model

    National Research Council Canada - National Science Library

    Cook, Jonathan E; Wolf, Alexander L

    1997-01-01

    .... When process models and process executions diverge, something significant is happening. The authors have developed techniques for uncovering and measuring the discrepancies between models and executions, which they call process validation...

  6. Distillation modeling for a uranium refining process

    Energy Technology Data Exchange (ETDEWEB)

    Westphal, B.R.

    1996-03-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed "cathode processing". The incremental distillation of electrolyte salt will be modeled both by an equilibrium expression and on a molecular basis, since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to deviations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process.
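
    As a hedged illustration of the "molecular basis" side of the record, the sketch below evaluates the standard Hertz-Knudsen-Langmuir expression for free evaporation into vacuum; the vapor pressure, molar mass and temperature are placeholder values, not data for the LiCl-KCl eutectic.

```python
import math

# Hedged sketch of the "molecular basis" side of the record: the
# Hertz-Knudsen-Langmuir expression for free evaporation into vacuum,
#   J = alpha * P_sat / sqrt(2 * pi * M * R * T)   [mol m-2 s-1].
# Vapor pressure, molar mass and temperature below are placeholders,
# not data for the LiCl-KCl eutectic.
R = 8.314  # gas constant [J mol-1 K-1]

def langmuir_flux(p_sat, molar_mass, temp_k, alpha=1.0):
    """Molar evaporation flux with negligible backpressure."""
    return alpha * p_sat / math.sqrt(2.0 * math.pi * molar_mass * R * temp_k)

j = langmuir_flux(p_sat=100.0, molar_mass=0.055, temp_k=1100.0)
print(f"evaporation flux ~ {j:.2f} mol m^-2 s^-1")
```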

  7. Model Process Control Language

    Data.gov (United States)

    National Aeronautics and Space Administration — The MPC (Model Process Control) language enables the capture, communication and preservation of a simulation instance, with sufficient detail that it can be...

  8. Correctness-preserving configuration of business process models

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Dumas, M.; Gottschalk, F.; Hofstede, ter A.H.M.; La Rosa, M.; Mendling, J.; Fiadeiro, J.; Inverardi, P.

    2008-01-01

    Reference process models capture recurrent business operations in a given domain such as procurement or logistics. These models are intended to be configured to fit the requirements of specific organizations or projects, leading to individualized process models that are subsequently used for domain

  9. Process models and model-data fusion in dendroecology

    Directory of Open Access Journals (Sweden)

    Joel eGuiot

    2014-08-01

    Full Text Available Dendrochronology (i.e. the study of annually dated tree-ring time series) has proved to be a powerful technique to understand tree-growth. This paper intends to show the interest of using ecophysiological modeling not only to understand and predict tree-growth (dendroecology) but also to reconstruct past climates (dendroclimatology). Process models have been used for several decades in dendroclimatology, but it is only recently that methods of model-data fusion have led to significant progress in modeling tree-growth as a function of climate and in reconstructing past climates. These model-data fusion (MDF) methods, mainly based on the Bayesian paradigm, have been shown to be powerful for both model calibration and model inversion. After a rapid survey of tree-growth modeling, we illustrate MDF with examples based on series of Southern France Aleppo pines and Central France oaks. These examples show that if plants experienced CO2 fertilization, this would have a significant effect on tree-growth which in turn would bias the climate reconstructions. This bias could be extended to other environmental non-climatic factors directly or indirectly affecting annual ring formation and not taken into account in classical empirical models, which supports the use of more complex process-based models. Finally, we conclude by showing the interest of the data assimilation methods applied in climatology to produce climate re-analyses.

  10. Catastrophe Insurance Modeled by Shot-Noise Processes

    Directory of Open Access Journals (Sweden)

    Thorsten Schmidt

    2014-02-01

    Full Text Available Shot-noise processes generalize compound Poisson processes in the following way: a jump (the shot) is followed by a decline (noise). This constitutes a useful model for insurance claims in many circumstances; claims due to natural disasters or self-exciting processes exhibit similar features. We give a general account of shot-noise processes with time-inhomogeneous drivers inspired by recent results in credit risk. Moreover, we derive a number of useful results for modeling and pricing with shot-noise processes. Besides this, we obtain some highly tractable examples which constitute a useful modeling tool for dynamic claims processes. The results can in particular be used for pricing Catastrophe Bonds (CAT bonds), a traded risk-linked security. Additionally, current results regarding the estimation of shot-noise processes are reviewed.
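
    A minimal simulation sketch of such a process, assuming a compound-Poisson driver and an exponential decay after each shot; the intensity, decay rate and claim-size distribution are invented for illustration, not calibrated to CAT-bond data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Shot-noise process: shots arrive as a Poisson process and each one
        # decays exponentially afterwards.  All parameters are assumptions.
        T, lam, decay = 10.0, 2.0, 1.5       # horizon, arrival intensity, decay rate
        n = rng.poisson(lam * T)             # number of shots on [0, T]
        arrival = np.sort(rng.uniform(0.0, T, n))
        size = rng.exponential(1.0, n)       # shot (claim) sizes

        def shot_noise(t):
            """S(t): every shot that has arrived, decayed since its arrival."""
            past = arrival <= t
            return float(np.sum(size[past] * np.exp(-decay * (t - arrival[past]))))

        for t in (2.5, 5.0, 10.0):
            print(f"S({t:4.1f}) = {shot_noise(t):.3f}")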

  11. Modeling Aspects Of Activated Sludge Processes Part I: Process Modeling Of Activated Sludge Flocculation And Sedimentation

    International Nuclear Information System (INIS)

    Ibrahim, H. I.; EI-Ahwany, A.H.; Ibrahim, G.

    2004-01-01

    Process modeling of activated sludge flocculation and sedimentation is reviewed, considering activated sludge floc characteristics such as morphology, viable and non-viable cell ratio, density, and water content. Bioflocculation and its kinetics were studied, considering the characteristics of bioflocculation and the theory of Divalent Cation Bridging, which describes the major role of cations in bioflocculation. Activated sludge flocculation process modeling was studied considering mass transfer limitations, from Clifft and Andrews, 1981, and Benefield and Molz, 1983, through Henze, 1987, up to Tyagi, 1996 and G. Ibrahim et al., 2002. Models of aggregation and breakage of flocs were studied by Spicer and Pratsinis, 1996, and Biggs, 2002. Size distribution of flocs influences mass transfer and biomass separation in the activated sludge process. Therefore, it is of primary importance to establish the role of specific process operation factors, such as sludge loading, dynamic sludge age and dissolved oxygen, on this distribution, with special emphasis on the formation of primary particles.

  12. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    Science.gov (United States)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
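
    The token-game semantics underlying Petri-net-based process models fits in a few lines of Python; the tiny "review then approve" net below is an invented example, not the XML-net toolset described in the chapter.

        # Minimal place/transition net: a transition fires when every input
        # place holds enough tokens, consuming inputs and producing outputs.
        marking = {"submitted": 1, "reviewed": 0, "approved": 0}
        transitions = {
            "review":  ({"submitted": 1}, {"reviewed": 1}),
            "approve": ({"reviewed": 1},  {"approved": 1}),
        }

        def enabled(name):
            pre, _ = transitions[name]
            return all(marking[p] >= w for p, w in pre.items())

        def fire(name):
            pre, post = transitions[name]
            assert enabled(name), f"{name} is not enabled"
            for p, w in pre.items():
                marking[p] -= w
            for p, w in post.items():
                marking[p] += w

        for t in ("review", "approve"):
            fire(t)
        print(marking)   # {'submitted': 0, 'reviewed': 0, 'approved': 1}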

  13. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  14. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

    Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model based controllers. Then a dynamic analysis has been performed in order to point out possible model simplifications that could lead to a more efficient run time implementation.

  15. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    Probabilistic properties of Cox processes of relevance for statistical modelling and inference are studied. Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment properties ... and point process operations such as thinning, displacements, and superpositioning. We also discuss how to simulate specific Cox processes. ...

  16. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  17. Online Rule Generation Software Process Model

    OpenAIRE

    Sudeep Marwaha; Alka Aroa; Satma M C; Rajni Jain; R C Goyal

    2013-01-01

    For production systems like expert systems, rule generation software can facilitate faster deployment. The software process model for rule generation using a decision tree classifier refers to the various steps required to be executed for the development of a web-based software model for decision rule generation. Royce's waterfall model has been used in this paper to explain the software development process. The paper presents the specific output of various steps of the modified wat...

  18. Fermentation process diagnosis using a mathematical model

    Energy Technology Data Exchange (ETDEWEB)

    Yerushalmi, L; Volesky, B; Votruba, J

    1988-09-01

    Intriguing physiology of a solvent-producing strain of Clostridium acetobutylicum led to the synthesis of a mathematical model of the acetone-butanol fermentation process. The model presented is capable of describing the process dynamics and the culture behavior during a standard and a substandard acetone-butanol fermentation. In addition to the process kinetic parameters, the model includes the culture physiological parameters, such as the cellular membrane permeability and the number of membrane sites for active transport of sugar. Computer process simulation studies for different culture conditions used the model, and quantitatively pointed out the importance of selected culture parameters that characterize the cell membrane behaviour and play an important role in the control of solvent synthesis by the cell. The theoretical predictions by the new model were confirmed by experimental determination of the cellular membrane permeability.

  19. Processing used nuclear fuel with nanoscale control of uranium and ultrafiltration

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, Ernest M.; Peruski, Kathryn M.; Prizio, Sarah E. [Department of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame, Notre Dame, IN 46556 (United States); Bridges, Andrea N.A.; Rudisill, Tracy S.; Hobbs, David T. [Savannah River National Laboratory, Aiken, SC 29808 (United States); Phillip, William A. [Department of Chemical and Biomolecular Engineering, University of Notre Dame, Notre Dame, IN 46556 (United States); Burns, Peter C., E-mail: pburns@nd.edu [Department of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame, Notre Dame, IN 46556 (United States); Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, IN 46556 (United States)

    2016-05-15

    Current separation and purification technologies utilized in the nuclear fuel cycle rely primarily on liquid–liquid extraction and ion-exchange processes. Here, we report a laboratory-scale aqueous process that demonstrates nanoscale control for the recovery of uranium from simulated used nuclear fuel (SIMFUEL). The selective, hydrogen peroxide induced oxidative dissolution of SIMFUEL material results in the rapid assembly of persistent uranyl peroxide nanocluster species that can be separated and recovered at moderate to high yield from other process-soluble constituents using sequestration-assisted ultrafiltration. Implementation of size-selective physical processes like filtration could result in an overall simplification of nuclear fuel cycle technology, improving the environmental consequences of nuclear energy and reducing the costs of processing. - Highlights: • Nanoscale control in irradiated fuel reprocessing. • Ultrafiltration to recover uranyl cage clusters. • Alternative to solvent extraction for uranium purification.

  20. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
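
    A sketch of the core idea, assuming a functional fault model can be reduced to a directed graph along which failure effects propagate from failure modes to observation points; the component and sensor names are hypothetical, not taken from the AGSM IHM project.

        from collections import deque

        # Toy functional fault model: failure effects propagate along the
        # directed edges below.  The graph is an invented example.
        edges = {
            "valve_stuck":   ["low_flow"],
            "pump_degraded": ["low_flow", "high_current"],
            "low_flow":      ["low_pressure_sensor"],
            "high_current":  ["current_sensor"],
        }

        def affected_observations(failure_mode, observations):
            """Breadth-first propagation of a failure effect through the model."""
            seen, queue = {failure_mode}, deque([failure_mode])
            while queue:
                node = queue.popleft()
                for nxt in edges.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return sorted(seen & observations)

        obs = {"low_pressure_sensor", "current_sensor"}
        print(affected_observations("pump_degraded", obs))
        # ['current_sensor', 'low_pressure_sensor']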

  1. Deterministic geologic processes and stochastic modeling

    International Nuclear Information System (INIS)

    Rautman, C.A.; Flint, A.L.

    1992-01-01

    This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.

  2. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    In this paper the basic principles of path modeling are presented. The mathematics is presented for processes having only one stage, having two stages, and having three or more stages. The methods are applied to a process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. Control can ... be performed regarding the foreseeable output property y, and with respect to an admissible range of correcting actions for the parameters of the next stage. It is shown how the methods of path modeling can be applied to estimate variables of the next stage with the purpose of obtaining optimal or almost optimal quality of the output variable. An important aspect of the methods presented is the possibility of extensive graphic analysis of data that can provide the engineer with a detailed view of the multi-variate variation in data.

  3. Using Rapport, Mirroring, Aligning, Guidance Technique from NLP for Improving the Activity of the Organization

    Directory of Open Access Journals (Sweden)

    Iosif Cornel Marian

    2011-12-01

    Full Text Available The new technique allows the construction and use of situational models and the optimization of the information collected; it also reduces the time needed to apply the information obtained in a given context. Using keywords and cognitive processes leads to the identification of resistances in the communication process, and thereby to a simplification of the communication process both in terms of content and of the message. Keywords: optimization, efficient, personal situational model, support situational model, environmental situational model

  4. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  5. A sensitivity study of the thermomechanical far-field model of Yucca Mountain

    International Nuclear Information System (INIS)

    Brandshaug, T.

    1991-04-01

    A sensitivity study has been conducted investigating the predicted thermal and mechanical behavior of the far-field model of a proposed nuclear waste repository at Yucca Mountain. The model input parameters and phenomena that have been investigated include areal power density, thermal conductivity, specific heat capacity, material density, pore water boiling, stratigraphic and topographic simplifications, Young's modulus, Poisson's ratio, coefficient of thermal expansion, in situ stress, rock matrix cohesion, rock matrix angle of internal friction, rock joint cohesion, and rock joint angle of internal friction. Using the range in values currently associated with these parameters, predictions were obtained for rock temperatures, stresses, matrix failure, and joint activity throughout the far-field model. Results show that the range considered for the areal power density has the most significant effect on the predicted rock temperatures. The range considered for the in situ stress has the most significant effect on the prediction of rock stresses and factors-of-safety for the matrix and joints. Predictions of matrix and joint factors-of-safety are also influenced significantly by the use of stratigraphic and topographic simplifications. 16 refs., 75 figs., 13 tabs

  6. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards...

  7. [The dual process model of addiction. Towards an integrated model?].

    Science.gov (United States)

    Vandermeeren, R; Hebbrecht, M

    2012-01-01

    Neurobiology and cognitive psychology have provided us with a dual process model of addiction. According to this model, behavior is considered to be the dynamic result of a combination of automatic and controlling processes. In cases of addiction the balance between these two processes is severely disturbed. Automated processes will continue to produce impulses that ensure the continuance of addictive behavior. Weak, reflective or controlling processes are both the reason for and the result of the inability to forgo addiction. The aim of this study was to identify features that are common to current neurocognitive insights into addiction and to psychodynamic views on addiction. The picture that emerges from research is not clear. There is some evidence that attentional bias has a causal effect on addiction. There is no evidence that automatic associations have a causal effect, but there is some evidence that automatic action-tendencies do have a causal effect. Current neurocognitive views on the dual process model of addiction can be integrated with an evidence-based approach to addiction and with psychodynamic views on addiction.

  8. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    Science.gov (United States)

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  9. Regulatory simplification of fission product chemistry

    International Nuclear Information System (INIS)

    Read, J.B.J.; Soffer, L.

    1986-01-01

    The requirements for design provisions intended to limit fission product escape during reactor accidents have been based since 1962 upon a small number of simply-stated assumptions. These assumptions permeate current reactor regulation, but are too simple to deal with the complex processes that can reasonably be expected to occur during real accidents. Potential chemical processes of fission products in severe accidents are compared with existing plant safety features designed to minimize off-site consequences, and the possibility of a new set of simply-stated assumptions to replace the 1962 set is discussed.

  10. Distillation modeling for a uranium refining process

    International Nuclear Information System (INIS)

    Westphal, B.R.

    1996-01-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed "cathode processing". The incremental distillation of electrolyte salt will be modeled by an equilibrium expression and on a molecular basis since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to aberrations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process.

  11. Specification of e-business process model for PayPal online payment process using Reo

    NARCIS (Netherlands)

    M. Xie

    2005-01-01

    textabstractE-business process modeling allows business analysts to better understand and analyze the business processes, and eventually to use software systems to automate (parts of) these business processes to achieve higher profit. To support e-business process modeling, many business process

  12. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    2010-01-01

    In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative haz...
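
    In symbols, the identity referenced in the abstract is the standard reduced-form relation between the intensity, the cumulative hazard and the survival probability:

        \[
          \Lambda(t) = \int_0^t \lambda_s \,\mathrm{d}s,
          \qquad
          \mathbb{P}(\tau > t) = \mathbb{E}\!\left[e^{-\Lambda(t)}\right].
        \]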

  13. Dimensional modeling: beyond data processing constraints.

    Science.gov (United States)

    Bunardzic, A

    1995-01-01

    The focus of information processing requirements is shifting from the on-line transaction processing (OLTP) issues to the on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of the real-time on-line transaction processing (which has already exceeded a level of up to 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying the Relational theory in the form of Entity-Relation model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as Relational theory provided for the transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries that are provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not seem to find enough merit in the relational database model to justify relying upon it. Relational theory proved too rigid and complex to accommodate the analytical processing needs. In order to satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised. All of them are pointing to the need for simplifying the highly non-intuitive mathematical constraints found

  14. Target-mediated drug disposition model and its approximations for antibody-drug conjugates.

    Science.gov (United States)

    Gibiansky, Leonid; Gibiansky, Ekaterina

    2014-02-01

    Antibody-drug conjugate (ADC) is a complex structure composed of an antibody linked to several molecules of a biologically active cytotoxic drug. The number of ADC compounds in clinical development now exceeds 30, with two of them already on the market. However, there is no rigorous mechanistic model that describes pharmacokinetic (PK) properties of these compounds. PK modeling of ADCs is even more complicated than that of other biologics as the model should describe distribution, binding, and elimination of antibodies with different toxin load, and also the deconjugation process and PK of the released toxin. This work extends the target-mediated drug disposition (TMDD) model to describe ADCs, derives the rapid binding (quasi-equilibrium), quasi-steady-state, and Michaelis-Menten approximations of the TMDD model as applied to ADCs, derives the TMDD model and its approximations for ADCs with load-independent properties, and discusses further simplifications of the system under various assumptions. The developed models are shown to describe data simulated from the available clinical population PK models of trastuzumab emtansine (T-DM1), one of the two currently approved ADCs. Identifiability of model parameters is also discussed and illustrated on the simulated T-DM1 examples.
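
    A minimal sketch of one of the approximation families discussed here, a one-compartment quasi-steady-state (QSS) TMDD system, written in Python; the rate constants are illustrative assumptions, not the published T-DM1 estimates.

        import numpy as np
        from scipy.integrate import solve_ivp

        # One-compartment QSS TMDD sketch for a generic antibody-like drug.
        # kel: linear elimination, kint: complex internalization,
        # ksyn/kdeg: target turnover, KSS: quasi-steady-state constant.
        kel, kint, ksyn, kdeg, KSS = 0.1, 0.5, 1.0, 0.2, 2.0

        def qss_tmdd(t, y):
            C, Rtot = y                       # free drug, total target
            bound = Rtot * C / (KSS + C)      # QSS drug-target complex
            dC = -kel * C - kint * bound
            dR = ksyn - kdeg * Rtot - (kint - kdeg) * bound
            return [dC, dR]

        sol = solve_ivp(qss_tmdd, (0.0, 50.0), [10.0, ksyn / kdeg],
                        t_eval=np.linspace(0.0, 50.0, 6))
        for t, C in zip(sol.t, sol.y[0]):
            print(f"t = {t:5.1f}  C = {C:8.4f}")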

  15. 3D City Models with Different Temporal Characteristica

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    ... traditional static city models and those models that are built for realtime applications. The difference between the city models applies both to the spatial modelling and also when using the phenomenon time in the models. If the city models are used in visualizations without any variation in time, or when ... built dynamic or a model suitable for visualization in realtime, it is required that modelling is done with level-of-detail and simplification of both the aesthetics and the geometry. If a temporal characteristic is combined with a visual characteristic, the situation can easily be seen as a t/v matrix, where t is the temporal characteristic or representation and v is the visual characteristic or representation.

  16. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...

  17. Neuroscientific Model of Motivational Process

    Science.gov (United States)

    Kim, Sung-il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment. PMID:23459598

  18. Neuroscientific model of motivational process.

    Science.gov (United States)

    Kim, Sung-Il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  19. Aerospace Materials Process Modelling

    Science.gov (United States)

    1988-08-01

    Continuous Cooling Transformation diagram (CCT diagram). When an IT diagram is used in the heat process modelling, we suppose a sudden (instantaneous) cooling ... processes. CE chooses instead to study thermo-mechanical properties referring to a CCT diagram; this is thought to be more reliable to give a true ... This determination is however based on the following approximations: i) A CCT diagram is valid only for the

  20. Implementation of Push Recovery Strategy Using Triple Linear Inverted Pendulum Model in “T-FloW” Humanoid Robot

    Science.gov (United States)

    Dimas Pristovani, R.; Raden Sanggar, D.; Dadet, Pramadihanto.

    2018-04-01

    Push recovery is a human behavior, a strategy to defend the body from an external force in any environment. This paper describes a push recovery strategy which uses a MIMO decoupled control system method. The dynamics system uses a quasi-dynamic system based on the triple linear inverted pendulum model (TLIPM). The analysis of the TLIPM uses the zero moment point (ZMP) calculation from the ZMP simplification in previous research. By using this simplification of the dynamics system, the control design can be simplified into three serial SISO loops with known and uncertain disturbance models in each inverted pendulum. Each pendulum has a different plan to damp the external force effect. In this experiment, a PID controller (closed loop) is used to tune the damping characteristic. The experimental results show a success rate of about 85.71% when using the push recovery control strategy (closed-loop control), against about 28.57% without it (open-loop control).
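
    As a hedged illustration of the control idea, the sketch below stabilizes a single linear inverted pendulum with a PID loop after an impulsive push; it is a one-pendulum stand-in for the paper's triple-pendulum (TLIPM) model, and all gains and parameters are invented.

        # Single linear-inverted-pendulum push recovery toy: a push is applied
        # as an initial CoM velocity and a PID loop drives the CoM offset back.
        g, z, dt = 9.81, 0.8, 0.002            # gravity, CoM height, time step
        kp, ki, kd = 60.0, 5.0, 12.0           # assumed PID gains
        x, v, integ = 0.0, 0.25, 0.0           # push applied as initial velocity

        for step in range(int(3.0 / dt)):
            err = -x                            # regulate CoM offset to zero
            integ += err * dt
            u = kp * err + ki * integ + kd * (-v)
            a = (g / z) * x + u                 # linearized pendulum + control
            v += a * dt
            x += v * dt
        print(f"CoM offset after 3 s: {x:+.4f} m")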

  1. Advanced oxidation processes: overall models

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, M. [Univ. de los Andes, Escuela Basica de Ingenieria, La Hechicera, Merida (Venezuela); Curco, D.; Addardak, A.; Gimenez, J.; Esplugas, S. [Dept. de Ingenieria Quimica. Univ. de Barcelona, Barcelona (Spain)

    2003-07-01

    Modelling AOPs implies considering all the steps included in the process, that is, mass transfer, kinetic (reaction) and light-absorption (luminic) steps. In this way, recent works develop models which relate the global reaction rate to catalyst concentration and radiation absorption. However, the application of such models requires knowing which is the controlling step for the overall process. In this paper, a simple method is explained which allows one to determine the controlling step. It is assumed that the reactor is divided into two hypothetical zones (dark and illuminated), and according to the experimental results, obtained by varying only the reaction volume, it can be decided whether reaction occurs only in the illuminated zone or in the whole reactor, including the dark zone. The photocatalytic degradation of phenol, using titania Degussa P-25 as catalyst, is studied as a model reaction. The preliminary results obtained are presented here, showing that, in this case, reaction seems to occur only in the illuminated zone of the photoreactor. A model is developed to explain this behaviour. (orig.)

  2. Modeling Business Processes in Public Administration

    Science.gov (United States)

    Repa, Vaclav

    During more than 10 years of its existence, business process modeling became a regular part of organization management practice. It is mostly regarded as a part of information system development or even as a way to implement some supporting technology (for instance, a workflow system). Although I do not agree with such a reduction of the real meaning of a business process, it is necessary to admit that information technologies play an essential role in business processes (see [1] for more information). Consequently, an information system is inseparable from a business process itself because it is a cornerstone of the general basic infrastructure of a business. This fact impacts all dimensions of business process management. One of these dimensions is the methodology, which postulates that information systems development provide business process management with exact methods and tools for modeling business processes. The methodology underlying the approach presented in this paper also has its roots in the information systems development methodology.

  3. On the simplifications for the thermal modeling of tilting-pad journal bearings under thermoelastohydrodynamic regime

    DEFF Research Database (Denmark)

    Cerda Varela, Alejandro Javier; Fillon, Michel; Santos, Ilmar

    2012-01-01

    formulation for inclusion of the heat transfer effects between oil film and pad surface. Such simplified approach becomes necessary when modeling the behavior of tilting-pad journal bearings operating on controllable lubrication regime. Three different simplified heat transfer models are tested, by comparing...... are strongly dependent on the Reynolds number for the oil flow in the bearing. For bearings operating in laminar regime, the decoupling of the oil film energy equation solving procedure, with no heat transfer terms included, with the pad heat conduction problem, where the oil film temperature is applied......The relevance of calculating accurately the oil film temperature build up when modeling tilting-pad journal bearings is well established within the literature on the subject. This work studies the feasibility of using a thermal model for the tilting-pad journal bearing which includes a simplified...

  4. Deep geological isolation of nuclear waste: numerical modeling of repository scale hydrology

    International Nuclear Information System (INIS)

    Dettinger, M.D.

    1980-04-01

    The Scope of Work undertaken covers three main tasks, described as follows: (Task 1) CDM provided consulting services to the University on modeling aspects of the study having to do with transport processes involving the local groundwater system near the repository and the flow of fluids and vapors through the various porous media making up the repository system. (Task 2) CDM reviewed literature related to repository design, concentrating on effects of the repository geometry, location and other design factors on the flow of fluids within the repository boundaries, drainage from the repository structure, and the eventual transport of radionuclides away from the repository site. (Task 3) CDM, in a joint effort with LLL personnel, identified generic boundary and initial conditions, identified processes to be modeled, and recommended a modeling approach with suggestions for appropriate simplifications and approximations to the problem, identifying important parameters necessary to model the processes. This report consists of two chapters and an appendix. The first chapter (Chapter III of the LLL report) presents a detailed description and discussion of the modeling approach developed in this project, its merits and weaknesses, and a brief review of the difficulties anticipated in implementing the approach. The second chapter (Chapter IV of the LLL report) presents a summary of a survey of researchers in the field of repository performance analysis and a discussion of that survey in light of the proposed modeling approach. The appendix is a review of the important physical processes involved in the potential hydrologic transport of radionuclides through, around and away from deep geologic nuclear waste repositories.

  5. Steady state HNG combustion modeling

    Energy Technology Data Exchange (ETDEWEB)

    Louwers, J.; Gadiot, G.M.H.J.L. [TNO Prins Maurits Lab., Rijswijk (Netherlands); Brewster, M.Q. [Univ. of Illinois, Urbana, IL (United States); Son, S.F. [Los Alamos National Lab., NM (United States); Parr, T.; Hanson-Parr, D. [Naval Air Warfare Center, China Lake, CA (United States)

    1998-04-01

    Two simplified modeling approaches are used to model the combustion of Hydrazinium Nitroformate (HNF, N{sub 2}H{sub 5}-C(NO{sub 2}){sub 3}). The condensed phase is treated by high activation energy asymptotics. The gas phase is treated by two limit cases: the classical high activation energy, and the recently introduced low activation energy approach. This results in simplification of the gas phase energy equation, making an (approximate) analytical solution possible. The results of both models are compared with experimental results of HNF combustion. It is shown that the low activation energy approach yields better agreement with experimental observations (e.g. regression rate and temperature sensitivity), than the high activation energy approach.

  6. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as a decision support for managing nonpoint source emissions. A common characteristic of this type of model is the demanding input of numerous state variables, which makes calibration more difficult and increases the effort and cost of implementing any simulation scenario. In this study the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% both on the training and on the evaluation set, allowing a significant simplification in conducting scenario analysis.
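
    The meta-modeling step generalizes readily: train a regressor on simulator input/output pairs and query the regressor instead of the simulator. The sketch below uses scikit-learn's MLPRegressor with a toy stand-in function in place of SWAT; the function and its parameters are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)

        # "Simulator" stand-in: a cheap nonlinear function of three scenario
        # parameters.  In practice these pairs would come from SWAT runs.
        X = rng.uniform(0.0, 1.0, size=(500, 3))
        y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        meta = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                            random_state=0).fit(X_tr, y_tr)
        print(f"R^2 on held-out scenarios: {meta.score(X_te, y_te):.3f}")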

  7. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2004-01-01

    This paper presents a process-group-contribution method to model, simulate and synthesize a flowsheet. The process-group based representation of a flowsheet together with a process "property" model are presented. The process-group based synthesis method is developed on the basis of the computer aided molecular design methods and gives the ability to screen numerous process alternatives without the need to use rigorous process simulation models. The process "property" model calculates the design targets for the generated flowsheet alternatives while a reverse modelling method (also developed) determines the design variables matching the target. A simple illustrative example highlighting the main features of the methodology is also presented.

  8. Software Engineering Laboratory (SEL) cleanroom process model

    Science.gov (United States)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  9. Business Model Process Configurations

    DEFF Research Database (Denmark)

    Taran, Yariv; Nielsen, Christian; Thomsen, Peter

    2015-01-01

    ... by developing (inductively) an ontological classification framework, in view of the BM process configurations typology developed. Design/methodology/approach – Given the inconsistencies found in the business model studies (e.g. definitions, configurations, classifications) we adopted the analytical induction...

  10. Product/Process (P/P) Models For The Defense Waste Processing Facility (DWPF): Model Ranges And Validation Ranges For Future Processing

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-09-25

    Radioactive high level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten foot tall by two foot diameter canisters. A unique “feed forward” statistical process control (SPC) was developed for this control rather than statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository.

  11. Improving the process of process modelling by the use of domain process patterns

    NARCIS (Netherlands)

    Koschmider, A.; Reijers, H.A.

    2015-01-01

    The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process

  12. Research on Process-oriented Spatio-temporal Data Model

    Directory of Open Access Journals (Sweden)

    XUE Cunjin

    2016-02-01

    Full Text Available According to the analysis of the present status and existing problems of spatio-temporal data models developed in the last 20 years, this paper proposes a process-oriented spatio-temporal data model (POSTDM), aiming at representing, organizing and storing continuous and gradually changing geographical entities. The dynamic geographical entities are graded and abstracted into process object series from their intrinsic characteristics, namely process objects, process stage objects, process sequence objects and process state objects. The logical relationships among process entities are further studied, and the structure of the UML models and storage is also designed. In addition, through the mechanisms of continuity and gradual change implicitly recorded by process objects, and the modes of their procedure interfaces offered by the customized ObjectStorageTable, the POSTDM can carry out process representation, storage and dynamic analysis of continuous and gradual geographic entities. Taking the process organization and storage of marine data as an example, a prototype system (consisting of an object-relational database and a functional analysis platform) is developed for validating and evaluating the model's practicability.

  13. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
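
    The forward (planning) half of such a model is ordinary value iteration over the MDP; the two-state, two-action example below is invented for illustration and is not the paper's measurement model itself.

        import numpy as np

        # Value iteration for a tiny MDP.  P[a, s, s'] are transition
        # probabilities, R[a, s] expected immediate rewards; all invented.
        P = np.array([
            [[0.9, 0.1], [0.2, 0.8]],  # action 0
            [[0.5, 0.5], [0.1, 0.9]],  # action 1
        ])
        R = np.array([[1.0, 0.0],
                      [2.0, -1.0]])
        gamma, V = 0.95, np.zeros(2)

        for _ in range(500):
            Q = R + gamma * (P @ V)    # Q[a, s] = R[a, s] + gamma * sum_s' P V
            V_new = Q.max(axis=0)      # greedy backup over actions
            if np.max(np.abs(V_new - V)) < 1e-9:
                break
            V = V_new
        print("V* =", np.round(V, 3), " policy =", Q.argmax(axis=0))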

  14. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
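
    A minimal simulation sketch of the proposed idea, assuming the corrosion rate holds a random level between exponentially distributed renewal times (a Poisson square wave) and the defect depth grows as the integral of that rate; all parameter values are invented, not fitted to inspection data.

        import numpy as np

        rng = np.random.default_rng(7)

        # Corrosion rate as a Poisson square wave: a random rate level held
        # over each exponentially distributed pulse; depth = integral of rate.
        T, lam = 30.0, 0.5             # horizon (yr), renewal intensity (1/yr)
        mean_rate = 0.1                # mean corrosion rate (mm/yr)

        t, depth = 0.0, 0.0
        while t < T:
            hold = rng.exponential(1.0 / lam)      # duration of this pulse
            rate = rng.exponential(mean_rate)      # rate level for the pulse
            depth += rate * min(hold, T - t)       # continuous growth
            t += hold
        print(f"Simulated corrosion depth after {T:.0f} yr: {depth:.2f} mm")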

  15. Model feedstock supply processing plants

    Directory of Open Access Journals (Sweden)

    V. M. Bautin

    2013-01-01

    Full Text Available A model of raw-material supply for processing enterprises that belong to a vertically integrated structure for the production and processing of dairy raw materials is developed. Its distinguishing feature is an orientation toward achieving a cumulative effect for the integrated structure, which acts as the criterion function; this function is maximized by optimizing capacities, volumes of deliveries of raw materials and their qualitative characteristics, costs of industrial processing of raw materials, and demand for dairy production.

  16. Assessing healthcare process maturity: challenges of using a business process maturity model

    NARCIS (Netherlands)

    Tarhan, A.; Turetken, O.; van den Biggelaar, F.J.H.M.

    2015-01-01

    Doi: 10.4108/icst.pervasivehealth.2015.259105 The quality of healthcare services is influenced by the maturity of healthcare processes used to develop it. A maturity model is an instrument to assess and continually improve organizational processes. In the last decade, a number of maturity models

  17. Computer modeling of lung cancer diagnosis-to-treatment process.

    Science.gov (United States)

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model for the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. The Markov chain models and their applications in healthcare are introduced, and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
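
    For the DES side, a single step of the diagnosis-to-treatment pathway can be sketched as a single-server FIFO queue via the Lindley recursion; the arrival and service rates below are assumptions for illustration, not clinic data.

        import numpy as np

        rng = np.random.default_rng(1)

        # One pathway step (e.g. a staging work-up) as a single-server queue.
        n = 10_000
        interarrival = rng.exponential(2.0, n)   # mean 2 days between referrals
        service = rng.exponential(1.5, n)        # mean 1.5 days per work-up

        wait = np.zeros(n)
        for i in range(1, n):
            # Lindley recursion: wait = leftover work at the arrival instant
            wait[i] = max(0.0, wait[i - 1] + service[i - 1] - interarrival[i])
        time_in_system = wait + service
        print(f"Mean delay at this step: {time_in_system.mean():.2f} days")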

  18. Use of multicriteria optimization and neural networks as methods for improving the quality of parts obtained by thermoplastic injection molding

    Directory of Open Access Journals (Sweden)

    Miguel Ángel Sellés Cantó

    2012-05-01

    Full Text Available The thermoplastic injection molding process is the most widely used plastic manufacturing process worldwide. In order to manufacture parts with the highest quality, a methodology that models the process using a neural network has been developed. The average error obtained in the model simplification is negligible, and the model was then validated with real tests. At the same time, the input variables that are most influential in the final quality of the part are determined.

  19. Developing an ASD Macroeconomic Model of the Stock Approach - With Emphasis on Bank Lending and Interest Rates -

    OpenAIRE

    Yamaguchi, Yokei

    2017-01-01

    The financial crisis in 2008 evidenced an over-simplification of the role of banks made in the majority of macroeconomic models. Based on the Accounting System Dynamics (ASD) modeling approach, the current research presents a model that incorporates banks as creators of deposits when making loans, as opposed to the conventional view of banks as intermediaries of existing money. The generic model thus developed consists of five sectors: production, household, banking, government and central bank to...

  20. Towards a universal competitive intelligence process model

    Directory of Open Access Journals (Sweden)

    Rene Pellissier

    2013-08-01

    Full Text Available Background: Competitive intelligence (CI) provides actionable intelligence, which provides a competitive edge in enterprises. However, without a proper process, it is difficult to develop actionable intelligence. There are disagreements about how the CI process should be structured. For CI professionals to focus on producing actionable intelligence, and to do so with simplicity, they need a common CI process model. Objectives: The purpose of this research is to review the current literature on CI, to look at the aims of identifying and analysing CI process models, and finally to propose a universal CI process model. Method: The study was qualitative in nature and content analysis was conducted on all identified sources establishing and analysing CI process models. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability, only peer-reviewed articles were used. Results: The findings reveal that the majority of scholars view the CI process as a cycle of interrelated phases. The output of one phase is the input of the next phase. Conclusion: The CI process is a cycle of interrelated phases. The output of one phase is the input of the next phase. These phases are influenced by the following factors: decision makers, process and structure, organisational awareness and culture, and feedback.

  1. Modeling and simulation of heterogeneous catalytic processes

    CERN Document Server

    Dixon, Anthony

    2014-01-01

    Heterogeneous catalysis and mathematical modeling are essential components of the continuing search for better utilization of raw materials and energy, with reduced impact on the environment. Numerical modeling of chemical systems has progressed rapidly due to increases in computer power, and is used extensively for analysis, design and development of catalytic reactors and processes. This book presents reviews of the state-of-the-art in modeling of heterogeneous catalytic reactors and processes: reviews by leading authorities in the respective areas; up-to-date reviews of the latest techniques in modeling of catalytic processes; a mix of US and European authors, as well as academic, industrial and research-institute perspectives; and connections between computational and experimental methods in some of the chapters.

  2. Synergy of modeling processes in the area of soft and hard modeling

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available The high complexity of production processes results in more frequent use of computer systems for their modeling and simulation. Process modeling helps to find optimal solutions, verify assumptions before implementation, and eliminate errors. In practice, modeling of production processes concerns two areas: hard modeling (based on differential equations of mathematical physics) and soft modeling (based on existing data). In the paper the possibility of a synergistic connection of these two approaches is indicated: hard modeling supported by the tools used in soft modeling. The aim is to significantly reduce the time needed to obtain final results from hard modeling. Tests were carried out in the Calibrate module of the NovaFlow&Solid (NF&S) simulation system in the frame of thermal analysis (ATAS-cup). The authors tested forecasting of NF&S output values (solidification time) on the basis of variable parameters of the thermal model (heat conduction, specific heat, density). The collected data were used as input to prepare a soft model using an MLP (Multi-Layer Perceptron) neural network regression model, as sketched below. The approach described above reduces the time of production process modeling with hard modeling and should encourage production companies to use it.
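    A minimal sketch of the soft-modeling step, assuming hypothetical training pairs in place of data exported from NF&S: an MLP regressor maps the three thermal parameters to solidification time. The synthetic target relation is a stand-in, not the NF&S physics.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical samples standing in for hard-model runs: columns are heat
# conduction, specific heat and density; the target is solidification time.
rng = np.random.default_rng(0)
X = rng.uniform([20.0, 400.0, 6800.0], [60.0, 900.0, 7600.0], size=(200, 3))
y = 1e5 / (2.0 * X[:, 0] + 0.5 * X[:, 1]) + 0.001 * X[:, 2]  # stand-in relation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
soft_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
soft_model.fit(X_train, y_train)
print("R^2 on held-out runs:", soft_model.score(X_test, y_test))
```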

  3. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. Earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
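    For illustration, the conditional intensity of a purely temporal self-exciting (Hawkes) process can be sketched as below; the spatial component and the truncated-exponential magnitude term of the full model are omitted, and all parameter values are hypothetical.

```python
import numpy as np

# Conditional intensity of a temporal Hawkes (self-exciting) process:
#   lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))
def conditional_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    past = event_times[event_times < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

events = np.array([1.2, 3.4, 3.9, 10.5])  # hypothetical event times
for t in (2.0, 4.0, 11.0):
    print(t, conditional_intensity(t, events))  # intensity rises after clusters
```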

  4. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  5. Modeling process-structure-property relationships for additive manufacturing

    Science.gov (United States)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale multi-physics process modeling starts from the micro-scale to establish a mechanistic heat source model, proceeds to the meso-scale models of individual powder particle evolution, and finally to the macro-scale model to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  6. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  7. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    Science.gov (United States)

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, offering potential application merit for GSH fermentation process modeling.

  8. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  9. The Role(s) of Process Models in Design Practice

    DEFF Research Database (Denmark)

    Iversen, Søren; Jensen, Mads Kunø Nyegaard; Vistisen, Peter

    2018-01-01

    This paper investigates how design process models are implemented and used in design-driven organisations. The archetypical theoretical framing of process models describes their primary role as guiding the design process, and assigning roles and deliverables throughout the process. We hypothesise that the process models also take more communicative roles in practice, both in terms of creating an internal design rationale, as well as demystifying the black box of design thinking to external stakeholders. We investigate this hypothesis through an interview study of four major Danish design-driven organisations, and analyse the different roles their archetypical process models take in their organisations. The main contribution is the identification of three, often overlapping roles, which design process models showed to assume in design-driven organisations: process guidance, adding transparency...

  10. Causally nonseparable processes admitting a causal model

    International Nuclear Information System (INIS)

    Feix, Adrien; Araújo, Mateus; Brukner, Caslav

    2016-01-01

    A recent framework of quantum theory with no global causal order predicts the existence of ‘causally nonseparable’ processes. Some of these processes produce correlations incompatible with any causal order (they violate so-called ‘causal inequalities’ analogous to Bell inequalities) while others do not (they admit a ‘causal model’ analogous to a local model). Here we show for the first time that bipartite causally nonseparable processes with a causal model exist, and give evidence that they have no clear physical interpretation. We also provide an algorithm to generate processes of this kind and show that they have nonzero measure in the set of all processes. We demonstrate the existence of processes which stop violating causal inequalities but are still causally nonseparable when mixed with a certain amount of ‘white noise’. This is reminiscent of the behavior of Werner states in the context of entanglement and nonlocality. Finally, we provide numerical evidence for the existence of causally nonseparable processes which have a causal model even when extended with an entangled state shared among the parties. (paper)

  11. Model-based analysis of digital radio frequency control systems for a heavy-ion synchrotron

    International Nuclear Information System (INIS)

    Spies, Christopher

    2013-12-01

    In this thesis, we investigate the behavior of different radio frequency control systems in a heavy-ion synchrotron, which act on the electrical fields used to accelerate charged particles, along with the longitudinal dynamics of the particles in the beam. Due to the large physical dimensions of the system, the required precision can only be achieved by a distributed control system. Since the plant is highly nonlinear and the overall system is very complex, a purely analytical treatment is not possible without introducing unacceptable simplifications. Instead, we use numerical simulation to investigate the system behavior. This thesis arises from a cooperation between the Institute of Microelectronic Systems at Technische Universitaet Darmstadt and the GSI Helmholtz Center for Heavy-Ion Research. A new heavy-ion synchrotron, the SIS100, is currently being built at GSI; its completion is scheduled for 2016. The starting point for the present thesis was the question whether a control concept previously devised at GSI is feasible - not only in the ideal case, but in the presence of parameter deviations, noise, and other disturbances - and how it can be optimized. In this thesis, we present a system model of a heavy-ion synchrotron. This model comprises the beam dynamics, the relevant components of the accelerator, and the relevant controllers as well as the communication between those controllers. We discuss the simulation techniques as well as several simplifications we applied in order to be able to simulate the model in an acceptable amount of time and show that these simplifications are justified. Using the model, we conducted several case studies in order to demonstrate the practical feasibility of the control concept, analyze the system's sensitivity towards disturbances and explore opportunities for future extensions. We derive specific suggestions for improvements from our results. Finally, we demonstrate that the model represents the physical reality

  12. MTpy - Python Tools for Magnetotelluric Data Processing and Analysis

    Science.gov (United States)

    Krieger, Lars; Peacock, Jared; Thiel, Stephan; Inverarity, Kent; Kirkby, Alison; Robertson, Kate; Soeffky, Paul; Didana, Yohannes

    2014-05-01

    We present the Python package MTpy, which provides functions for the processing, analysis, and handling of magnetotelluric (MT) data sets. MT is a relatively immature and not widely applied geophysical method in comparison to other geophysical techniques such as seismology. As a result, data processing within the academic MT community is not thoroughly standardised and is often based on a loose collection of software, adapted to the respective local specifications. We have developed MTpy to overcome the problems that arise from missing standards, and to simplify the general handling of MT data. MTpy is written in Python, and the open-source code is freely available from a GitHub repository. The setup follows the modular approach of successful geoscience software packages such as GMT or Obspy. It contains sub-packages and modules for the various tasks within the standard workflow of MT data processing and interpretation. In order to allow the inclusion of already existing and well established software, MTpy provides not only pure Python classes and functions, but also wrapping command-line scripts to run standalone tools, e.g. modelling and inversion codes. Our aim is to provide a flexible framework which is open for future dynamic extensions. MTpy has the potential to promote the standardisation of processing procedures and at the same time be a versatile supplement for existing algorithms. Here, we introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing, interpretation, and visualisation utilising MTpy on example data sets collected over different regions of Australia and the USA.

  13. A revised multi-Fickian moisture transport model to describe non-Fickian effects in wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund; Damkilde, Lars; Svensson, Staffan

    2007-01-01

    This paper presents a study and a refinement of the sorption rate model in a so-called multi-Fickian or multi-phase model. This type of model describes the complex moisture transport system in wood, which consists of separate water vapor and bound-water diffusion interacting through sorption. Sufficiently fast sorption allows the system to be simplified and modeled by a single Fickian diffusion equation. To determine the response of the system, the sorption rate model is essential. Here the function modeling the moisture-dependent adsorption rate is investigated based on existing experiments on thin wood...

  14. Neuroscientific Model of Motivational Process

    Directory of Open Access Journals (Sweden)

    Sung-Il Kim

    2013-03-01

    Full Text Available Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three subprocesses: a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous subprocesses, namely reward-driven approach, value-based decision making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area), in which basic stimulus-action associations are formed, and is classified as an automatic motivation to which relatively little attention is assigned. By contrast, value-based decision making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three subprocesses interact with each other by sending reward prediction error signals through the dopaminergic pathway from the striatum to the prefrontal cortex. The neuroscientific model of motivational processes suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  15. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of the Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. BPMN is thus widely applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models enabling their use for business process simulation.

  16. Model of diffusers / permeators for hydrogen processing

    International Nuclear Information System (INIS)

    Jacobs, W. D.; Hang, T.

    2008-01-01

    Palladium-silver (Pd-Ag) diffusers are mainstays of hydrogen processing. Diffusers separate hydrogen from inert species such as nitrogen, argon or helium. The tubing becomes permeable to hydrogen when heated to more than 250 C and a differential pressure is created across the membrane. Hydrogen diffuses better at higher temperatures. Experimental or experiential results have traditionally been the basis for determining or predicting a diffuser's performance. However, the process can be mathematically modeled, and comparison to experimental or other operating data can be used to improve the fit of the model. The goal is a reliable model-based diffuser system design, which will have impacts on tritium and hydrogen processing. A computer model has been developed to solve the differential equations for diffusion given the operating boundary conditions. The model was compared to operating data for a low pressure diffuser system. The modeling approach and the results are presented in this paper. (authors)
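    A common starting point for such a model (a sketch under simplifying assumptions, not the authors' code) is Richardson's equation, in which hydrogen flux through the Pd-Ag wall scales with the difference of the square roots of the upstream and downstream pressures and an Arrhenius-type permeability. All constants below are illustrative placeholders.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def permeability(T, phi0=1.0e-8, E_a=15e3):
    """Arrhenius-type permeability, mol m^-1 s^-1 Pa^-0.5 (placeholder constants)."""
    return phi0 * np.exp(-E_a / (R * T))

def hydrogen_flux(T, p_up, p_down, thickness=1.0e-4):
    """Richardson's equation: flux (mol m^-2 s^-1) across a wall of given thickness (m)."""
    return permeability(T) / thickness * (np.sqrt(p_up) - np.sqrt(p_down))

# Roughly 400 C operation with a strong pressure differential.
print(hydrogen_flux(T=673.0, p_up=2.0e5, p_down=1.0e3))
```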

  17. Models of transport processes in concrete

    International Nuclear Information System (INIS)

    Pommersheim, J.M.; Clifton, J.R.

    1991-01-01

    An approach being considered by the US Nuclear Regulatory Commission for disposal of low-level radioactive waste is to place the waste forms in concrete vaults buried underground. The vaults would need a service life of 500 years. Approaches for predicting the service life of the concrete of such vaults include the use of mathematical models. Mathematical models are presented in this report for the major degradation processes anticipated for the concrete vaults, which are corrosion of steel reinforcement, sulfate attack, acid attack, and leaching. The models mathematically represent rate-controlling processes, including diffusion, convection, and the reaction and sorption of chemical species. These models can form the basis for predicting the life of concrete under in-service conditions. 33 refs., 6 figs., 7 tabs
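    As one illustration of the diffusion-type models involved, the sketch below evaluates the classical error-function solution of Fick's second law for chloride ingress toward the reinforcement; the diffusion coefficient and surface concentration are illustrative placeholders, not values from the report.

```python
import math

# Chloride ingress by Fick's second law with constant surface concentration:
#   C(x, t) = C_s * erfc( x / (2 * sqrt(D * t)) )
def chloride_concentration(x_m, t_s, D=5e-12, C_s=0.6):
    """Concentration at depth x_m (m) after t_s (s); D in m^2/s, C_s in % binder mass."""
    return C_s * math.erfc(x_m / (2.0 * math.sqrt(D * t_s)))

cover = 0.05                      # 50 mm of concrete cover over the rebar
t = 100 * 365.25 * 24 * 3600      # 100 years, in seconds
print(chloride_concentration(cover, t))  # concentration reaching the rebar depth
```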

  18. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Full Text Available Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, a model generates new predictions that can be tested in experiments, thus producing new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought-Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  19. Beyond dual-process models: A categorisation of processes underlying intuitive judgement and decision making

    NARCIS (Netherlands)

    Glöckner, A.; Witteman, C.L.M.

    2010-01-01

    Intuitive-automatic processes are crucial for making judgements and decisions. The fascinating complexity of these processes has attracted many decision researchers, prompting them to start investigating intuition empirically and to develop numerous models. Dual-process models assume a clear

  20. Modelling of transport and biogeochemical processes in pollution plumes: Literature review of model development

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard

    2002-01-01

    A literature survey shows how biogeochemical (coupled organic and inorganic reaction processes) transport models are based on considering the complete biodegradation process as either a single- or a two-step process. It is demonstrated that some two-step process models rely on the Partial Equilibrium Approach (PEA). The PEA assumes the organic degradation step, and not the electron acceptor consumption step, is rate limiting. This distinction is not possible in one-step process models, where consumption of both the electron donor and acceptor are treated kinetically. A three-dimensional, two-step PEA model is developed. The model allows for Monod kinetics and biomass growth, features usually included only in one-step process models. The biogeochemical part of the model is tested for a batch system with degradation of organic matter under the consumption of a sequence of electron acceptors...

  1. Dual processing model of medical decision-making

    Science.gov (United States)

    2012-01-01

    Background: Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. Methods: We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and we show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results: We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. Conclusions: We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical

  2. Dual processing model of medical decision-making.

    Science.gov (United States)

    Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G

    2012-09-03

    Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical decision-making field, which is still to the
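    For reference, the system II (expected utility) side of such a model rests on the classical threshold relation: treat when P(disease) exceeds H/(H+B). A minimal sketch with illustrative utilities follows; it leaves out the system I moderation terms of the full model.

```python
# Classical expected-utility treatment threshold: treat when
#   P(disease) > P* = harm / (harm + benefit),
# where `benefit` is the net utility gain from treating the diseased and
# `harm` the net utility loss from treating the non-diseased.
def treatment_threshold(benefit: float, harm: float) -> float:
    return harm / (harm + benefit)

p_star = treatment_threshold(benefit=0.30, harm=0.05)  # illustrative utilities
print(f"Treat if P(disease) > {p_star:.2f}")
```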

  3. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
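    A minimal sketch of the time-transformation idea, assuming a hypothetical power-law operational time and a toy three-state intensity matrix (not the delirium application itself): on the operational scale the process is homogeneous, so transition probabilities follow from a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state transition intensity matrix; each row sums to zero.
Q = np.array([
    [-0.30,  0.25,  0.05],
    [ 0.10, -0.20,  0.10],
    [ 0.00,  0.00,  0.00],   # absorbing state
])

def g(t, kappa=1.5):
    """Hypothetical operational time transform u = g(t)."""
    return t ** kappa

def transition_probabilities(t1, t2):
    # Homogeneous on the operational scale: P(t1, t2) = expm(Q * (g(t2) - g(t1))).
    return expm(Q * (g(t2) - g(t1)))

print(transition_probabilities(1.0, 2.0))
```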

  4. Mathematical modeling of biomass fuels formation process

    International Nuclear Information System (INIS)

    Gaska, Krzysztof; Wandrasz, Andrzej J.

    2008-01-01

    The increasing demand for thermal and electric energy in many branches of industry and municipal management accounts for a drastic diminishing of natural resources (fossil fuels). Meanwhile, numerous technical processes produce a huge mass of wastes. A segregated and converted combustible fraction of the wastes, with relatively high calorific value, may be used as a component of formed fuels. The utilization of formed fuel components from segregated groups of waste in associated processes of co-combustion with conventional fuels yields significant savings from the partial replacement of fossil fuels, and reduces environmental pollution by limiting waste migration to the environment (soil, atmospheric air, surface and underground water). The realization of technological processes utilizing formed fuel in associated thermal systems should be qualified by technical criteria, meaning that elementary processes as well as factors of sustainable development, from a global viewpoint, must not be disturbed. The utilization of post-process waste should be preceded by detailed technical, ecological and economic analyses. In order to optimize the mixing process of fuel components, a mathematical model of the forming process was created. The model is defined as a group of data structures that uniquely identify a real process, together with algorithms that convert these data into a linear programming problem. The paper also presents the optimization of parameters in the process of forming fuels using a modified simplex algorithm with polynomial worktime. This model is a reference point for the numerical modeling of real processes, allowing a precise determination of the optimal elementary composition of formed fuel components, with assumed constraints and decision variables of the task
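    The blending step can be sketched as a small linear program; the component costs, calorific values and the quality constraint below are illustrative placeholders rather than the paper's data, and scipy's general linear programming solver stands in for the modified simplex algorithm described.

```python
import numpy as np
from scipy.optimize import linprog

# Toy blending problem: choose mass fractions of three waste-derived
# components to minimise cost while meeting a minimum calorific value.
cost = np.array([30.0, 45.0, 60.0])        # cost per tonne of each component
calorific = np.array([14.0, 18.0, 24.0])   # MJ/kg of each component

# linprog minimises c @ x subject to A_ub @ x <= b_ub and A_eq @ x == b_eq.
res = linprog(
    c=cost,
    A_ub=[-calorific], b_ub=[-17.0],        # blend must reach >= 17 MJ/kg
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],     # fractions sum to one
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)  # optimal fractions and the resulting blend cost
```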

  5. Process Model for Friction Stir Welding

    Science.gov (United States)

    Adams, Glynn

    1996-01-01

    Friction stir welding (FSW) is a relatively new process being applied for joining of metal alloys. The process was initially developed by The Welding Institute (TWI) in Cambridge, UK. The FSW process is being investigated at NASA/MSFC as a repair/initial weld procedure for fabrication of the super-light-weight aluminum-lithium shuttle external tank. The FSW investigations at MSFC were conducted on a horizontal mill to produce butt welds of flat plate material. The weldment plates are butted together and fixed to a backing plate on the mill bed. A pin tool is placed into the tool holder of the mill spindle and rotated at approximately 400 rpm. The pin tool is then plunged into the plates such that the center of the probe lies at one end of the line of contact between the plates and the shoulder of the pin tool penetrates the top surface of the weldment. The weld is produced by traversing the tool along the line of contact between the plates. A lead angle allows the leading edge of the shoulder to remain above the top surface of the plate. The work presented here is a first attempt at modeling a complex phenomenon. The mechanical aspects of conducting the weld process are easily defined and the process itself is controlled by relatively few input parameters. However, in the region of the weld, plasticizing and forging of the parent material occur. These are difficult processes to model. The model presented here addresses only variations in the radial dimension outward from the pin tool axis. Examinations of the grain structure of the weld reveal that a considerable amount of material deformation also occurs in the direction parallel to the pin tool axis of rotation, through the material thickness. In addition, measurements of the axial load on the pin tool demonstrate that the forging effect of the pin tool shoulder is an important process phenomenon. Therefore, the model needs to be expanded to account for the deformations through the material thickness and the

  6. Towards a structured process modeling method: Building the prescriptive modeling theory

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Gailly, F.; Grefen, P.W.P.J.; Poels, G.

    2017-01-01

    In their effort to control and manage processes, organizations often create process models. The quality of such models is not always optimal, because it is challenging for a modeler to translate her mental image of the process into a formal process description. In order to support this complex human

  7. Simplified modeling of liquid-liquid heat exchangers for use in control systems

    International Nuclear Information System (INIS)

    Laszczyk, Piotr

    2017-01-01

    Over the last decades, various models of heat exchange processes have been developed to capture their specific dynamic nature. These models have different degrees of complexity depending on modeling assumptions and simplifications. The complexity of a mathematical model can be critical when the model is to be the basis for deriving a control law, because it directly affects the complexity of the mathematical transformations and of the final control algorithm. In this paper, a simplified cross-convection model for a wide class of heat exchangers is suggested. Apart from very few reports so far, the properties of this modeling approach have never been investigated in detail. The concept for this model is derived from the fundamental principle of energy conservation and combined with a simple dynamical approximation in the form of ordinary differential equations. Within this framework, a simplified tuning procedure for the proposed model is suggested and verified for plate and spiral tube heat exchangers based on experimental data. The dynamical properties and stability of the suggested model are addressed, and a sensitivity analysis is also presented. It is shown that such a modeling approach preserves high modeling accuracy at very low numerical complexity. The validation results show that the suggested modeling and tuning method is useful for practical applications.
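    A minimal sketch of the kind of low-order model argued for here: a static effectiveness relation filtered through first-order dynamics. The effectiveness and time constant are assumed placeholder values, not the paper's identified parameters.

```python
from scipy.integrate import solve_ivp

EFF, TAU = 0.6, 25.0   # assumed effectiveness [-] and time constant [s]

def cold_outlet(t, T, T_cold_in=15.0, T_hot_in=70.0):
    # Steady-state target from the static effectiveness relation,
    # approached through first-order dynamics with time constant TAU.
    T_ss = T_cold_in + EFF * (T_hot_in - T_cold_in)
    return [(T_ss - T[0]) / TAU]

sol = solve_ivp(cold_outlet, (0.0, 200.0), y0=[15.0], max_step=1.0)
print(sol.y[0][-1])  # approaches 15 + 0.6 * (70 - 15) = 48 deg C
```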

  8. The Model of the Production Process for the Quality Management

    Directory of Open Access Journals (Sweden)

    Alot Zbigniew

    2017-02-01

    Full Text Available This article is a result of research on models of production processes for quality management and their identification. It discusses the classical model and the indicators for evaluating process capability, taking as its starting point the assumption of a normal distribution of the process characteristics. The division of process types proposed by the ISO 21747:2006 standard, which introduces models for non-stationary processes, is presented. A general process model that allows the statistical characteristics of any real process to be described precisely is proposed. Compared with the model proposed by the ISO 21747:2006 standard, it gives the opportunity for a more detailed description of the process characteristics and for determining the process capability. This model contains the type of process, its statistical distribution, and the method for determining the capability and performance (long-term capability) of the process. As one of the model elements, an own classification and a resulting set of process types are proposed. The classification follows the recommendations of ISO 21747:2006, introducing models for non-stationary processes. Beyond enabling a more precise description of the process characteristics, the set of process types can also be used to monitor the process.

  9. Aspect-Oriented Business Process Modeling with AO4BPMN

    Science.gov (United States)

    Charfi, Anis; Müller, Heiko; Mezini, Mira

    Many crosscutting concerns in business processes need to be addressed already at the business process modeling level such as compliance, auditing, billing, and separation of duties. However, existing business process modeling languages including OMG's Business Process Modeling Notation (BPMN) lack appropriate means for expressing such concerns in a modular way. In this paper, we motivate the need for aspect-oriented concepts in business process modeling languages and propose an aspect-oriented extension to BPMN called AO4BPMN. We also present a graphical editor supporting that extension.

  10. Dual processing model of medical decision-making

    OpenAIRE

    Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G

    2012-01-01

    Abstract Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administe...

  11. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
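    The decomposition step can be sketched with scikit-learn's affinity propagation implementation; the synthetic trajectories below stand in for plant measurements and are not from the Tennessee Eastman case study.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Cluster controlled variables by the similarity of their historical
# trajectories (rows = variables, columns = time samples).
rng = np.random.default_rng(1)
base_a = rng.normal(size=500)
base_b = rng.normal(size=500)
trajectories = np.vstack(
    [base_a + 0.1 * rng.normal(size=500) for _ in range(3)]
    + [base_b + 0.1 * rng.normal(size=500) for _ in range(3)]
)

ap = AffinityPropagation(random_state=0).fit(trajectories)
print(ap.labels_)  # variables sharing a label form one subsystem
```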

  12. Ontological Analysis of Integrated Process Models: testing hypotheses

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    2001-11-01

    Full Text Available Integrated process modeling is achieving prominence in helping to document and manage business administration and IT processes in organizations. The ARIS framework is a popular example of a framework for integrated process modeling, not least because it underlies the 800 or more reference models embedded in the world's most popular ERP package, SAP R/3. This paper demonstrates the usefulness of the Bunge-Wand-Weber (BWW) representation model for evaluating modeling grammars such as those constituting ARIS. It reports some initial insights gained from pilot testing Green and Rosemann's (2000) evaluative propositions. Even when considering all five views of ARIS, modelers have problems representing business rules, the scope and boundary of systems, and decomposing models. However, even though it is completely ontologically redundant, users still find the function view useful in modeling.

  13. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not always adequate, serve a useful role in product-process design. Here, the use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...

  14. Guided interaction exploration in artifact-centric process models

    NARCIS (Netherlands)

    van Eck, M.L.; Sidorova, N.; van der Aalst, W.M.P.

    2017-01-01

    Artifact-centric process models aim to describe complex processes as a collection of interacting artifacts. Recent development in process mining allow for the discovery of such models. However, the focus is often on the representation of the individual artifacts rather than their interactions. Based

  15. BUSINESS PROCESS MODELLING: A FOUNDATION FOR KNOWLEDGE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vesna Bosilj-Vukšić

    2006-12-01

    Full Text Available Knowledge management (KM) is increasingly recognised as a strategic practice of knowledge-intensive companies, becoming an integral part of an organisation's strategy to improve business performance. This paper provides an overview of business process modelling applications and analyses the relationship between business process modelling and knowledge management projects. It presents a case study of leading Croatian banks and an insurance company, discussing their practical experience in conducting business process modelling projects and investigating the opportunity for integrating a business process repository and organisational knowledge as the foundation for knowledge management system development.

  16. Towards a universal competitive intelligence process model

    Directory of Open Access Journals (Sweden)

    Rene Pellissier

    2013-08-01

    Objectives: The purpose of this research is to review the current literature on CI in order to identify and analyse CI process models, and finally to propose a universal CI process model. Method: The study was qualitative in nature and content analysis was conducted on all identified sources establishing and analysing CI process models. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability, only peer-reviewed articles were used. Results: The findings reveal that the majority of scholars view the CI process as a cycle of interrelated phases. The output of one phase is the input of the next phase. Conclusion: The CI process is a cycle of interrelated phases. The output of one phase is the input of the next phase. These phases are influenced by the following factors: decision makers, process and structure, organisational awareness and culture, and feedback.

  17. A linear model of population dynamics

    Science.gov (United States)

    Lushnikov, A. A.; Kagan, A. I.

    2016-08-01

    The Malthus process of population growth is reformulated in terms of the probability w(n,t) of finding exactly n individuals at time t, assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms of either Laguerre polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).
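    The quoted deviation from the Poisson distribution can be checked numerically by integrating the linear birth-death master equation on a truncated state space; the rates and initial population below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear birth-death master equation:
#   dw(n,t)/dt = b(n-1)w(n-1,t) + d(n+1)w(n+1,t) - (b+d) n w(n,t)
B, D, N_MAX = 1.0, 0.5, 200

def master(t, w):
    n = np.arange(N_MAX + 1)
    dw = -(B + D) * n * w
    dw[1:] += B * n[:-1] * w[:-1]   # inflow: births from state n-1
    dw[:-1] += D * n[1:] * w[1:]    # inflow: deaths from state n+1
    return dw

w0 = np.zeros(N_MAX + 1)
w0[10] = 1.0                        # start with exactly 10 individuals
sol = solve_ivp(master, (0.0, 2.0), w0, max_step=0.01)

w = sol.y[:, -1]
n = np.arange(N_MAX + 1)
mean = np.sum(n * w)
var = np.sum(n**2 * w) - mean**2
print(mean, var / mean)  # variance/mean well above 1: far from Poisson
```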

  18. Agrochemical fate models applied in agricultural areas from Colombia

    Science.gov (United States)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides in predominantly agricultural catchments can lead to severe problems for humans and the environment. Especially in developing countries, where overuse of agrochemicals is common and water quality monitoring at local and regional levels is incipient or lacking, models are needed for decision making and hot-spot identification. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that keeps model complexity appropriate while still representing the processes. We have developed a new model, Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the selected model with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  19. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and the algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and realistic prediction of pedagogical phenomena.

  20. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows updating the parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens's data are used to demonstrate the performance of this method in updating parameters of the chicken processing line model.
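    A generic sketch of such a Bayesian update, assuming a Beta prior on a single transfer probability and binomial count data; the prior parameters and counts are invented for illustration and are not the Berrang and Dickens data.

```python
from scipy import stats

# Expert-judgment prior on a transfer probability (e.g. the chance that
# Campylobacter is transferred from carcass to equipment at one stage).
prior = stats.beta(a=2.0, b=8.0)            # prior mean 0.2

# Hypothetical plant observations: transfers observed out of trials.
transfers, trials = 35, 100
posterior = stats.beta(a=2.0 + transfers, b=8.0 + (trials - transfers))

print(prior.mean(), posterior.mean())       # 0.20 -> roughly 0.34
print(posterior.interval(0.95))             # 95% credible interval
```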

  1. Kinetic and thermodynamic modelling of TBP synthesis processes

    International Nuclear Information System (INIS)

    Azzouz, A.; Attou, M.

    1989-02-01

    The present paper deals with kinetic and thermodynamic modeling of tributylphosphate (TBP) synthesis processes. Its aim is a purely comparative study of two different synthesis routes, i.e. direct and indirect esterification of butanol. The methodology involves two steps. The first step consists in approximating the curves which describe the process evolution and their dependence on the main parameters. The results gave a kinetic model of the rate of TBP yield. Further, on the basis of thermodynamic data concerning the various compounds involved, a theoretical model was derived. The calculations were carried out in Basic, and a mathematical interpolation method was applied to approximate the kinetic curves. The thermodynamic calculations were performed on the basis of Gibbs free energy using a VAX type computer and a VT240 terminal. The accuracy of the calculations was reasonable and within the norms. For each process, the confrontation of the two models leads to appreciable agreement. In the two processes, the thermodynamic models were similar, although the kinetic equations present different reaction orders. The reaction orders were determined by a mathematical method which consists in searching for the minimal difference between an empirical relation and a kinetic model with fixed order; in effect, the proposed model is tested at various reaction orders around the suspected value. The main conclusion of this work is that these processes are well fitted by the model without taking side-chain reactions into account. The process behaviour is like that of a single reaction, with a quasi-linear dependence of the yield rate on reaction time for both processes
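    The order-fitting idea can be sketched as below: integrate an nth-order rate law and choose the order that minimises the squared gap to an empirical yield curve. The rate constant and the synthetic "empirical" data are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

k = 0.15                              # placeholder rate constant
t_data = np.linspace(0.0, 20.0, 21)   # sampling times of the yield curve

def yield_curve(order):
    """Conversion-to-product curve for an nth-order rate law, c' = -k c^n."""
    def rate(t, c):
        return [-k * c[0] ** order]
    sol = solve_ivp(rate, (0.0, 20.0), [1.0], t_eval=t_data)
    return 1.0 - sol.y[0]

# Synthetic stand-in for the empirical data (true order 1.7 plus noise).
empirical = yield_curve(1.7) + np.random.default_rng(2).normal(0, 0.01, 21)

res = minimize_scalar(lambda n: np.sum((yield_curve(n) - empirical) ** 2),
                      bounds=(0.5, 3.0), method="bounded")
print("fitted reaction order:", res.x)
```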

  2. On the correlation between process model metrics and errors

    NARCIS (Netherlands)

    Mendling, J.; Neumann, G.; Aalst, van der W.M.P.; Grundy, J.; Hartmann, S.; Laender, S.; Maciaszek, L.; Roddick, J.F.

    2007-01-01

    Business process models play an important role for the management, design, and improvement of process organizations and process-aware information systems. Despite the extensive application of process modeling in practice there are hardly empirical results available on quality aspects of process

  3. On the Numerical Modeling of Confined Masonry Structures for In-plane Earthquake Loads

    Directory of Open Access Journals (Sweden)

    Mircea Barnaure

    2015-07-01

    Full Text Available The seismic design of confined masonry structures involves the use of numerical models. As there are many parameters that influence the structural behavior, these models can be very complex and unsuitable for the current design purposes of practicing engineers. Simplified models could lead to reasonably accurate results, but caution should be given to the simplification assumptions. An analysis of various parameters considered in the numerical modeling of confined masonry structural walls is made. Conclusions regarding the influence of simplified procedures on the results are drawn.

  4. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
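    For concreteness, the two classical LIP operations mentioned above can be written down directly; this sketch uses the standard gray-tone range M = 256 and shows the classical model, not the paper's GLIP generalisation.

```python
import numpy as np

M = 256.0  # gray-tone range of the classical LIP model

# Classical LIP operations on gray-tone values in [0, M):
#   addition:              a (+) b = a + b - a*b/M
#   scalar multiplication: c (*) a = M - M * (1 - a/M)**c
def lip_add(a, b):
    return a + b - a * b / M

def lip_scalar_mul(c, a):
    return M - M * (1.0 - a / M) ** c

a = np.array([0.0, 64.0, 128.0, 192.0])
print(lip_add(a, a))            # stays within [0, M), unlike ordinary addition
print(lip_scalar_mul(2.0, a))   # equals lip_add(a, a)
```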

  5. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen

    2013-01-01

    Full Text Available The rapid development of the digital content industry calls for online model libraries. For efficiency, user experience, and reliability of the model library, this paper designs a Web 3D model library system based on a cloud computing platform. Because complex models cause difficulties in real-time 3D interaction, we adopt model simplification and size-adaptive adjustment methods to make interaction with the system more efficient. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible by online users with good interactive experiences. The feasibility of the solution has been tested by experiments.

  6. A Companion Model Approach to Modelling and Simulation of Industrial Processes

    International Nuclear Information System (INIS)

    Juslin, K.

    2005-09-01

    Modelling and simulation provide huge possibilities if broadly taken up by engineers as a working method. However, when considering the launching of modelling and simulation tools in an engineering design project, they must be easy to learn and use. There is no time to write equations, to consult suppliers' experts, or to manually transfer data from one tool to another. The answer seems to be in the integration of easy to use and dependable simulation software with engineering tools. Accordingly, the modelling and simulation software shall accept as input such structured design information on industrial unit processes and their connections as provided by e.g. CAD software and product databases. The software technology, including the required specification and communication standards, is already available. Internet based service repositories make it possible for equipment manufacturers to supply 'extended products', including such design data as needed by engineers engaged in process and automation integration. A market niche is evolving for simulation service centres, operating in co-operation with project consultants, equipment manufacturers, process integrators, automation designers, plant operating personnel, and maintenance centres. The companion model approach for specification and solution of process simulation models, as presented herein, is developed from the above premises. The focus is on how to tackle real-world processes, which from the modelling point of view are heterogeneous, dynamic, very stiff, very nonlinear and only piecewise continuous, without extensive manual interventions by human experts. An additional challenge, to solve the arising equations fast and reliably, is dealt with as well. (orig.)

  7. From BPMN process models to BPEL web services

    NARCIS (Netherlands)

    Ouyang, C.; Aalst, van der W.M.P.; Dumas, M.; Hofstede, ter A.H.M.; Feig, E.; Kumar, A.

    2006-01-01

    The Business Process Modelling Notation (BPMN) is a graph-oriented language in which control and action nodes can be connected almost arbitrarily. It is supported by various modelling tools but so far no systems can directly execute BPMN models. The Business Process Execution Language for Web

  8. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  9. FY 2000 study report on the study on technological development of the chemical processes of the next generation; 2000 nendo jisedai kagaku process gijutsu kaihatsu ni kansuru chosa kenkyu hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The technological development of innovative chemical reaction processes is studied in order to achieve further energy savings and reductions in resource consumption and environmental loads. Described herein are the FY 2000 study results. The program for systematization of the next-generation chemical processes systematically organizes the ongoing projects and the subjects to be studied, based on the principles of simplification, and sets the study fields of organic bulk chemicals, organic fine chemicals, high-polymer materials and inorganic materials. The program for investigation of next-generation chemical processes reviews the creation and technological use of tailor-made biocatalysts, polymer materials which utilize wood resources, tailor-made reaction process engineering for handling fine particles in high-temperature reaction fields, production and processing of materials for high-performance polymer batteries, and extreme energy-saving processes for polyolefins, and proposes revisions. The newly proposed study themes include novel C1 catalytic processes toward minimal wastes, and high utilization of biotechnology for novel processes to create materials. (NEDO)

  10. QUALITY IMPROVEMENT MODEL AT THE MANUFACTURING PROCESS PREPARATION LEVEL

    Directory of Open Access Journals (Sweden)

    Dusko Pavletic

    2009-12-01

    Full Text Available The paper presents the basis for an operational quality improvement model at the manufacturing process preparation level. Numerous appropriate related quality assurance and improvement methods and tools are identified. Main manufacturing process principles are investigated in order to scrutinize one general model of the manufacturing process and to define a manufacturing process preparation level. Development and introduction of the operational quality improvement model are based on research conducted into the application possibilities of these methods and tools in real manufacturing processes in the shipbuilding and automotive industries. The basic model structure is described and presented by an appropriate general algorithm. The operational quality improvement model developed lays down the main guidelines for practical and systematic application of quality improvement methods and tools.

  11. Models of care and delivery

    DEFF Research Database (Denmark)

    Lundgren, Jens

    2014-01-01

    Marked regional differences in HIV-related clinical outcomes exist across Europe. Models of outpatient HIV care, including HIV testing, linkage and retention for positive persons, also differ across the continent, including examples of sub-optimal care. Even in settings with reasonably good outcomes, existing models are scrutinized for simplification and/or reduced cost. Outpatient HIV care models across Europe may be centralized to specialized clinics only, primarily handled by general practitioners (GP), or a mixture of the two, depending on the setting. Shared care models require oversight to ensure that primary responsibility is defined for the person's overall health situation, for screening of co-morbidities, defining indication to treat comorbidities, prescription of non-... Collaboration with community clinics for injecting drug-dependent persons is also being implemented. Key factors explaining...

  12. Residence time modeling of hot melt extrusion processes.

    Science.gov (United States)

    Reitz, Elena; Podhaisky, Helmut; Ely, David; Thommes, Markus

    2013-11-01

    The hot melt extrusion process is a widespread technique to mix viscous melts. The residence time of material in the process frequently determines the product properties. An experimental setup and a corresponding mathematical model were developed to evaluate residence time and residence time distribution in twin screw extrusion processes. The extrusion process was modeled as the convolution of a mass transport process described by a Gaussian probability function, and a mixing process represented by an exponential function. The residence time of the extrusion process was determined by introducing a tracer at the extruder inlet and measuring the tracer concentration at the die. These concentrations were fitted to the residence time model, and an adequate correlation was found. Different parameters were derived to characterize the extrusion process, including the dead time, the apparent mixing volume, and a transport-related axial mixing. A 2³ design of experiments was performed to evaluate the effect of powder feed rate, screw speed, and melt viscosity of the material on the residence time. All three parameters affect the residence time of material in the extruder. In conclusion, a residence time model was developed to interpret experimental data and to get insights into the hot melt extrusion process. Copyright © 2013 Elsevier B.V. All rights reserved.
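
    The convolution structure described above is easy to reproduce numerically. The sketch below is a minimal illustration, not the authors' implementation; the dead time t0, axial spread sigma and mixing time constant tau are assumed placeholder values, not fitted parameters from the paper.

```python
import numpy as np

# Time axis [s]; fine spacing keeps the discrete convolution accurate.
t = np.linspace(0.0, 600.0, 6001)
dt = t[1] - t[0]

# Assumed parameters: dead time, axial spread, and mixing time constant.
t0, sigma, tau = 120.0, 15.0, 40.0

# Plug-flow transport with axial dispersion: Gaussian centred on the dead time.
transport = np.exp(-0.5 * ((t - t0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Ideal mixing volume: exponential washout.
mixing = np.exp(-t / tau) / tau

# Residence time distribution as the convolution of the two mechanisms.
rtd = np.convolve(transport, mixing)[: t.size] * dt

# Mean residence time recovered from the distribution (about t0 + tau here).
mean_rt = np.trapz(t * rtd, t)
print(f"area = {np.trapz(rtd, t):.3f}, mean residence time = {mean_rt:.1f} s")
```

    Fitting measured tracer concentrations then amounts to adjusting t0, sigma and tau until this convolution matches the data.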

  13. Quasilinear Extreme Learning Machine Model Based Internal Model Control for Nonlinear Process

    Directory of Open Access Journals (Sweden)

    Dazi Li

    2015-01-01

    Full Text Available A new strategy for internal model control (IMC) is proposed using a regression algorithm of a quasilinear model with extreme learning machine (QL-ELM). Aimed at chemical processes with nonlinearity, the learning process of the internal model and inverse model is derived. The proposed QL-ELM is constructed as a linear ARX model with a complicated nonlinear coefficient. It shows good approximation ability and fast convergence. The complicated coefficients are separated into two parts. The linear part is determined by recursive least squares (RLS), while the nonlinear part is identified through the extreme learning machine. The parameters of the linear part and the output weights of the ELM are estimated iteratively. The proposed internal model control is applied to a CSTR process. The effectiveness and accuracy of the proposed method are extensively verified through numerical results.

  14. Mathematical model of seed germination process

    International Nuclear Information System (INIS)

    Gładyszewska, B.; Koper, R.; Kornarzyński, K.

    1999-01-01

    An analytical model of the seed germination process is described. The model, based on the proposed working hypothesis, leads by analogy to a law corresponding to the Verhulst-Pearl law known from the theory of population kinetics. The model was applied to describe the germination kinetics of tomato seeds, Promyk field cultivar, biostimulated by laser treatment. Close agreement between experimental and model data was obtained.
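
    The Verhulst-Pearl analogy above corresponds to logistic growth of the cumulative germination fraction. A minimal sketch follows; the parameter values are illustrative assumptions, not the fitted ones from the paper.

```python
import numpy as np

def germinated(t, K=0.95, r=0.35, t_m=5.0):
    """Cumulative germination fraction following a logistic
    (Verhulst-Pearl-type) law: K is the final germination fraction,
    r the rate constant [1/day], t_m the inflection time [days].
    All values here are illustrative assumptions."""
    return K / (1.0 + np.exp(-r * (t - t_m)))

days = np.arange(0, 15)
for d, g in zip(days, germinated(days)):
    print(f"day {d:2d}: {100 * g:5.1f} % germinated")
```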

  15. A linear time layout algorithm for business process models

    NARCIS (Netherlands)

    Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.

    2014-01-01

    The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is

  16. Towards Model Checking Stochastic Process Algebra

    NARCIS (Netherlands)

    Hermanns, H.; Grieskamp, W.; Santen, T.; Katoen, Joost P.; Stoddart, B.; Meyer-Kayser, J.; Siegle, M.

    2000-01-01

    Stochastic process algebras have been proven useful because they allow behaviour-oriented performance and reliability modelling. As opposed to traditional performance modelling techniques, the behaviour-oriented style supports composition and abstraction in a natural way. However, analysis of

  17. A Comprehensive and Harmonized Digital Forensic Investigation Process Model.

    Science.gov (United States)

    Valjarevic, Aleksandar; Venter, Hein S

    2015-11-01

    Performing a digital forensic investigation (DFI) requires a standardized and formalized process. There is currently no international standard, nor does a global, harmonized DFI process (DFIP) exist. The authors studied existing state-of-the-art DFIP models and concluded that there are significant disparities pertaining to the number of processes, the scope, the hierarchical levels, and concepts applied. This paper proposes a comprehensive model that harmonizes existing models. An effort was made to incorporate all types of processes proposed by the existing models, including those aimed at achieving digital forensic readiness. The authors introduce a novel class of processes called concurrent processes. This is a novel contribution that should, together with the rest of the model, enable more efficient and effective DFI, while ensuring admissibility of digital evidence. Ultimately, the proposed model is intended to be used for different types of DFI and should lead to standardization. © 2015 American Academy of Forensic Sciences.

  18. Ontological Model of Business Process Management Systems

    Science.gov (United States)

    Manoilov, G.; Deliiska, B.

    2008-10-01

    The activities which constitute business process management (BPM) can be grouped into five categories: design, modeling, execution, monitoring and optimization. Dedicated software packages for business process management systems (BPMS) are available on the market, but the efficiency of their exploitation depends on the ontological model used at development time and run time of the system. In the article, an ontological model of BPMS in the area of the software industry is investigated. The model building is preceded by conceptualization of the domain and a taxonomy of BPMS development. On the basis of the taxonomy, a simple online thesaurus is created.

  19. Dual processing model of medical decision-making

    Directory of Open Access Journals (Sweden)

    Djulbegovic Benjamin

    2012-09-01

    Full Text Available Abstract Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to the patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may dramatically drop below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. Conclusions We have developed the first dual processing model of medical decision-making that has potential to
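
    The threshold model referenced in this abstract can be made concrete in a few lines. The sketch below uses the classical Pauker-Kassirer form for the system II threshold; the 'affect' moderation term is a deliberately simple stand-in for the paper's system I machinery and is an assumption for illustration only.

```python
def therapeutic_threshold(benefit, harm):
    """Classical expected-utility (system II) treatment threshold:
    treat when P(disease) exceeds harm / (harm + benefit)."""
    return harm / (harm + benefit)

def moderated_threshold(benefit, harm, affect=0.0):
    """Illustrative system I moderation: a signed 'affect' in [-1, 1]
    inflates the perceived benefit or harm. This functional form is an
    assumption for demonstration, not the paper's model."""
    perceived_benefit = benefit * (1.0 + max(affect, 0.0))
    perceived_harm = harm * (1.0 - min(affect, 0.0))
    return perceived_harm / (perceived_harm + perceived_benefit)

b, h = 4.0, 1.0  # hypothetical net benefit and harm of treatment
print(f"system II threshold: {therapeutic_threshold(b, h):.2f}")
print(f"positive affect    : {moderated_threshold(b, h, +0.5):.2f}")  # lower -> overtreatment
print(f"negative affect    : {moderated_threshold(b, h, -0.5):.2f}")  # higher -> undertreatment
```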

  20. Workflow Patterns for Business Process Modeling

    NARCIS (Netherlands)

    Thom, Lucineia Heloisa; Lochpe, Cirano; Reichert, M.U.

    For its reuse advantages, workflow patterns (e.g., control flow patterns, data patterns, resource patterns) are increasingly attracting the interest of both researchers and vendors. Frequently, business process or workflow models can be assembled out of a set of recurrent process fragments (or

  1. [The contradictive tendencies in medical treatment of the Hellenistic age--diversity versus simplification, chronic extension (physical therapy) versus rapidity, humane medicine versus worldly success].

    Science.gov (United States)

    Che, Jayoung

    2008-06-01

    empiricism in reality tended to expedite the simplification of treatment. This tendency toward simplification corresponded to the contemporary needs of society, that is, speedy and effective treatment for the wounded in war, or for epidemics in the army, on farms of collective labour, or in overcrowded cities. The bigger the groups were, the more the methods of treatment were simplified, with individual conditions little accounted for. The empiricism then came to be united with anatomy, as anatomy, much developed in the course of curing the wounded in war, goes together with the simplification of medical treatment in hospitals of large scale. It can be said that the origin of the simplified definition of diseases goes back as far as the school of Knidos. On the other hand, in Hippocrates the drugs were in contrast to the diet. While the diet was to support health and rehabilitate physical condition, the drugs were to produce strong effects of change. Drugs such as poisons, eye-salves and ointments were made use of for rapid, effective change of physical state or for the treatment of a concrete, limited part of the body. These drugs were also much developed in the Hellenistic Age, a period of chronic war. In the initial stages, toxic drugs, as well as anatomy and surgical operations, must have been developed for peaceful purposes, such as 'theriaca' antidoting animal poison, or for easing childbirth. With the increase of social inequality and insatiable human desire, however, toxic drugs and anatomical knowledge came to be used for undesirable purposes. Thus, we cannot assess Hippocrates simply on the point of whether he developed scientific medicine or not. The great fame of Hippocrates should rather be found in his method of medical treatment as well as his principle of medicine, as he believed that medicine should not be exploited for worldly power or wealth but for the benefit of all the people. He pursued a healthy life matching to natural

  2. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    Science.gov (United States)

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  3. Simplification of antiretroviral therapy: a necessary step in the public health response to HIV/AIDS in resource-limited settings.

    Science.gov (United States)

    Vitoria, Marco; Ford, Nathan; Doherty, Meg; Flexner, Charles

    2014-01-01

    The global scale-up of antiretroviral therapy (ART) over the past decade represents one of the great public health and human rights achievements of recent times. Moving from an individualized treatment approach to a simplified and standardized public health approach has been critical to ART scale-up, simplifying both prescribing practices and supply chain management. In terms of the latter, the risk of stock-outs can be reduced, and simplified prescribing practices support task shifting of care to nursing and other non-physician clinicians; this strategy is critical to increasing access to ART care in settings where physicians are limited in number. In order to support such simplification, successive World Health Organization guidelines for ART in resource-limited settings have aimed to reduce the number of recommended options for first-line ART in such settings. Future drug and regimen choices for resource-limited settings will likely be guided by the same principles that have led to the recommendation of a single preferred regimen and will favour drugs that have the following characteristics: minimal risk of failure, efficacy and tolerability, robustness and forgiveness, no overlapping resistance in treatment sequencing, convenience, affordability, and compatibility with anti-TB and anti-hepatitis treatments.

  4. Orthogonality-condition model for bound states with a separable expansion of the potential

    International Nuclear Information System (INIS)

    Pal, K.F.

    1984-01-01

    A very efficient solution of the equation of Saito's orthogonality-condition model (OCM) is reported for bound states by means of a separable expansion of the potential (PSE method). Some simplifications of the published formulae of the PSE method are derived, which facilitate its application to the OCM and may be useful in solving the Schroedinger equation as well. (author)

  5. Mathematical Model of the Jet Engine Fuel System

    Directory of Open Access Journals (Sweden)

    Klimko Marek

    2015-01-01

    Full Text Available The paper discusses the design of a simplified mathematical model of the jet (turbo-compressor) engine fuel system. The solution will be based on the regulation law, where the control parameter is a fuel mass flow rate and the regulated parameter is the rotational speed. A differential equation of the jet engine and also differential equations of other fuel system components (fuel pump, throttle valve, pressure regulator) will be described, with respect to advanced predetermined simplifications.

  6. Mathematical Model of the Jet Engine Fuel System

    Science.gov (United States)

    Klimko, Marek

    2015-05-01

    The paper discusses the design of a simplified mathematical model of the jet (turbo-compressor) engine fuel system. The solution will be based on the regulation law, where the control parameter is a fuel mass flow rate and the regulated parameter is the rotational speed. A differential equation of the jet engine and also differential equations of other fuel system components (fuel pump, throttle valve, pressure regulator) will be described, with respect to advanced predetermined simplifications.
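
    A regulation loop of the kind described in these two records (fuel mass flow rate as the control parameter, rotational speed as the regulated parameter) can be sketched with a first-order rotor model and a proportional governor. The linearized dynamics and all numeric values below are assumptions for illustration, not taken from the paper.

```python
# Illustrative first-order engine dynamics; the time constant, gain and
# governor setting are assumed values, not the paper's.
T_engine = 2.5    # engine time constant [s]
k_engine = 900.0  # steady-state gain [rpm per kg/s of fuel]
kp = 0.02         # proportional governor gain [kg/s per rpm of error]

n_set = 9000.0    # speed setpoint [rpm]
n = 7000.0        # initial rotational speed [rpm]
dt = 0.01         # integration step [s]

for step in range(int(20.0 / dt)):
    q_fuel = max(kp * (n_set - n), 0.0)          # governor: fuel flow from speed error
    dn_dt = (k_engine * q_fuel - n) / T_engine   # linearized rotor dynamics
    n += dn_dt * dt                              # explicit Euler step
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f} s  n = {n:7.1f} rpm")
```

    The steady-state offset visible in the output is the expected behaviour of a purely proportional governor; the paper's regulation law would determine the actual controller structure.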

  7. The Formalization of the Business Process Modeling Goals

    OpenAIRE

    Bušinska, Ligita; Kirikova, Mārīte

    2016-01-01

    In business process modeling the de facto standard BPMN has emerged. However, the applications of this notation have many subsets of elements and various extensions. Also, BPMN still coincides with many other modeling languages, forming a large set of available options for business process modeling languages and dialects. While, in general, the goal of modelers is a central notion in the choice of modeling languages and notations, in most researches that propose guidelines, techniques, and me...

  8. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

    ... The extended choice framework includes more behavioral richness through the explicit representation of the planning process preceding an action and its dynamics and the effects of context (family, friends, and market) on the process leading to a choice, as well as the inclusion of new types of subjective data in choice models. We discuss the key issues involved in applying the extended framework, focusing on richer data requirements, theories, and models, and present three partial demonstrations of the proposed framework. Future research challenges include the development of more comprehensive empirical tests

  9. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP...

  10. Business process modeling using Petri nets

    NARCIS (Netherlands)

    Hee, van K.M.; Sidorova, N.; Werf, van der J.M.E.M.; Jensen, K.; Aalst, van der W.M.P.; Balbo, G.; Koutny, M.; Wolf, K.

    2013-01-01

    Business process modeling has become a standard activity in many organizations. We start with going back into the history and explain why this activity appeared and became of such importance for organizations to achieve their business targets. We discuss the context in which business process

  11. Reduction and technical simplification of testing protocol for walking based on repeatability analyses: An Interreg IVa pilot study

    Directory of Open Access Journals (Sweden)

    Nejc Sarabon

    2010-12-01

    Full Text Available The aim of this study was to define the most appropriate gait measurement protocols to be used in our future studies in the Mobility in Ageing project. A group of young healthy volunteers took part in the study. Each subject carried out a 10-metre walking test at five different speeds (preferred, very slow, very fast, slow, and fast). Each walking speed was repeated three times, making a total of 15 trials, which were carried out in a random order. Each trial was simultaneously analysed by three observers using three different technical approaches: a stop watch, photo cells and an electronic kinematic dress. In analysing the repeatability of the trials, the results showed that three of the five self-selected walking speeds (preferred, very fast, and very slow) had a significantly higher repeatability of the average walking velocity, step length and cadence than the other two speeds. Additionally, the data showed that one of the three technical methods for gait assessment has better metric characteristics than the other two. In conclusion, based on repeatability and on technical and organizational simplification, this study helped us to successfully define a simple and reliable walking test to be used in the main study of the project.

  12. Modeling spatial processes with unknown extremal dependence class

    KAUST Repository

    Huser, Raphaë l G.; Wadsworth, Jennifer L.

    2017-01-01

    Many environmental processes exhibit weakening spatial dependence as events become more extreme. Well-known limiting models, such as max-stable or generalized Pareto processes, cannot capture this, which can lead to a preference for models

  13. Process modeling study of the CIF incinerator

    International Nuclear Information System (INIS)

    Hang, T.

    1995-01-01

    The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in the areas of safety, environmental regulation compliance, process improvement and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in the system equipment, performance of the control system, air in-leakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment.

  14. Parameter identification in multinomial processing tree models

    NARCIS (Netherlands)

    Schmittmann, V.D.; Dolan, C.V.; Raijmakers, M.E.J.; Batchelder, W.H.

    2010-01-01

    Multinomial processing tree models form a popular class of statistical models for categorical data that have applications in various areas of psychological research. As in all statistical models, establishing which parameters are identified is necessary for model inference and selection on the basis

  15. Study on a Process-oriented Knowledge Management Model

    OpenAIRE

    Zhang, Lingling; Li, Jun; Zheng, Xiuyu; Li, Xingsen; Shi, Yong

    2007-01-01

    Now knowledge has become the most important resource of enterprises. Process-oriented knowledge management (POKM) is a new and valuable research field, and may be the most practical method for dealing with difficulties in knowledge management. The paper analyzes the background, hypotheses and purposes of POKM, defines process knowledge, and gives a process-oriented knowledge management model. The model integrates knowledge, process, human, and technology. It can improve the decision support capabili...

  16. Soil process-oriented modelling of within-field variability based on high-resolution 3D soil type distribution maps.

    Science.gov (United States)

    Bönecke, Eric; Lück, Erika; Gründling, Ralf; Rühlmann, Jörg; Franko, Uwe

    2016-04-01

    Today, knowledge of within-field variability is essential for numerous purposes, including practical issues such as precision and sustainable soil management. Process-oriented soil models have therefore been applied for a considerable time to answer questions of spatial soil nutrient and water dynamics, although they can only be as consistent as the variation and resolution of their soil input data allow. Traditional approaches describe the distribution of soil types, soil texture or other soil properties for larger soil units through generalized point information, e.g. from classical soil survey maps. Such simplifications are known to be afflicted with large uncertainties: varying soil, crop or yield conditions are detected even within such homogenized soil units. However, recent advances in non-invasive soil survey and on-the-go monitoring techniques have made it possible to obtain dense vertical and horizontal (3D) information about various soil properties, particularly soil texture distribution, which serves as an essential key soil variable affecting various other soil properties. Thus, in this study we based our simulations on detailed 3D soil type distribution (STD) maps (4x4 m), from which sufficiently informative soil profiles, including various soil physical and chemical properties, were built up. Our estimates of spatial STD are based on high-resolution lateral and vertical changes of electrical resistivity (ER), detected by a relatively new multi-sensor on-the-go ER monitoring device. We applied an algorithm including fuzzy c-means (FCM) logic and traditional soil classification to estimate STD from the inverted, layer-wise available ER data. The STD is then used as the key input parameter for our carbon, nitrogen and water transport model. We identified pedological horizon depths and inferred hydrological soil variables (field capacity, permanent wilting point) from pedotransfer functions (PTFs) for each horizon. Furthermore, the spatial distribution of soil organic carbon

  17. Integration process and logistics results

    International Nuclear Information System (INIS)

    2004-01-01

    The Procurement and Logistics functions have gone through a process of integration from the beginning of integrated management of Asco and Vandellos II up to the present. These are functions that lend themselves to delivering a single product to the rest of the organization, defined from a high level of expectations, and that admit simplifications and the materialization of synergies when approached from an integrated perspective. The functions analyzed are as follows: Service and Material Purchasing, Warehouse and Material Management, and Documentation and General Services Management. In all cases, to accomplish the integration, objectives, procedures and information systems were unified. As for the organization, a decision was made in each case on whether or not to outsource. The decisive corporate strategy to integrate, resulting in actions such as moving corporate headquarters to Vandellos II, corporate consolidation, regulation of employment and implementation of the ENDESA Group Economic Information System (SIE), has shaped this process, which at present can be considered practically complete. (Author)

  18. Comparing single- and dual-process models of memory development.

    Science.gov (United States)

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini () in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.
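
    For readers unfamiliar with the unequal-variance signal detection model favoured by this analysis, the sketch below shows how hit and false-alarm rates follow from its two Gaussian strength distributions. The parameter values are illustrative, not the ones fitted to the developmental data.

```python
from scipy.stats import norm

def uvsd_rates(d_prime, sigma_old, criterion):
    """Unequal-variance signal detection: new items ~ N(0, 1),
    old items ~ N(d_prime, sigma_old). Returns (hit, false-alarm)
    rates for a yes/no criterion placed on the strength axis."""
    hit = 1.0 - norm.cdf(criterion, loc=d_prime, scale=sigma_old)
    fa = 1.0 - norm.cdf(criterion, loc=0.0, scale=1.0)
    return hit, fa

# Hypothetical parameters echoing the shallow/deep encoding contrast.
for label, d in [("shallow encoding", 0.8), ("deep encoding", 1.8)]:
    h, f = uvsd_rates(d_prime=d, sigma_old=1.25, criterion=0.9)
    print(f"{label}: hits = {h:.2f}, false alarms = {f:.2f}")
```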

  19. Tritium permeation model for plasma facing components

    Science.gov (United States)

    Longhurst, G. R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes, such as implantation, recombination, diffusion, trapping and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. Results of calculations for the ITER CDA diverter are also included.

  20. Tritium permeation model for plasma facing components

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes, such as implantation, recombination, diffusion, trapping and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. Results of calculations for the ITER CDA diverter are also included.
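
    The steady-state logic of these two records can be illustrated with one commonly used simplification: a recombination-limited front surface feeding diffusion-limited transport through the wall. The material values below are placeholders rather than the report's data, and the sketch omits the trapping and thermal-gradient effects that the full model includes.

```python
import numpy as np

phi = 1.0e19   # implantation flux [atoms / m^2 s] (assumed)
K_r = 1.0e-27  # surface recombination coefficient [m^4 / s] (assumed)
D = 1.0e-9     # diffusivity [m^2 / s] (assumed)
L = 5.0e-3     # wall thickness [m] (assumed)

# Front-surface concentration from the re-emission balance:
# phi = K_r * c_f**2  =>  c_f = sqrt(phi / K_r)
c_f = np.sqrt(phi / K_r)

# Steady permeation flux through the wall, back surface held near zero.
J_perm = D * c_f / L

print(f"front concentration ~ {c_f:.2e} atoms/m^3")
print(f"permeation flux     ~ {J_perm:.2e} atoms/m^2/s "
      f"({J_perm / phi:.1e} of the implanted flux)")
```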

  1. Modelling of additive manufacturing processes: a review and classification

    Science.gov (United States)

    Stavropoulos, Panagiotis; Foteinopoulos, Panagis

    2018-03-01

    Additive manufacturing (AM) is a very promising technology; however, there are a number of open issues related to the different AM processes. The literature on modelling the existing AM processes is reviewed and classified. A categorization of the different AM processes into process groups, according to the process mechanism, has been conducted, and the most important issues are stated. Suggestions are made as to which approach is more appropriate according to the key performance indicator desired to be modelled, and a discussion is included as to the way that future modelling work can better contribute to improving today's AM process understanding.

  2. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we want to present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows out-of-specification (OOS) events to be anticipated, critical process parameters to be identified, and risk-based decisions to be taken on counteractions that increase process robustness and decrease the likelihood of OOS events.
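
    The Monte Carlo use of an integrated process model can be illustrated with a toy two-step chain in which the output distribution of one unit operation feeds the next and an interaction term links them. Every distribution, the interaction, and the specification limit below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo runs

# Hypothetical chain: fermentation titre feeds a capture step whose
# yield depends on the load it receives (a unit-operation interaction).
titre = rng.normal(5.0, 0.4, N)            # upstream output [g/L]
load = titre * rng.normal(0.95, 0.02, N)   # harvest recovery
capture_yield = 0.9 - 0.02 * (load - 5.0)  # yield drops as load rises
capture_yield += rng.normal(0.0, 0.01, N)  # step-to-step noise
product = load * capture_yield             # proxy for the final CQA

spec_low = 3.8  # hypothetical lower specification limit [g/L]
p_oos = np.mean(product < spec_low)
print(f"mean product = {product.mean():.2f} g/L, P(OOS) = {p_oos:.4f}")
```

    Propagating many such linked distributions through all stacked unit operations is what turns an IPM into an estimate of process capability and OOS probability.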

  3. Analysis of Using Resources in Business Process Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Vasilecas Olegas

    2014-12-01

    Full Text Available One of the key purposes of the Business Process Model and Notation (BPMN) is to support graphical representation of the process model. However, such models lack support for the graphical representation of the resources that process instances use during simulation or execution. The paper analyzes different methods and their extensions for resource modeling. Further, this article presents a selected set of resource properties that are relevant for resource modeling. The paper proposes an approach that explains how to use the selected set of resource properties to extend process modeling using BPMN and simulation tools. They are based on BPMN, where business process instances use resources concurrently.

  4. Extended Hubbard models for ultracold atoms in optical lattices

    International Nuclear Information System (INIS)

    Juergensen, Ole

    2015-01-01

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.

  5. Extended Hubbard models for ultracold atoms in optical lattices

    Energy Technology Data Exchange (ETDEWEB)

    Juergensen, Ole

    2015-06-05

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.

  6. Rainbow tensor model with enhanced symmetry and extreme melonic dominance

    Directory of Open Access Journals (Sweden)

    H. Itoyama

    2017-08-01

    Full Text Available We introduce and briefly analyze the rainbow tensor model where all planar diagrams are melonic. This leads to considerable simplification of the large N limit as compared to that of the matrix model: in particular, what are dressed in this limit are propagators only, which leads to an oversimplified closed set of Schwinger–Dyson equations for multi-point correlators. We briefly touch upon the Ward identities, the substitute of the spectral curve and the AMM/EO topological recursion and their possible connections to Connes–Kreimer theory and forest formulas.

  7. Rainbow tensor model with enhanced symmetry and extreme melonic dominance

    Science.gov (United States)

    Itoyama, H.; Mironov, A.; Morozov, A.

    2017-08-01

    We introduce and briefly analyze the rainbow tensor model where all planar diagrams are melonic. This leads to considerable simplification of the large N limit as compared to that of the matrix model: in particular, what are dressed in this limit are propagators only, which leads to an oversimplified closed set of Schwinger-Dyson equations for multi-point correlators. We briefly touch upon the Ward identities, the substitute of the spectral curve and the AMM/EO topological recursion and their possible connections to Connes-Kreimer theory and forest formulas.

  8. Simplification to abacavir/lamivudine + atazanavir maintains viral suppression and improves bone and renal biomarkers in ASSURE, a randomized, open label, non-inferiority trial.

    Directory of Open Access Journals (Sweden)

    David A Wohl

    Full Text Available Simplification of antiretroviral therapy in patients with suppressed viremia may minimize long-term adverse effects. The study's primary objective was to determine whether abacavir/lamivudine + atazanavir (ABC/3TC+ATV was virologically non-inferior to tenofovir/emtricitabine + atazanavir/ritonavir (TDF/FTC+ATV/r over 24 weeks in a population of virologically suppressed, HIV-1 infected patients.This open-label, multicenter, non-inferiority study enrolled antiretroviral experienced, HIV-infected adults currently receiving a regimen of TDF/FTC+ATV/r for ≥ 6 months with no history of virologic failure and whose HIV-1 RNA had been ≤ 75 copies/mL on 2 consecutive measurements including screening. Patients were randomized 1 ∶ 2 to continue current treatment or simplify to ABC/3TC+ATV.The primary endpoint was the proportion of patients with HIV-RNA<50 copies/mL at Week 24 by the Time to Loss of Virologic Response (TLOVR algorithm. Secondary endpoints included alternative measures of efficacy, adverse events (AEs, and fasting lipids. Exploratory endpoints included inflammatory, coagulation, bone, and renal biomarkers.After 24 weeks, ABC/3TC+ATV (n = 199 was non-inferior to TDF/FTC+ATV/r (n = 97 by both the primary analysis (87% in both groups and all secondary efficacy analyses. Rates of grade 2-4 AEs were similar between the two groups (40% vs 37%, respectively, but an excess of hyperbilirubinemia made the rate of grade 3-4 laboratory abnormalities higher in the TDF/FTC+ATV/r group (30% compared with the ABC/3TC+ATV group (13%. Lipid levels were stable except for HDL cholesterol, which increased significantly in the ABC/3TC+ATV group. Bone and renal biomarkers improved significantly between baseline and Week 24 in patients taking ABC/3TC+ATV, and the difference between groups was significant at Week 24. No significant changes occurred in any inflammatory or coagulation biomarker within or between treatment groups.After 24 weeks, simplification to

  9. Social Models: Blueprints or Processes?

    Science.gov (United States)

    Little, Graham R.

    1981-01-01

    Discusses the nature and implications of two different models for societal planning: (1) the problem-solving process approach based on Karl Popper; and (2) the goal-setting "blueprint" approach based on Karl Marx. (DC)

  10. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

    Full Text Available This paper presents a model for an integrated security system, which can be implemented in any organization. It is based on security-specific standards and taxonomies such as ISO 7498-2 and the Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality, and we also focus on the specific components.

  11. The Structured Process Modeling Method (SPMM) : what is the best way for me to construct a process model?

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Gailly, F.; Grefen, P.W.P.J.; Poels, G.

    2017-01-01

    More and more organizations turn to the construction of process models to support strategic and operational tasks. At the same time, reports indicate quality issues for a considerable part of these models, caused by modeling errors. Therefore, the research described in this paper investigates the

  12. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes. More details...
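
    As a flavour of the 'core set of equations and solution techniques' such a textbook builds on, the canonical hillslope diffusion model can be solved with a few lines of explicit finite differences. This sketch is illustrative and not taken from the book; the scarp geometry and diffusivity are assumed values.

```python
import numpy as np

D = 0.01              # hillslope diffusivity [m^2 / yr] (assumed)
dx = 1.0              # node spacing [m]
dt = 0.2 * dx**2 / D  # time step respecting the explicit stability limit
x = np.arange(0.0, 100.0, dx)
z = np.where(x < 50.0, 10.0, 0.0)  # initial fault scarp [m]

# dz/dt = D * d2z/dx2, explicit scheme, fixed-elevation boundaries.
for _ in range(int(5000.0 / dt)):  # evolve for ~5000 years
    curvature = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
    z[1:-1] += D * dt * curvature

print(f"max slope after diffusion: {np.abs(np.diff(z)).max() / dx:.3f}")
```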

  13. Demonstration of an N7 integrated fab process for metal oxide EUV photoresist

    Science.gov (United States)

    De Simone, Danilo; Mao, Ming; Kocsis, Michael; De Schepper, Peter; Lazzarino, Frederic; Vandenberghe, Geert; Stowers, Jason; Meyers, Stephen; Clark, Benjamin L.; Grenville, Andrew; Luong, Vinh; Yamashita, Fumiko; Parnell, Doni

    2016-03-01

    Inpria has developed a directly patternable metal oxide hard-mask as a robust, high-resolution photoresist for EUV lithography. In this paper we demonstrate the full integration of a baseline Inpria resist into an imec N7 BEOL block mask process module. We examine in detail both the lithography and etch patterning results. By leveraging the high differential etch resistance of metal oxide photoresists, we explore opportunities for process simplification and cost reduction. We review the imaging results from the imec N7 block mask patterns and their process windows, as well as routes to maximize the process latitude, underlayer integration, etch transfer, cross sections, etch equipment integration from a cross-metal contamination standpoint, and the selective resist strip process. Finally, initial results from a higher-sensitivity Inpria resist are also reported. A dose-to-size of 19 mJ/cm² was achieved to print pillars as small as 21 nm.

  14. Multivariate Product-Shot-noise Cox Point Process Models

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge

    We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.
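
    Approximate simulation is indeed straightforward, as the abstract notes. The sketch below simulates a single-species special case (a Thomas process, whose random intensity is a sum of Gaussian kernels around Poisson parents); it does not reproduce the paper's multivariate product construction, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

kappa, mu, omega = 12.0, 20.0, 0.04  # parent rate, mean offspring, kernel sd (assumed)

# Poisson number of parent points, uniform on the unit square.
n_parents = rng.poisson(kappa)
parents = rng.uniform(0.0, 1.0, size=(n_parents, 2))

# Each parent contributes a Poisson number of offspring scattered with a
# bivariate Gaussian kernel, realizing a shot-noise Cox (Thomas) process.
points = []
for c in parents:
    n_off = rng.poisson(mu)
    points.append(c + rng.normal(0.0, omega, size=(n_off, 2)))
points = np.vstack(points) if points else np.empty((0, 2))

# Keep offspring that fall inside the observation window.
inside = np.all((points >= 0.0) & (points <= 1.0), axis=1)
print(f"{n_parents} parents, {inside.sum()} points retained in the unit square")
```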

  15. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  16. Simple Models for Process Control

    Czech Academy of Sciences Publication Activity Database

    Gorez, R.; Klán, Petr

    2011-01-01

    Vol. 22, No. 2 (2011), pp. 58-62. ISSN 0929-2268. Institutional research plan: CEZ:AV0Z10300504. Keywords: process models * PID control * second order dynamics. Subject RIV: JB - Sensors, Measurement, Regulation

  17. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of recursive Gaussian process regression algorithm for significant reduction in computational requirements, thus bringing the Gaussian process-based adaptive controllers closer to their practical applicability. In this work, we design a bi-criterial dual controller based on recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
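
    The computational argument for a recursive formulation can be seen in a toy online GP regressor that maintains the inverse Gram matrix through a rank-one block (Schur complement) update, so each new observation costs O(n^2) instead of a full refit. This is a minimal sketch under an assumed kernel and noise level, not the algorithm of the paper, which additionally bounds memory growth.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D sample arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

class RecursiveGP:
    """Online GP regression with an incrementally updated inverse Gram
    matrix (toy version; assumed kernel and noise, unbounded memory)."""

    def __init__(self, noise=1e-2):
        self.noise, self.Kinv = noise, None
        self.X, self.y = np.empty(0), np.empty(0)

    def add(self, x, y):
        k = rbf(self.X, np.array([x]))[:, 0] if self.X.size else np.empty(0)
        c = 1.0 + self.noise  # rbf(x, x) = 1, plus observation noise
        if self.Kinv is None:
            self.Kinv = np.array([[1.0 / c]])
        else:
            q = self.Kinv @ k
            s = c - k @ q  # Schur complement of the grown Gram matrix
            self.Kinv = np.block([[self.Kinv + np.outer(q, q) / s, -q[:, None] / s],
                                  [-q[None, :] / s, np.array([[1.0 / s]])]])
        self.X, self.y = np.append(self.X, x), np.append(self.y, y)

    def predict(self, x):
        """Posterior mean at the query points x."""
        return rbf(self.X, np.atleast_1d(x)).T @ self.Kinv @ self.y

gp = RecursiveGP()
for x in np.linspace(0.0, 4.0, 30):
    gp.add(x, np.sin(x) + 0.01 * np.random.randn())
print(gp.predict(np.array([1.0, 2.5])))  # close to sin(1.0), sin(2.5)
```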

  18. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  19. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  20. Welding process modelling and control

    Science.gov (United States)

    Romine, Peter L.; Adenwala, Jinen A.

    1993-01-01

    The research and analysis performed, software developed, and hardware/software recommendations made during 1992 in the development of the PC-based data acquisition system for support of Welding Process Modeling and Control are reported. A need was identified by the Metals Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument, strictly for data acquisition and analysis. Although the WMS supports many of the functions associated with process control, this system is not intended to be used for welding process control.