WorldWideScience

Sample records for mass computation model

  1. A computational model to generate simulated three-dimensional breast masses

    Energy Technology Data Exchange (ETDEWEB)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N., E-mail: wernick@iit.edu [Medical Imaging Research Center, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois 60616 (United States); Schmidt, Robert A. [Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Nishikawa, Robert M. [Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213 (United States)

    2015-02-15

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer-aided diagnosis algorithms, and
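
    The record above describes the central tumor mass as a stochastic Gaussian random sphere. As a rough illustration of that idea only (not the authors' code; the degree-weighted coefficient decay and all parameters are invented), one can perturb a sphere's log-radius with a random spherical-harmonic series:

```python
import numpy as np
from scipy.special import sph_harm

def gaussian_random_sphere(n_theta=64, n_phi=128, l_max=8, sigma=0.2, seed=0):
    """Toy 'Gaussian random sphere': perturb a unit sphere's radius with a
    random spherical-harmonic series whose power falls off with degree l."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, np.pi, n_theta)      # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)      # azimuth
    PHI, THETA = np.meshgrid(phi, theta)        # grids of shape (n_theta, n_phi)
    s = np.zeros_like(THETA)
    for l in range(1, l_max + 1):
        for m in range(-l, l + 1):
            c = rng.normal(scale=sigma / (1 + l) ** 2)   # assumed decay law
            # scipy convention: sph_harm(m, l, azimuth, polar)
            s += c * np.real(sph_harm(m, l, PHI, THETA))
    return THETA, PHI, np.exp(s)                # log-normal radius statistics

THETA, PHI, r = gaussian_random_sphere()
print(f"radius range: {r.min():.2f} to {r.max():.2f}")
```

    The fractal spicule-branching stage and the embedding into real mammograms are separate steps not sketched here.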

  2. A computational model to generate simulated three-dimensional breast masses

    International Nuclear Information System (INIS)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N.; Schmidt, Robert A.; Nishikawa, Robert M.

    2015-01-01

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer-aided diagnosis algorithms, and

  3. A Generative Computer Model for Preliminary Design of Mass Housing

    Directory of Open Access Journals (Sweden)

    Ahmet Emre DİNÇER

    2014-05-01

    Full Text Available Today, we live in what we call the “Information Age”, an age in which information technologies are constantly being renewed and developed. Out of this has emerged a new approach called “Computational Design” or “Digital Design”. In addition to significantly influencing all fields of engineering, this approach has come to play a similar role in all stages of the design process in the architectural field. Besides providing solutions for analytical problems in design, such as cost estimation, circulation-system evaluation and environmental effects, which resemble engineering problems, the approach is used in the evaluation, representation and presentation of traditionally designed buildings. With developments in software and hardware technology, it has evolved to encompass the design of architectural products and their production with digital tools from the preliminary design stages onward. This paper presents a digital model which may be used in the preliminary stage of mass housing design with Cellular Automata, one of the generative design systems based on computational design approaches. This computational model, developed with scripts in 3ds Max, has been applied to the site plan design of mass housing, floor plan organizations shaped by user preferences, and facade designs. With the developed computer model, many alternative housing types can be produced rapidly. The interactive design tool of this computational model allows the user to transfer dimensional and functional housing preferences by means of the interface prepared for the model. The results of the study are discussed in the light of innovative architectural approaches.
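
    As a toy illustration of how a cellular automaton can generate site-plan alternatives (a minimal sketch; the growth rule, probabilities and grid size are invented, not those of the paper's 3ds Max scripts):

```python
import numpy as np

def grow_site_plan(n=20, steps=10, seed=1):
    """Toy cellular automaton for a mass-housing site plan: a cell becomes a
    housing unit if it touches 1-2 existing units (growth without crowding)."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=int)
    grid[n // 2, n // 2] = 1                      # seed unit at the site centre
    for _ in range(steps):
        # count the 4-neighbours that are already built (periodic edges)
        nb = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
              np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
        candidates = (grid == 0) & (nb >= 1) & (nb <= 2)
        # build on a random subset of candidate cells each generation
        build = candidates & (rng.random(grid.shape) < 0.3)
        grid = grid | build.astype(int)
    return grid

print(grow_site_plan().sum(), "units placed")
```

    Different seeds yield different but rule-consistent layouts, which is the sense in which such a model "rapidly produces alternatives".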

  4. Structural characterisation of medically relevant protein assemblies by integrating mass spectrometry with computational modelling.

    Science.gov (United States)

    Politis, Argyris; Schmidt, Carla

    2018-03-20

    Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enable structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  5. Computing Mass Properties From AutoCAD

    Science.gov (United States)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
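
    ACTOMP itself reads element data from AutoCAD drawings; the bookkeeping it automates looks roughly like the sketch below. The element list is invented, and elements are treated as point masses for the inertia tensor (a real tool would add each element's own inertia about its centroid):

```python
import numpy as np

# Each structural element reduced to (mass [kg], centroid [m]).
elements = [
    (12.0, np.array([0.0, 0.0, 0.0])),
    ( 7.5, np.array([1.2, 0.0, 0.3])),
    ( 3.2, np.array([0.6, 0.8, 0.0])),
]

m_tot = sum(m for m, _ in elements)
com = sum(m * c for m, c in elements) / m_tot      # composite centre of mass

I = np.zeros((3, 3))
for m, c in elements:
    d = c - com
    # parallel-axis contribution of a point mass about the composite COM
    I += m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))

print(f"mass = {m_tot:.2f} kg, com = {com}, Ixx = {I[0, 0]:.3f} kg m^2")
```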

  6. Computational Modelling of the Structural Integrity following Mass-Loss in Polymeric Charred Cellular Solids

    OpenAIRE

    J. P. M. Whitty; J. Francis; J. Howe; B. Henderson

    2014-01-01

    A novel computational technique is presented for embedding mass-loss due to burning into the ANSYS finite element modelling code. The approaches employ a range of computational modelling methods in order to provide more complete theoretical treatment of thermoelasticity absent from the literature for over six decades. Techniques are employed to evaluate structural integrity (namely, elastic moduli, Poisson’s ratios, and compressive brittle strength) of honeycomb systems known to approximate t...

  7. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  8. PORFLO - a continuum model for fluid flow, heat transfer, and mass transport in porous media. Model theory, numerical methods, and computational tests

    International Nuclear Information System (INIS)

    Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.

    1985-09-01

    Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs

  9. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    Science.gov (United States)

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future
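
    A minimal sketch of the kind of sensitivity analysis described, assuming an invented four-segment body and a uniform ±20% volume variation (the study varies many more parameters and uses full 3D segment geometry):

```python
import numpy as np

# Hypothetical body segments: (name, volume [m^3], density [kg/m^3],
# x of centroid [m], measured from the hip, +x forward). All values invented.
segments = [("torso", 4.0, 1000, 0.8), ("tail", 1.5, 1000, -1.6),
            ("head+neck", 0.6, 1000, 2.4), ("lungs/air sacs", 0.8, 0, 0.9)]

rng = np.random.default_rng(42)
masses, coms = [], []
for _ in range(10_000):
    scale = rng.uniform(0.8, 1.2, size=len(segments))   # +/-20% per segment
    m = np.array([v * rho for _, v, rho, _ in segments]) * scale
    x = np.array([x for *_, x in segments])
    masses.append(m.sum())
    coms.append((m * x).sum() / m.sum())                # centre of mass (x only)

print(f"mass range: {min(masses):.0f} to {max(masses):.0f} kg")
print(f"COM (x, fwd of hip): {min(coms):.2f} to {max(coms):.2f} m")
```

    The point of such a loop is exactly the paper's: the absolute mass range is wide, but qualitative quantities like the sign of the COM offset can stay stable across the whole parameter sweep.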

  10. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    Directory of Open Access Journals (Sweden)

    Karl T Bates

    Full Text Available Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Strutiomimum sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments or inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize

  11. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents.

    Science.gov (United States)

    Zhang, Jing; Lo, Joseph Y; Kuzmiak, Cherie M; Ghate, Sujata V; Yoon, Sora C; Mazurowski, Maciej A

    2014-09-01

    Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.
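
    A sketch of the third (error-making model) step under stated assumptions: synthetic random features stand in for the 43 computer-vision features, and a logistic-regression classifier stands in for whichever classifier family the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X: one row of image features per expert-identified mass (5 invented features
# here, 43 in the paper); y = 1 if this trainee missed the mass in past
# readings. All data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=200) > 0).astype(int)

train, test = slice(0, 150), slice(150, 200)
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
p_miss = model.predict_proba(X[test])[:, 1]    # predicted likelihood of a miss
print("AUC:", round(roc_auc_score(y[test], p_miss), 3))
```

    Fitting one such model per trainee on his or her own reading history is what lets the system rank unseen cases by predicted difficulty for that individual.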

  12. Introduction to computational mass transfer with applications to chemical engineering

    CERN Document Server

    Yu, Kuo-Tsung

    2017-01-01

    This book offers an easy-to-understand introduction to the computational mass transfer (CMT) method. On the basis of the contents of the first edition, this new edition is characterized by the following additional materials. It describes the successful application of this method to the simulation of the mass transfer process in a fluidized bed, as well as recent investigations and computing methods for predictions of the multi-component mass transfer process. It also demonstrates the general issues concerning computational methods for simulating the mass transfer of the rising bubble process. This new edition has been reorganized by moving the preparatory materials for Computational Fluid Dynamics (CFD) and Computational Heat Transfer into appendices, adding new chapters, and including three new appendices on, respectively, the generalized representation of the two-equation model for the CMT, the derivation of the equilibrium distribution function in the lattice-Boltzmann method, and the derivation of the Navier-Stokes equations.

  13. Introduction to computational mass transfer with applications to chemical engineering

    CERN Document Server

    Yu, Kuo-Tsung

    2014-01-01

    This book presents a new computational methodology called Computational Mass Transfer (CMT). It offers an approach to rigorously simulating the mass, heat and momentum transfer under turbulent flow conditions with the help of two newly published models, namely the c′2–εc′ model and the Reynolds mass flux model, especially with regard to predictions of concentration, temperature and velocity distributions in chemical and related processes. The book will also allow readers to understand the interfacial phenomena accompanying the mass transfer process and methods for modeling the interfacial effect, such as the influences of Marangoni convection and Rayleigh convection. The CMT methodology is demonstrated by means of its applications to typical separation and chemical reaction processes and equipment, including distillation, absorption, adsorption and chemical reactors. Professor Kuo-Tsung Yu is a Member of the Chinese Academy of Sciences. Dr. Xigang Yuan is a Professor at the School of Chemical Engineering...

  14. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.

  15. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    International Nuclear Information System (INIS)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.

  16. Vehicle - Bridge interaction, comparison of two computing models

    Science.gov (United States)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A planar multi-body computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the other as a lumped-mass model with one degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.
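
    A minimal sketch of the lumped-mass (single-degree-of-freedom) variant, assuming invented bridge and load parameters and reducing the moving vehicle to a constant force weighted by the first bending mode shape:

```python
import numpy as np

# SDOF bridge model: the first bending mode carries a constant force P whose
# position x = v*t weights the load by sin(pi*x/L). Parameters are illustrative.
L, m, f1, zeta, P, v = 30.0, 2.0e5, 4.0, 0.02, 1.0e5, 20.0
w = 2 * np.pi * f1                 # first natural circular frequency [rad/s]
k, c = m * w**2, 2 * zeta * m * w  # modal stiffness and damping

dt, T = 1e-4, L / v                # integrate while the load is on the span
u = du = u_max = 0.0
for t in np.arange(0.0, T, dt):
    F = P * np.sin(np.pi * v * t / L)        # modal load at current position
    ddu = (F - c * du - k * u) / m
    du += ddu * dt                           # semi-implicit Euler step
    u += du * dt
    u_max = max(u_max, abs(u))
print(f"max mid-span modal deflection ~ {u_max * 1e3:.2f} mm")
```

    Repeating the loop for several velocities v reproduces the kind of velocity sweep compared against the distributed-mass beam solution in the paper.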

  17. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    Science.gov (United States)

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  18. Computer programs for the numerical modelling of water flow in rock masses

    International Nuclear Information System (INIS)

    Croney, P.; Richards, L.R.

    1985-08-01

    Water flow in rock joints provides a very important possible route for the migration of radionuclides from radioactive waste within a repository back to the biosphere. Two computer programs, DAPHNE and FPM, have been developed to model two-dimensional fluid flow in jointed rock masses. They have been developed to run on microcomputer systems suitable for field locations. The fluid flows in a number of jointed rock systems have been examined and certain controlling functions identified. A methodology has been developed for assessing the anisotropic permeability of jointed rock. A number of examples of unconfined flow into surface and underground openings have been analysed, and groundwater lowering, pore water pressures and flow quantities predicted. (author)

  19. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    Science.gov (United States)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.
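
    The projective idea can be caricatured in 2D: project joint traces onto a test line and measure the covered fraction. The toy below uses invented joint statistics and stands in for the full polyhedral-DFN computation, with different random seeds playing the role of the dip-direction/dip-angle search:

```python
import numpy as np

def connectivity_rate(L=100.0, n_joints=60, mean_len=3.0, seed=7):
    """2D toy analogue of the projective method: place joint traces on a test
    line of length L and return the covered (connected) fraction."""
    rng = np.random.default_rng(seed)
    starts = rng.uniform(0, L, n_joints)
    lens = rng.exponential(mean_len, n_joints)
    # union length of possibly overlapping intervals via a sorted sweep
    intervals = sorted(zip(starts, np.minimum(starts + lens, L)))
    covered, end = 0.0, 0.0
    for a, b in intervals:
        if b <= end:
            continue
        covered += b - max(a, end)
        end = b
    return covered / L

rates = [connectivity_rate(seed=s) for s in range(20)]
print(f"min={min(rates):.2f} avg={np.mean(rates):.2f} max={max(rates):.2f}")
```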

  20. Computation of the velocity field and mass balance in the finite-element modeling of groundwater flow

    International Nuclear Information System (INIS)

    Yeh, G.T.

    1980-01-01

    Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8 to 2.2% by one numerical scheme and from 29.7 to -3.6% by another for a transient problem
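
    A 1D sketch of the proposed remedy, assuming linear elements: rather than reporting the discontinuous element-wise derivative, solve a small mass-matrix system that projects -K dp/dx back onto the same nodal basis, yielding continuous nodal velocities:

```python
import numpy as np

n, K = 11, 1.0
x = np.linspace(0.0, 1.0, n)
p = 1.0 - x**2                     # a "computed" pressure field

h = np.diff(x)
q = -K * np.diff(p) / h            # element-wise (discontinuous) Darcy velocity

M = np.zeros((n, n))               # consistent mass matrix, linear elements
b = np.zeros(n)
for e in range(n - 1):
    Me = h[e] / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    be = h[e] / 2.0 * q[e] * np.ones(2)   # load: element velocity as source
    M[e:e + 2, e:e + 2] += Me
    b[e:e + 2] += be

v = np.linalg.solve(M, b)          # continuous nodal velocity field
print(np.round(v, 3))              # exact -K dp/dx = 2x; nodal values are close
```

    The same Galerkin machinery used for the pressure equation is reused here with the pressure derivative as the load function, which is the essence of the paper's proposal.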

  1. Standard Model mass spectrum in inflationary universe

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2017-04-11

    We work out the Standard Model (SM) mass spectrum during inflation with quantum corrections, and explore its observable consequences in the squeezed limit of non-Gaussianity. Both non-Higgs and Higgs inflation models are studied in detail. We also illustrate how some inflationary loop diagrams can be computed neatly by Wick-rotating the inflation background to Euclidean signature and by dimensional regularization.

  2. A fast mass spring model solver for high-resolution elastic objects

    Science.gov (United States)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, using the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has a great potential for applications in computer animation.
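
    The Cholesky-to-CG swap in the global step can be shown on a toy system. In fast mass-spring solvers the global step solves an SPD system of the form (M/h² + L)x = b each frame; the 1D spring chain and all constants below are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-8, max_iter=2000):
    """Plain CG for an SPD matrix A; replaces a stored Cholesky factorization."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Global-step matrix for a 1D chain of n masses and nearest-neighbour springs.
n, h, m, k = 1000, 1 / 60, 1.0, 500.0
A = diags([m / h**2 + 2 * k], [0], (n, n)) + diags([-k, -k], [-1, 1], (n, n))
b = np.random.default_rng(3).normal(size=n)    # stand-in RHS (inertia + forces)
x = conjugate_gradient(A.tocsr(), b)
print("residual:", np.linalg.norm(A @ x - b))
```

    CG needs only matrix-vector products, so no factorization has to be stored or refactored, which is what makes it attractive for high-resolution meshes and for GPU parallelization.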

  3. Many Masses on One Stroke: Economic Computation of Quark Propagators

    Science.gov (United States)

    Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus

    The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ-D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive to BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.
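
    The "many masses on one stroke" trick rests on shift invariance of Krylov subspaces: since M(κ) = I/κ − D, the space K_m(D, b) serves every κ at once. A dense toy sketch of that idea (FOM rather than the paper's QMR, and a small random matrix standing in for the huge sparse Wilson hopping matrix D):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 200, 60
D = rng.normal(scale=1 / np.sqrt(n), size=(n, n))   # stand-in hopping matrix
b = rng.normal(size=n)                               # source vector

# One Arnoldi factorization of D (not of M!), built once and reused.
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for j in range(m):
    w = D @ V[:, j]
    for i in range(j + 1):                           # modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

for kappa in (0.10, 0.12, 0.13):                     # one cheap solve per mass
    y = np.linalg.solve(np.eye(m) / kappa - H[:m, :m], beta * np.eye(m)[:, 0])
    x = V[:, :m] @ y                                 # approx of M(kappa)^-1 b
    res = np.linalg.norm(b - (x / kappa - D @ x))
    print(f"kappa={kappa:.2f}  residual={res:.2e}")
```

    The expensive part (building the basis with products by D) is paid once; each additional mass costs only a small m-by-m solve plus the storage the abstract mentions.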

  4. Turbulence modeling for mass transfer enhancement by separation and reattachment with two-equation eddy-viscosity models

    International Nuclear Information System (INIS)

    Xiong Jinbiao; Koshizuka, Seiichi; Sakai, Mikio

    2011-01-01

    Highlights: → We selected and evaluated five two-equation eddy-viscosity turbulence models for modeling separated and reattaching flow. → The behavior of the models in a simple flow is not consistent with that in separated and reattaching flow. → The Abe-Kondoh-Nagano model is the best among the selected models. → Applying the stress limiter and the Kato-Launder modification in the Abe-Kondoh-Nagano model helps to improve the prediction of the peak mass transfer coefficient in the orifice flow. → The value of the turbulent Schmidt number is investigated. - Abstract: The prediction of the mass transfer rate is one of the key elements for estimating the flow accelerated corrosion (FAC) rate. Three low Reynolds number (LRN) k-ε models (Lam-Bremhorst (LB), Abe-Kondoh-Nagano (AKN) and Hwang-Lin (HL)), one LRN k-ω model (Wilcox, WX) and the k-ω SST model are tested for the computation of high Schmidt number mass transfer, especially in the flow through an orifice. The models are tested in the computation of three types of flow: (1) fully developed pipe flow, (2) flow over a backward-facing step, and (3) flow through an orifice. The HL model shows good performance in predicting mass transfer in fully developed pipe flow but fails to give reliable predictions in the flow through an orifice. The WX model and the k-ω SST model underpredict the mass transfer rate in flow types 1 and 3. The LB model underestimates the mass transfer in flow type 1 and shows abnormal behavior at the reattachment point in type 3. Evaluating all the models across all the computed cases, the AKN model is the best; however, its predictions are still not satisfactory. In the flow over a backward-facing step, the k-ω SST model shows superior performance. This is interpreted as an indication that the combination of the k-ε model and the stress limiter can improve the model behavior in the recirculation bubble. Both the

  5. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on the pressure and temperature in the reactor and the composition and flow rate of the influent.

  6. A review of Higgs mass calculations in supersymmetric models

    DEFF Research Database (Denmark)

    Draper, P.; Rzehak, H.

    2016-01-01

    The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...

  7. Anatomy of Higgs mass in supersymmetric inverse seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Eung Jin, E-mail: ejchun@kias.re.kr [Korea Institute for Advanced Study, Seoul 130-722 (Korea, Republic of); Mummidi, V. Suryanarayana, E-mail: soori9@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India); Vempati, Sudhir K., E-mail: vempati@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-09-07

    We compute the one loop corrections to the CP-even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of O(30) GeV if the left-handed sneutrino soft mass is comparable or larger than the right-handed neutrino mass. In the case where right-handed neutrino masses are significantly larger than the supersymmetry breaking scale, the corrections can at most account for an upward shift of 3 GeV. For very heavy multi-TeV sneutrinos, the corrections replicate the stop corrections at 1-loop. We further show that general gauge mediation with inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV scale stops.

  8. A mass-conserving multiphase lattice Boltzmann model for simulation of multiphase flows

    Science.gov (United States)

    Niu, Xiao-Dong; Li, You; Ma, Yi-Ren; Chen, Mu-Feng; Li, Xiang; Li, Qiao-Zhong

    2018-01-01

    In this study, a mass-conserving multiphase lattice Boltzmann (LB) model is proposed for simulating the multiphase flows. The model developed in the present study improves on the model of Shao et al. ["Free-energy-based lattice Boltzmann model for simulation of multiphase flows with density contrast," Phys. Rev. E 89, 033309 (2014)] by introducing a mass correction term in the lattice Boltzmann model for the interface. The model of Shao et al. (the improved Zheng-Shu-Chew (Z-S-C) model) correctly considers the effect of the local density variation in the momentum equation and has an obvious improvement over the Zheng-Shu-Chew (Z-S-C) model ["A lattice Boltzmann model for multiphase flows with large density ratio," J. Comput. Phys. 218(1), 353-371 (2006)] in terms of solution accuracy. However, due to the physical diffusion and numerical dissipation, the total mass of each fluid phase cannot be conserved correctly. To solve this problem, a mass correction term, which is similar to the one proposed by Wang et al. ["A mass-conserved diffuse interface method and its application for incompressible multiphase flows with large density ratio," J. Comput. Phys. 290, 336-351 (2015)], is introduced into the lattice Boltzmann equation for the interface to compensate for the mass losses or offset the mass increase. Meanwhile, to implement the wetting boundary condition and the contact angle, a geometric formulation and a local force are incorporated into the present mass-conserving LB model. The proposed model is validated by verifying the Laplace law, simulating both one and two aligned droplets splashing onto a liquid film, droplets standing on an ideal wall, droplets with different wettability splashing onto smooth wax, and bubbles rising under buoyancy. Numerical results show that the proposed model can correctly simulate multiphase flows. It was found that the mass is well-conserved in all cases considered by the model developed in the present study. The developed
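
    The mass-correction idea can be sketched independently of the LB machinery: after each (slightly non-conservative) interface update, the global deficit is paid back inside the diffuse interface, where the weight φ(1-φ) is nonzero. The update rule, weights and numbers below are illustrative, not the paper's model:

```python
import numpy as np

n = 128
x = np.linspace(0, 1, n)
phi = 0.5 * (1 + np.tanh((0.2 - np.abs(x - 0.5)) / 0.02))   # 1D droplet profile
m0 = phi.sum()                                              # reference mass

for step in range(500):
    # stand-in interface update: smoothing plus a tiny dissipative bias,
    # mimicking the diffusion/dissipation that slowly leaks mass
    phi = 0.25 * np.roll(phi, 1) + 0.5 * phi + 0.25 * np.roll(phi, -1)
    phi *= 0.9995
    w = phi * (1 - phi)                      # weight: nonzero only at interface
    if w.sum() > 0:
        phi += (m0 - phi.sum()) * w / w.sum()   # mass correction step

print(f"relative mass drift after correction: {abs(phi.sum() - m0) / m0:.2e}")
```

    Without the correction line the field loses mass every step; with it, the total is restored each step and only floating-point round-off remains.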

  9. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Louwrens Butler

    2018-05-01

    Full Text Available Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system was developed. The purpose of this advanced manufacturing system was to mass-produce a customisable product range at a competitive cost. The aim of this study was to determine whether this new algorithm could produce better performance than traditional optimisation methods. The algorithm produced a lower-cost plan than a simulated annealing algorithm, and had a lower impact on the workforce.

  10. A New Methodology for Fuel Mass Computation of an operating Aircraft

    Directory of Open Access Journals (Sweden)

    M Souli

    2016-03-01

    Full Text Available The paper presents a new computational methodology for the accurate computation of the fuel mass inside an aircraft wing during flight. The computation is carried out using hydrodynamic equations, classically known as the Navier-Stokes equations in the CFD community. For this purpose, a computational software package was developed; it computes the fuel mass inside the tank based on data from pressure gauges inserted in the fuel tank. In practice, and for safety reasons, an optical fiber sensor is used for fluid level detection. The optical system consists of an optically controlled acoustic transceiver system which measures the fuel level inside each compartment of the fuel tank. The system computes the fuel volume inside the tank and needs the density to compute the total fuel mass; with the optical sensor technique, a density measurement inside the tank is therefore required. The method developed in the paper requires pressure measurements in each tank compartment; the density is then computed based on the pressure measurements and hydrostatic assumptions. The methodology is tested using a fuel tank provided by Airbus for a time-history refueling process.
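
    A minimal sketch of the density-from-pressure step, assuming two gauges per compartment at a known vertical separation and externally supplied compartment volumes (all numbers invented; the paper's software additionally solves the full flow equations):

```python
# Hydrostatic density estimate from two pressure gauges a known height apart,
# then total mass = sum(rho * V) over compartments.
G = 9.80665                      # standard gravity [m/s^2]

def fuel_density(p_lower, p_upper, dz):
    """rho = dP / (g * dz); valid while both gauges are submerged."""
    return (p_lower - p_upper) / (G * dz)

# (p_lower [Pa], p_upper [Pa], gauge separation [m], measured fuel volume [m^3])
compartments = [
    (103_900.0, 101_300.0, 0.35, 1.8),
    (103_100.0, 101_300.0, 0.24, 1.1),
]

total = 0.0
for p_lo, p_up, dz, vol in compartments:
    rho = fuel_density(p_lo, p_up, dz)
    total += rho * vol
    print(f"rho = {rho:.1f} kg/m^3, compartment mass = {rho * vol:.1f} kg")
print(f"total fuel mass: {total:.1f} kg")
```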

  11. Mass models for disk and halo components in spiral galaxies

    International Nuclear Information System (INIS)

    Athanassoula, E.; Bosma, A.

    1987-01-01

    The mass distribution in spiral galaxies is investigated by means of numerical simulations, summarizing the results reported by Athanassoula et al. (1986). Details of the modeling technique employed are given, including bulge-disk decomposition; computation of bulge and disk rotation curves (assuming constant mass/light ratios for each); and determination (for spherical symmetry) of the total halo mass out to the optical radius, the concentration indices, the halo-density power law, the core radius, the central density, and the velocity dispersion. Also discussed are the procedures for incorporating galactic gas and checking the spiral structure extent. It is found that structural constraints limit disk mass/light ratios to a range of 0.3 dex, and that the most likely models are maximum-disk models with m = 1 disturbances inhibited. 19 references

  12. Heat and mass transfer during the cryopreservation of a bioartificial liver device: a computational model.

    Science.gov (United States)

    Balasubramanian, Saravana K; Coger, Robin N

    2005-01-01

    Bioartificial liver devices (BALs) have proven to be an effective bridge to transplantation for cases of acute liver failure. Enabling the long-term storage of these devices using a method such as cryopreservation will ensure their easy off-the-shelf availability. To date, cryopreservation of liver cells has been attempted for both single cells and sandwich cultures. This study presents the potential of using computational modeling to help develop a cryopreservation protocol for storing the three-dimensional BAL: Hepatassist. The focus is upon determining the thermal and concentration profiles as the BAL is cooled from 37 degrees C to -100 degrees C, in two steps: a cryoprotectant loading step and a phase change step. The results indicate that, for the loading step, mass transfer controls the duration of the protocol, whereas for the phase change step, when mass transfer is assumed negligible, the latent heat released during freezing is the controlling factor. The cryoprotocol that is ultimately proposed considers time, cooling rate, and the temperature gradients that the cellular space is exposed to during cooling. To our knowledge, this study is the first reported effort toward designing an effective protocol for the cryopreservation of a three-dimensional BAL device.

  13. Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.

    1980-01-01

    This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs

  14. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
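
    A single-source-of-truth MEL can be caricatured in a few lines: masses live in one table, and the margin is always derived, never stored, so separate views cannot drift apart. The field names, contingency policy and numbers below are assumptions for illustration, not the EHM project's schema:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    cbe_kg: float          # current best estimate mass
    contingency: float     # fraction; e.g. 0.30 means MGA = CBE * 1.30

    @property
    def mga_kg(self) -> float:
        """Maximum expected value: derived, never stored separately."""
        return self.cbe_kg * (1 + self.contingency)

# The MEL is the single source of truth; every report reads from it.
mel = [Item("structure", 120.0, 0.30), Item("avionics", 45.0, 0.15),
       Item("propulsion", 80.0, 0.25)]
allocation_kg = 350.0

mga_total = sum(i.mga_kg for i in mel)
margin = (allocation_kg - mga_total) / allocation_kg
print(f"MGA total = {mga_total:.1f} kg, margin = {margin:.1%}")
```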

  15. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via the 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)

  16. Three-dimensional two-phase mass transport model for direct methanol fuel cells

    International Nuclear Information System (INIS)

    Yang, W.W.; Zhao, T.S.; Xu, C.

    2007-01-01

    A three-dimensional (3D) steady-state model for liquid-feed direct methanol fuel cells (DMFC) is presented in this paper. This 3D mass transport model is formed by integrating five sub-models: a modified drift-flux model for the anode flow field, a two-phase mass transport model for the porous anode, a single-phase model for the polymer electrolyte membrane, a two-phase mass transport model for the porous cathode, and a homogeneous mist-flow model for the cathode flow field. The two-phase mass transport models take into account the effect of non-equilibrium evaporation/condensation at the gas-liquid interface. A 3D computer code is then developed based on the integrated model. After being validated against the experimental data reported in the literature, the code was used to investigate numerically the transport behaviors at the DMFC anode and their effects on cell performance

  17. Material constitutive model for jointed rock mass behavior

    International Nuclear Information System (INIS)

    Thomas, R.K.

    1980-11-01

    A material constitutive model is presented for jointed rock masses which exhibit preferred planes of weakness. This model is intended for use in finite element computations. The immediate application is the thermomechanical modelling of a nuclear waste repository in hard rock, but the model seems appropriate for a variety of other static and dynamic geotechnical problems as well. Starting with the finite element representations of a two-dimensional elastic body, joint planes are introduced in an explicit manner by direct modification of the material stiffness matrix. A novel feature of this approach is that joint set orientations, lengths and spacings are readily assigned through the sampling of a population distribution statistically determined from field measurement data. The result is that the fracture characteristics of the formations have the same statistical distribution in the model as is observed in the field. As a demonstration of the jointed rock mass model, numerical results are presented for the example problem of stress concentration at an underground opening

  18. Generalized added masses computation for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1983-01-01

    The aim of this paper is to describe a method to simulate the dynamic effect of a fluid between two structures by means of an added mass and an added stiffness. The method is based on a potential theory which assumes the fluid is inviscid and incompressible (the case of compressibility is discussed); a solution of the corresponding field equation is given as a superposition of elementary solutions (i.e. applicable to elementary boundary conditions). Consequently the pressure and displacements of the fluid on the boundary are given as functions of the series coefficients; the ''work lost'' (i.e. the work done by the pressures on the difference between actual and estimated displacements) is minimized, and in this way the expansion coefficients are related to the displacements on the boundaries. Virtual work procedures are then used to compute added masses. The particular case of a free surface (with gravity effects) is discussed, and it is shown how the effect can be modelled by means of an added stiffness term. Some examples relative to vibrations in reservoirs are given and discussed. (orig.)

  19. Evolution, Nucleosynthesis, and Yields of AGB Stars at Different Metallicities. III. Intermediate-mass Models, Revised Low-mass Models, and the ph-FRUITY Interface

    Science.gov (United States)

    Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D.

    2015-08-01

    We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (-2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the 22Ne(α,n)25Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).

  20. RSMASS-D nuclear thermal propulsion and bimodal system mass models

    Science.gov (United States)

    King, Donald B.; Marshall, Albert C.

    1997-01-01

    Two relatively simple models have been developed to estimate reactor, radiation shield, and balance of system masses for a particle bed reactor (PBR) nuclear thermal propulsion concept and a cermet-core power and propulsion (bimodal) concept. The approach was based on the methodology developed for the RSMASS-D models. The RSMASS-D approach for the reactor and shield sub-systems uses a combination of simple equations derived from reactor physics and other fundamental considerations along with tabulations of data from more detailed neutron and gamma transport theory computations. Relatively simple models are used to estimate the masses of other subsystem components of the nuclear propulsion and bimodal systems. Other subsystem components include instrumentation and control (I&C), boom, safety systems, radiator, thermoelectrics, heat pipes, and nozzle. The user of these models can vary basic design parameters within an allowed range to achieve a parameter choice which yields a minimum mass for the operational conditions of interest. Estimated system masses are presented for a range of reactor power levels for the PBR propulsion concept and for both electrical power and propulsion for the cermet-core bimodal concept. The estimated reactor system masses agree with mass predictions from detailed calculations to within xx percent for both models.
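
    The minimum-mass search described above is, in essence, a constrained sweep over a handful of design parameters. A minimal sketch follows; the mass function, parameter names, and ranges are purely illustrative stand-ins and do not come from RSMASS-D.

        import itertools
        import numpy as np

        def system_mass(power_mw, fuel_fraction, shield_m):
            """Illustrative stand-in for the RSMASS-D correlations: total of
            reactor, shield, and balance-of-system masses (kg)."""
            reactor = 150.0 * power_mw / fuel_fraction
            shield = 4000.0 * shield_m * np.sqrt(power_mw)
            balance = 90.0 * power_mw ** 0.8
            return reactor + shield + balance

        # Sweep each design parameter over its allowed range and keep the
        # minimum-mass combination, as the models let the user do.
        grid = itertools.product(np.linspace(100.0, 1000.0, 10),   # power (MW)
                                 np.linspace(0.2, 0.6, 9),         # fuel fraction
                                 np.linspace(0.05, 0.5, 10))       # shield (m)
        best = min(grid, key=lambda p: system_mass(*p))
        print("design point:", best, "-> mass (kg):", round(system_mass(*best)))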

  1. Heavy quark effective theory computation of the mass of the bottom quark

    International Nuclear Information System (INIS)

    Della Morte, M.; Papinutto, M.

    2006-10-01

    We present a fully non-perturbative computation of the mass of the b-quark in the quenched approximation. Our strategy starts from the matching of HQET to QCD in a finite volume and finally relates the quark mass to the spin-averaged mass of the B_s meson in HQET. All steps include the terms of order Λ²/m_b. We discuss the computation and renormalization of correlation functions at order 1/m_b. With the strange quark mass fixed from the Kaon mass and the QCD scale set through r_0 = 0.5 fm, we obtain a renormalization group invariant mass M_b = 6.758(86) GeV, or m̄_b(m̄_b) = 4.347(48) GeV in the MS-bar scheme. The uncertainty in the computed Λ²/m_b terms contributes little to the total error, and Λ³/m_b² terms are negligible. The strategy is promising for full QCD as well as for other B-physics observables. (orig.)

  2. Computing Models of M-type Host Stars and their Panchromatic Spectral Output

    Science.gov (United States)

    Linsky, Jeffrey; Tilipman, Dennis; France, Kevin

    2018-06-01

    We have begun a program of computing state-of-the-art model atmospheres, from the photospheres to the coronae, of M stars that are the host stars of known exoplanets. For each model we are computing the emergent radiation at all wavelengths that are critical for assessing photochemistry and mass loss from exoplanet atmospheres. In particular, we are computing the stellar extreme ultraviolet radiation that drives hydrodynamic mass loss from exoplanet atmospheres and is essential for determining whether an exoplanet is habitable. The model atmospheres are computed with the SSRPM radiative transfer/statistical equilibrium code developed by Dr. Juan Fontenla. The code solves for the non-LTE statistical equilibrium populations of 18,538 levels of 52 atomic and ion species and computes the radiation from all species (435,986 spectral lines) and about 20,000,000 spectral lines of 20 diatomic species. The first model computed in this program was for the modestly active M1.5 V star GJ 832 by Fontenla et al. (ApJ 830, 152 (2016)). We will report on a preliminary model for the more active M5 V star GJ 876 and compare this model and its emergent spectrum with GJ 832. In the future, we will compute and intercompare semi-empirical models and spectra for all of the stars observed with the HST MUSCLES Treasury Survey, the Mega-MUSCLES Treasury Survey, and additional stars including Proxima Cen and Trappist-1. This multiyear theory program is supported by a grant from the Space Telescope Science Institute.

  3. Computational mass spectrometry for small molecules

    Science.gov (United States)

    2013-01-01

    The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound by searching a reference spectral library to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review briefly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline. PMID:23453222
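
    As one concrete piece of the pipeline sketched above, molecular-formula identification by isotope pattern analysis reduces to comparing a measured pattern against theoretical patterns of candidate formulas. A minimal sketch follows, assuming unit-resolution peaks and rough textbook abundances; real tools model exact masses, fine isotope structure, and mass accuracy.

        import numpy as np

        # Unit-resolution isotope distributions (abundances of M, M+1, M+2);
        # rough textbook values, for illustration only.
        ISOTOPES = {"C": [0.9893, 0.0107], "H": [0.99988, 0.00012],
                    "N": [0.99636, 0.00364], "O": [0.99757, 0.00038, 0.00205]}

        def isotope_pattern(formula, n_peaks=4):
            """Convolve per-atom distributions into the molecular isotope pattern."""
            pattern = np.array([1.0])
            for element, count in formula.items():
                for _ in range(count):
                    pattern = np.convolve(pattern, ISOTOPES[element])
            pattern = pattern[:n_peaks]
            return pattern / pattern.sum()

        def pattern_score(measured, theoretical):
            """Simple similarity score: 1 minus half the L1 distance."""
            n = min(len(measured), len(theoretical))
            return 1.0 - 0.5 * np.abs(measured[:n] - theoretical[:n]).sum()

        glucose = {"C": 6, "H": 12, "O": 6}
        print(isotope_pattern(glucose))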

  4. The impact of mass gatherings and holiday traveling on the course of an influenza pandemic: a computational model.

    Science.gov (United States)

    Shi, Pengyi; Keskinocak, Pinar; Swann, Julie L; Lee, Bruce Y

    2010-12-21

    During the 2009 H1N1 influenza pandemic, concerns arose about the potential negative effects of mass public gatherings and travel on the course of the pandemic. Better understanding the potential effects of temporal changes in social mixing patterns could help public officials determine if and when to cancel large public gatherings or enforce regional travel restrictions, advisories, or surveillance during an epidemic. We develop a computer simulation model using detailed data from the state of Georgia to explore how various changes in social mixing and contact patterns, representing mass gatherings and holiday traveling, may affect the course of an influenza pandemic. Various scenarios with different combinations of the length of the mass gatherings or traveling period (range: 0.5 to 5 days), the proportion of the population attending the mass gathering events or on travel (range: 1% to 50%), and the initial reproduction numbers R0 (1.3, 1.5, 1.8) are explored. Mass gatherings that occur within 10 days before the epidemic peak can result in as high as a 10% relative increase in the peak prevalence and the total attack rate, and may have even worse impacts on local communities and travelers' families. Holiday traveling can lead to a second epidemic peak under certain scenarios. Conversely, mass traveling or gatherings may have little effect when occurring much earlier or later than the epidemic peak, e.g., more than 40 days earlier or 20 days later than the peak when the initial R0 = 1.5. Our results suggest that monitoring, postponing, or cancelling large public gatherings may be warranted close to the epidemic peak but not earlier or later during the epidemic. Influenza activity should also be closely monitored for a potential second peak if holiday traveling occurs when prevalence is high.
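
    The temporal mixing changes studied above can be captured in a compartmental model by letting the contact rate rise during the gathering window. Below is a minimal SEIR sketch with a time-varying transmission rate; all parameter values are illustrative and are not taken from the Georgia model.

        import numpy as np
        from scipy.integrate import solve_ivp

        BETA0, SIGMA, GAMMA = 0.30, 1 / 2.0, 1 / 4.0   # base transmission, 1/latent, 1/infectious
        GATHERING = (40.0, 42.0)                       # a 2-day mass gathering (days)
        BOOST = 3.0                                    # contact-rate multiplier during it

        def beta(t):
            return BETA0 * (BOOST if GATHERING[0] <= t <= GATHERING[1] else 1.0)

        def seir(t, y):
            s, e, i, r = y
            new_inf = beta(t) * s * i
            return [-new_inf, new_inf - SIGMA * e, SIGMA * e - GAMMA * i, GAMMA * i]

        y0 = [1 - 1e-4, 0.0, 1e-4, 0.0]
        sol = solve_ivp(seir, (0, 200), y0, max_step=0.25)   # small steps resolve the boost
        peak_day = sol.t[np.argmax(sol.y[2])]
        print(f"peak prevalence {sol.y[2].max():.4f} on day {peak_day:.0f}")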

  5. Heavy quark effective theory computation of the mass of the bottom quark

    Energy Technology Data Exchange (ETDEWEB)

    Della Morte, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik]; Garron, N.; Sommer, R. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]; Papinutto, M. [INFN Sezione di Roma Tre, Rome (Italy)]

    2006-10-15

    We present a fully non-perturbative computation of the mass of the b-quark in the quenched approximation. Our strategy starts from the matching of HQET to QCD in a finite volume and finally relates the quark mass to the spin-averaged mass of the B_s meson in HQET. All steps include the terms of order Λ²/m_b. We discuss the computation and renormalization of correlation functions at order 1/m_b. With the strange quark mass fixed from the Kaon mass and the QCD scale set through r_0 = 0.5 fm, we obtain a renormalization group invariant mass M_b = 6.758(86) GeV, or m̄_b(m̄_b) = 4.347(48) GeV in the MS-bar scheme. The uncertainty in the computed Λ²/m_b terms contributes little to the total error, and Λ³/m_b² terms are negligible. The strategy is promising for full QCD as well as for other B-physics observables. (orig.)

  6. Computing K and D meson masses with N_f=2+1+1 twisted mass lattice QCD

    International Nuclear Information System (INIS)

    Baron, Remi; Blossier, Benoit; Boucaud, Philippe

    2010-05-01

    We discuss the computation of the mass of the K and D mesons within the framework of N_f = 2+1+1 twisted mass lattice QCD from a technical point of view. These quantities are essential, already at the level of generating gauge configurations, being obvious candidates to tune the strange and charm quark masses to their physical values. In particular, we address the problems related to the twisted mass flavor and parity symmetry breaking, which arise when considering a non-degenerate (c,s) doublet. We propose and verify the consistency of three methods to extract the K and D meson masses in this framework. (orig.)

  7. Old star clusters: Bench tests of low mass stellar models

    Directory of Open Access Journals (Sweden)

    Salaris M.

    2013-03-01

    Old star clusters in the Milky Way and external galaxies have been (and still are) traditionally used to constrain the age of the universe and the timescales of galaxy formation. A parallel avenue of old star cluster research considers these objects as bench tests of low-mass stellar models. This short review will highlight some recent tests of stellar evolution models that make use of photometric and spectroscopic observations of resolved old star clusters. In some cases these tests have pointed to additional physical processes efficient in low-mass stars that are not routinely included in model computations. Moreover, recent results from the Kepler mission about the old open cluster NGC 6791 are adding new tight constraints to the models.

  8. Modelling of heat and mass transfer processes in neonatology

    Energy Technology Data Exchange (ETDEWEB)

    Ginalski, Maciej K [FLUENT Europe, Sheffield Business Park, Europa Link, Sheffield S9 1XU (United Kingdom)]; Nowak, Andrzej J [Institute of Thermal Technology, Silesian University of Technology, Konarskiego 22, 44-100 Gliwice (Poland)]; Wrobel, Luiz C [School of Engineering and Design, Brunel University, Uxbridge UB8 3PH (United Kingdom)], E-mail: maciej.ginalski@ansys.com, E-mail: Andrzej.J.Nowak@polsl.pl, E-mail: luiz.wrobel@brunel.ac.uk

    2008-09-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass transfer mechanisms taking place in medical devices such as incubators and oxygen hoods. This includes novel mathematical developments giving rise to a supplementary model, entitled infant heat balance module, which has been fully integrated with the CFD solver and its graphical interface. The numerical simulations are validated through comparison tests with experimental results from the medical literature. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and the improved design of medical devices.

  9. Modelling of heat and mass transfer processes in neonatology

    International Nuclear Information System (INIS)

    Ginalski, Maciej K; Nowak, Andrzej J; Wrobel, Luiz C

    2008-01-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass transfer mechanisms taking place in medical devices such as incubators and oxygen hoods. This includes novel mathematical developments giving rise to a supplementary model, entitled infant heat balance module, which has been fully integrated with the CFD solver and its graphical interface. The numerical simulations are validated through comparison tests with experimental results from the medical literature. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and the improved design of medical devices.

  10. Evolution models of helium white dwarf--main-sequence star merger remnants: the mass distribution of single low-mass white dwarfs

    OpenAIRE

    Zhang, Xianfei; Hall, Philip D.; Jeffery, C. Simon; Bi, Shaolan

    2017-01-01

    It is not known how single white dwarfs with masses less than 0.5Msolar -- low-mass white dwarfs -- are formed. One way in which such a white dwarf might be formed is after the merger of a helium-core white dwarf with a main-sequence star that produces a red giant branch star and fails to ignite helium. We use a stellar-evolution code to compute models of the remnants of these mergers and find a relation between the pre-merger masses and the final white dwarf mass. Combining our results with ...

  11. Isotopic analysis of plutonium by computer controlled mass spectrometry

    International Nuclear Information System (INIS)

    1974-01-01

    Isotopic analysis of plutonium chemically purified by ion exchange is achieved using a thermal ionization mass spectrometer. Data acquisition from, and control of, the instrument is performed automatically with a dedicated system computer in real time, with subsequent automatic data reduction and reporting. Separation of isotopes is achieved by varying the ion-accelerating high voltage under accurate computer control.

  12. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on the intensification of heat and mass exchange performed by the National Research Centre ''Kurchatov Institute'' over the past years. Recently, many designs of heat and mass transfer intensifier grids have been proposed. NRC ''Kurchatov Institute'' has accomplished a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of these experimental investigations can be used in the verification of computational models and codes. On the basis of the experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated into the subchannel code SC-INT.

  13. The exact mass-gaps of the principal chiral models

    CERN Document Server

    Hollowood, Timothy J

    1994-01-01

    An exact expression for the mass-gap, the ratio of the physical particle mass to the $\Lambda$-parameter, is found for the principal chiral sigma models associated to all the classical Lie algebras. The calculation is based on a comparison of the free energy in the presence of a source coupling to a conserved charge of the theory computed in two ways: via the thermodynamic Bethe ansatz from the exact scattering matrix, and directly in perturbation theory. The calculation provides a non-trivial test of the form of the exact scattering matrix.

  14. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
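
    A minimal 1-D sketch of the slope-limiting idea described above: wherever the local bed slope exceeds the friction angle, material is shifted downslope so the limiting slope is restored while total mass is conserved. The iterative relaxation below is an illustration, not the Telemac-Mascaret implementation.

        import numpy as np

        PHI = np.radians(30.0)   # internal friction angle
        DX = 1.0                 # horizontal spacing between bed nodes (m)

        def avalanche(z, max_iter=1000):
            """Relax a 1-D bed profile until no interface slope exceeds tan(PHI).
            Moving half the excess height difference conserves total mass."""
            z = z.astype(float).copy()
            limit = np.tan(PHI) * DX
            for _ in range(max_iter):
                dz = np.diff(z)                    # height difference per interface
                steep = np.abs(dz) > limit
                if not steep.any():
                    break
                idx = np.flatnonzero(steep)
                excess = 0.5 * (np.abs(dz[idx]) - limit) * np.sign(dz[idx])
                z[idx] += excess                   # raise/lower the upslope node...
                z[idx + 1] -= excess               # ...and compensate downslope
            return z

        bed = np.array([0.0, 0.1, 2.5, 2.6, 0.2, 0.0])
        print(avalanche(bed))                      # total volume is unchanged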

  15. Upper Higgs boson mass bounds from a chirally invariant lattice Higgs-Yukawa Model

    Energy Technology Data Exchange (ETDEWEB)

    Gerhold, P. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; John von Neumann-Institut fuer Computing NIC/DESY, Zeuthen (Germany)]; Jansen, K. [John von Neumann-Institut fuer Computing NIC/DESY, Zeuthen (Germany)]

    2010-02-15

    We establish the cutoff-dependent upper Higgs boson mass bound by means of direct lattice computations in the framework of a chirally invariant lattice Higgs-Yukawa model emulating the same chiral Yukawa coupling structure as in the Higgs-fermion sector of the Standard Model. As expected from the triviality picture of the Higgs sector, we observe the upper mass bound to decrease with rising cutoff parameter Λ. Moreover, the strength of the fermionic contribution to the upper mass bound is explored by comparing to the corresponding analysis in the pure φ⁴-theory. (orig.)

  16. EVOLUTION, NUCLEOSYNTHESIS, AND YIELDS OF AGB STARS AT DIFFERENT METALLICITIES. III. INTERMEDIATE-MASS MODELS, REVISED LOW-MASS MODELS, AND THE ph-FRUITY INTERFACE

    Energy Technology Data Exchange (ETDEWEB)

    Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D. [INAF-Osservatorio Astronomico di Collurania, I-64100 Teramo (Italy)]

    2015-08-15

    We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (−2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the 22Ne(α,n)25Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).

  17. A quasi-particle model for computational nuclei

    International Nuclear Information System (INIS)

    Boal, D.H.; Glosli, J.N.

    1988-03-01

    A model Hamiltonian is derived which provides a computationally efficient means of representing nuclei. The Hamiltonian includes both Coulomb and isospin-dependent terms, and incorporates antisymmetrization effects through a momentum-dependent potential. Unlike many other classical or semiclassical models, the nuclei of this simulation have a well-defined ground state with a non-vanishing ⟨p²⟩. It is shown that the binding energies per nucleon and r.m.s. radii of these ground states are close to the measured values over a wide mass range.

  18. Computer Aided Detection of Breast Masses in Digital Tomosynthesis

    National Research Council Canada - National Science Library

    Singh, Swatee; Lo, Joseph

    2008-01-01

    The purpose of this study was to investigate feasibility of computer-aided detection of masses and calcification clusters in breast tomosynthesis images and obtain reliable estimates of sensitivity...

  19. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  20. Computing K and D meson masses with N_f=2+1+1 twisted mass lattice QCD

    NARCIS (Netherlands)

    Baron, Remi; Boucaud, Philippe; Carbonell, Jaume; Drach, Vincent; Farchioni, Federico; Herdoiza, Gregorio; Jansen, Karl; Michael, Chris; Montvay, Istvan; Pallante, Elisabetta; Pene, Olivier; Reker, Siebren; Urbach, Carsten; Wagner, Marc; Wenger, Urs

    We discuss the computation of the mass of the K and D mesons within the framework of N_f = 2+1+1 twisted mass lattice QCD from a technical point of view. These quantities are essential, already at the level of generating gauge configurations, being obvious candidates to tune the strange and

  1. Evolution models of helium white dwarf-main-sequence star merger remnants: the mass distribution of single low-mass white dwarfs

    Science.gov (United States)

    Zhang, Xianfei; Hall, Philip D.; Jeffery, C. Simon; Bi, Shaolan

    2018-02-01

    It is not known how single white dwarfs with masses less than 0.5Msolar -- low-mass white dwarfs -- are formed. One way in which such a white dwarf might be formed is after the merger of a helium-core white dwarf with a main-sequence star that produces a red giant branch star and fails to ignite helium. We use a stellar-evolution code to compute models of the remnants of these mergers and find a relation between the pre-merger masses and the final white dwarf mass. Combining our results with a model population, we predict that the mass distribution of single low-mass white dwarfs formed through this channel spans the range 0.37 to 0.5Msolar and peaks between 0.45 and 0.46Msolar. Helium white dwarf--main-sequence star mergers can also lead to the formation of single helium white dwarfs with masses up to 0.51Msolar. In our model the Galactic formation rate of single low-mass white dwarfs through this channel is about 8.7×10⁻³ yr⁻¹. Comparing our models with observations, we find that the majority of single low-mass white dwarfs (<0.5Msolar) are formed from helium white dwarf--main-sequence star mergers, at a rate which is about 2 per cent of the total white dwarf formation rate.

  2. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.
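
    A toy illustration of the scoring step that underlies the database search methods reviewed above: a measured fragment spectrum is compared with a predicted spectrum by counting shared peaks within a mass tolerance. Real search engines use far richer probabilistic scores; everything below is illustrative.

        def shared_peak_score(measured, predicted, tol=0.02):
            """Fraction of predicted m/z peaks matched by a measured peak within tol (Da)."""
            matched = 0
            for mz in predicted:
                # a linear scan keeps the sketch short; sorted peaks + binary
                # search would be the efficient choice
                if any(abs(mz - m) <= tol for m in measured):
                    matched += 1
            return matched / len(predicted)

        spectrum = [147.11, 175.12, 262.15, 333.19, 397.23]
        candidate = [147.113, 262.148, 333.187, 420.20]
        print(shared_peak_score(spectrum, candidate))   # 0.75: 3 of 4 peaks matched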

  3. Development of a miniaturized mass-flow meter for an axial flow blood pump based on computational analysis.

    Science.gov (United States)

    Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-09-01

    In order to monitor the condition of patients with implantable left ventricular assist systems (LVAS), it is important to measure pump flow rate continuously and noninvasively. However, it is difficult to measure the pump flow rate, especially in an implantable axial flow blood pump, because the power consumption has neither linearity nor uniqueness with regard to the pump flow rate. In this study, a miniaturized mass-flow meter for discharged patients with an implantable axial blood pump was developed on the basis of computational analysis, and was evaluated in in-vitro tests. The mass-flow meter makes use of centrifugal force produced by the mass-flow rate around a curved cannula. An optimized design was investigated by use of computational fluid dynamics (CFD) analysis. On the basis of the computational analysis, a miniaturized mass-flow meter made of titanium alloy was developed. A strain gauge was adopted as a sensor element. The first strain gauge, attached to the curved area, measured both static pressure and centrifugal force. The second strain gauge, attached to the straight area, measured static pressure. By subtracting the output of the second strain gauge from the output of the first strain gauge, the mass-flow rate was determined. In in-vitro tests using a model circulation loop, the mass-flow meter was compared with a conventional flow meter. Measurement error was less than ±0.5 L/min and average time delay was 0.14 s. We confirmed that the miniaturized mass-flow meter could accurately measure the mass-flow rate continuously and noninvasively.
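
    The two-gauge arrangement described above subtracts the static-pressure signal from the combined pressure-plus-centrifugal signal, and the flow rate follows from the centrifugal term, which grows with flow squared. A minimal sketch under an idealized calibration; the quadratic relation and the constant are illustrative, not the device's actual calibration.

        import numpy as np

        K_CAL = 0.8   # gauge signal (mV) per (L/min)^2, from a bench calibration -- illustrative

        def mass_flow(v_curved_mv, v_straight_mv):
            """Estimate flow from the two strain-gauge outputs: the straight-section
            gauge senses static pressure only, while the curved-section gauge senses
            static pressure plus the centrifugal load."""
            centrifugal = v_curved_mv - v_straight_mv
            return np.sqrt(max(centrifugal, 0.0) / K_CAL)

        print(mass_flow(12.3, 4.1))   # -> flow in L/min under the assumed calibration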

  4. Application research of computational mass-transfer differential equation in MBR concentration field simulation.

    Science.gov (United States)

    Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan

    2016-01-01

    After conducting intensive research on the distribution of the fluid's velocity and biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of the mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The solution proceeds as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, calculate this model by direct numerical simulation to get the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and use the Seidel iteration method to solve the equation model; last but not least, substitute the real factory data into the velocity and concentration field models to calculate simulation results, and use the visualization software Tecplot to display the results. Finally, analysis of the nephogram of the COD concentration distribution shows that the simulation result conforms to the distribution rule of the COD concentration in a real membrane pool, and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper have reference value for the design optimization of real MBR systems.
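
    The third step above couples a precomputed velocity field to a steady advection-diffusion equation for COD and solves it by Seidel (Gauss-Seidel) iteration. A minimal 2-D sketch on a uniform grid with a first-order upwind convective term follows; the geometry, velocity field, and coefficients are illustrative placeholders.

        import numpy as np

        NX, NY, H = 40, 20, 0.1        # grid size and spacing (m) -- illustrative
        D = 1e-3                        # effective diffusivity (m^2/s)
        U = np.full((NY, NX), 0.01)     # x-velocity from the CFD step (m/s), uniform here
        C = np.zeros((NY, NX))
        C[:, 0] = 100.0                 # inlet COD concentration (mg/L)

        for sweep in range(500):        # Seidel sweeps: update in place, reuse new values
            for j in range(1, NY - 1):
                for i in range(1, NX - 1):
                    # steady advection-diffusion, first-order upwind in x:
                    #   U dC/dx = D * laplacian(C), solved pointwise for C[j, i]
                    a_dif = 4.0 * D / H**2
                    a_up = U[j, i] / H
                    nbrs = C[j, i + 1] + C[j, i - 1] + C[j + 1, i] + C[j - 1, i]
                    C[j, i] = (D / H**2 * nbrs + a_up * C[j, i - 1]) / (a_dif + a_up)
            C[:, -1] = C[:, -2]         # zero-gradient outlet
        print(C[NY // 2, ::8])          # centreline concentration profile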

  5. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    Science.gov (United States)

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.

  6. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  7. Improved methods for computing masses from numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (qq̄ and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.
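
    The standard route to such masses is the large-time decay of a two-point correlation function, C(t) ~ A e^(-mt). A minimal sketch of an effective-mass estimate on synthetic correlator data follows; the numbers are invented for illustration only.

        import numpy as np

        # Synthetic correlator: ground state plus an excited-state contamination.
        t = np.arange(16)
        C = 1.0 * np.exp(-0.45 * t) + 0.3 * np.exp(-0.90 * t)

        # The effective mass m_eff(t) = log(C(t) / C(t+1)) plateaus at the
        # ground-state mass once the excited states have died away.
        m_eff = np.log(C[:-1] / C[1:])
        for ti, m in enumerate(m_eff):
            print(f"t = {ti:2d}  m_eff = {m:.4f}")   # -> approaches 0.45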

  8. Running of radiative neutrino masses: the scotogenic model — revisited

    Energy Technology Data Exchange (ETDEWEB)

    Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)]

    2015-11-23

    A few years ago, it had been shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated via quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular as regards the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to get information on the values of the fundamental parameters when inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, thereby relating its low- and high-energy phenomenology, to fully exploit the available information.

  9. Development and validation of a mass casualty conceptual model.

    Science.gov (United States)

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.

  10. Value of radio density determined by enhanced computed tomography for the differential diagnosis of lung masses

    International Nuclear Information System (INIS)

    Xie, Min

    2011-01-01

    Lung masses are often difficult to differentiate when their clinical symptoms and shapes or densities on computed tomography images are similar. However, with different pathological contents, they may appear differently on plain and enhanced computed tomography. Objectives: To determine the value of enhanced computed tomography for the differential diagnosis of lung masses based on the differences in radio density with and without enhancement. Patients and Methods: Thirty-six patients with lung cancer, 36 with pulmonary tuberculosis and 10 with inflammatory lung pseudo tumors diagnosed by computed tomography and confirmed by pathology in our hospital were selected. The mean ± SD radio densities of lung masses in the three groups of patients were calculated based on the results of plain and enhanced computed tomography. Results: There were no significant differences in the radio densities of the masses detected by plain computed tomography among patients with inflammatory lung pseudo tumors, tuberculosis and lung cancer (P > 0.05). However, there were significant differences (P < 0.01) between all the groups in terms of radio densities of masses detected by enhanced computed tomography. Conclusions: The radio densities of lung masses detected by enhanced computed tomography could potentially be used to differentiate between lung cancer, pulmonary tuberculosis and inflammatory lung pseudo tumors.

  11. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling to study plant cell walls, the source of biofuels and biomaterials, and applies quantum mechanical models of chemical and electronic properties and processes to reduce conversion barriers.

  12. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  13. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
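
    A minimal sketch of the emulator idea described above: fit a cheap surrogate to an ensemble of expensive model runs, then evaluate the posterior against the surrogate instead of the simulator. The 1-D toy problem, Gaussian-process emulator, and all values below are illustrative, not the paper's density functional theory setup.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)

        def simulator(theta):
            """Stand-in for an expensive physics model eta(theta)."""
            return np.sin(3.0 * theta) + 0.5 * theta

        # Ensemble of model runs at design points -> train the emulator.
        design = np.linspace(0, 2, 12).reshape(-1, 1)
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(
            design, simulator(design.ravel()))

        # Noisy "measurement", generated at a known truth for the demo.
        theta_true, sigma = 1.3, 0.05
        y_obs = simulator(theta_true) + rng.normal(0.0, sigma)

        # Posterior on a grid with a flat prior on [0, 2]: cheap, because it
        # only queries the emulator, never the simulator.
        grid = np.linspace(0, 2, 400).reshape(-1, 1)
        log_like = -0.5 * ((y_obs - gp.predict(grid)) / sigma) ** 2
        post = np.exp(log_like - log_like.max())
        print("posterior mode:", grid[np.argmax(post), 0])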

  14. Portable, remotely operated, computer-controlled, quadrupole mass spectrometer for field use

    International Nuclear Information System (INIS)

    Friesen, R.D.; Newton, J.C.; Smith, C.F.

    1982-04-01

    A portable, remote-controlled mass spectrometer was required at the Nevada Test Site to analyze prompt post-event gas from the nuclear cavity in support of the underground testing program. A Balzers QMG-511 quadrupole was chosen for its ability to be interfaced to a DEC LSI-11 computer and to withstand the ground movement caused by this field environment. The inlet system valves, the pumps, the pressure and temperature transducers, and the quadrupole mass spectrometer are controlled by a read-only-memory-based DEC LSI-11/2 with a high-speed microwave link to the control point which is typically 30 miles away. The computer at the control point is a DEC LSI-11/23 running the RSX-11 operating system. The instrument was automated as much as possible because the system is run by inexperienced operators at times. The mass spectrometer has been used on an initial field event with excellent performance. The gas analysis system is described, including automation by a novel computer control method which reduces operator errors and allows dynamic access to the system parameters

  15. Relativistic mean-field mass models

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Arteaga, D.; Goriely, S.; Chamel, N. [Universite Libre de Bruxelles, Institut d'Astronomie et d'Astrophysique, CP-226, Brussels (Belgium)]

    2016-10-15

    We present a new effort to develop viable mass models within the relativistic mean-field approach with density-dependent meson couplings, separable pairing and microscopic estimations for the translational and rotational correction energies. Two interactions, DD-MEB1 and DD-MEB2, are fitted to essentially all experimental masses, and also to charge radii and infinite nuclear matter properties as determined by microscopic models using realistic interactions. While DD-MEB1 includes the σ, ω and ρ meson fields, DD-MEB2 also considers the δ meson. Both mass models describe the 2353 experimental masses with a root mean square deviation of about 1.1 MeV and the 882 measured charge radii with a root mean square deviation of 0.029 fm. In addition, we show that the Pb isotopic shifts and moments of inertia are rather well reproduced, and the equation of state in pure neutron matter as well as symmetric nuclear matter are in relatively good agreement with existing realistic calculations. Both models predict a maximum neutron-star mass of more than 2.6 solar masses, and thus are able to accommodate the heaviest neutron stars observed so far. However, the new Lagrangians, like all previously determined RMF models, present the drawback of being characterized by a low effective mass, which leads to strong shell effects due to the strong coupling between the spin-orbit splitting and the effective mass. Complete mass tables have been generated and a comparison with other mass models is presented. (orig.)

  16. Fermion mass hierarchies in low-energy supergravity and superstring models

    International Nuclear Information System (INIS)

    Binetruy, P.

    1995-01-01

    We investigate the problem of the fermion mass hierarchy in supergravity models with flat directions of the scalar potential associated with some gauge singlet moduli fields. The low-energy Yukawa couplings are non-trivial homogeneous functions of the moduli, and a geometric constraint between them plays, in a large class of models, a crucial role in generating hierarchies. Explicit examples are given for no-scale type supergravity models. The Yukawa couplings are dynamical variables at low energy, to be determined by a minimization process which amounts to fixing ratios of the moduli fields. The Minimal Supersymmetric Standard Model is studied and the constraints needed on the parameters in order to have a top quark much heavier than the other fermions are worked out. The bottom mass is explicitly computed and shown to be compatible with the experimental data for a large region of the parameter space. (orig.)

  17. Nonuniversal gaugino masses from nonsinglet F-terms in nonminimal unified models

    International Nuclear Information System (INIS)

    Martin, Stephen P.

    2009-01-01

    In phenomenological studies of low-energy supersymmetry, running gaugino masses are often taken to be equal near the scale of apparent gauge coupling unification. However, many known mechanisms can avoid this universality, even in models with unified gauge interactions. One example is an F-term vacuum expectation value that is a singlet under the standard model gauge group but transforms nontrivially in the symmetric product of two adjoint representations of a group that contains the standard model gauge group. Here, I compute the ratios of gaugino masses that follow from F-terms in nonsinglet representations of SO(10) and E6 and their subgroups, extending well-known results for SU(5).

  18. Computing K and D meson masses with N_f=2+1+1 twisted mass lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Baron, Remi [CEA, Centre de Saclay, 91 - Gif-sur-Yvette (France). IRFU/Service de Physique Nucleaire]; Blossier, Benoit; Boucaud, Philippe [Paris XI Univ., 91 - Orsay (FR). Lab. de Physique Theorique] (and others)

    2010-05-15

    We discuss the computation of the mass of the K and D mesons within the framework of N_f = 2+1+1 twisted mass lattice QCD from a technical point of view. These quantities are essential, already at the level of generating gauge configurations, being obvious candidates to tune the strange and charm quark masses to their physical values. In particular, we address the problems related to the twisted mass flavor and parity symmetry breaking, which arise when considering a non-degenerate (c,s) doublet. We propose and verify the consistency of three methods to extract the K and D meson masses in this framework. (orig.)

  19. Two Quarantine Models on the Attack of Malicious Objects in Computer Network

    Directory of Open Access Journals (Sweden)

    Bimal Kumar Mishra

    2012-01-01

    SEIQR (Susceptible, Exposed, Infectious, Quarantined, and Recovered) models for the transmission of malicious objects with simple mass action incidence and standard incidence rate in computer networks are formulated. Thresholds, equilibria, and their stability are discussed for the simple mass action incidence and standard incidence rate. Global stability and asymptotic stability of the endemic equilibrium for simple mass action incidence have been shown. With the help of the Poincaré-Bendixson property, asymptotic stability of the endemic equilibrium for the standard incidence rate has been shown. Numerical methods have been used to solve and simulate the system of differential equations. The effect of quarantine on recovered nodes is analyzed. We have also analyzed the behavior of the susceptible, exposed, infected, quarantined, and recovered nodes in the computer network.
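
    A minimal sketch of an SEIQR system of the kind formulated above, with simple mass-action incidence; the rate constants are illustrative, and the paper's threshold and stability analysis is not reproduced here.

        from scipy.integrate import solve_ivp

        BETA, SIGMA, DELTA, GAMMA_I, GAMMA_Q = 0.4, 0.25, 0.2, 0.1, 0.15

        def seiqr(t, y):
            s, e, i, q, r = y
            infection = BETA * s * i                  # simple mass-action incidence
            return [-infection,
                    infection - SIGMA * e,            # exposed become infectious
                    SIGMA * e - (DELTA + GAMMA_I) * i,  # infectious: quarantined or recover
                    DELTA * i - GAMMA_Q * q,          # quarantined nodes recover
                    GAMMA_I * i + GAMMA_Q * q]        # recovered pool

        y0 = [0.99, 0.0, 0.01, 0.0, 0.0]
        sol = solve_ivp(seiqr, (0, 300), y0, max_step=1.0)
        print("final recovered fraction:", round(sol.y[4, -1], 3))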

  20. Critical assessment of nuclear mass models

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1992-01-01

    Some of the physical assumptions underlying various nuclear mass models are discussed. The ability of different mass models to predict new masses that were not taken into account when the models were formulated and their parameters determined is analyzed. The models are also compared with respect to their ability to describe nuclear-structure properties in general. The analysis suggests future directions for mass-model development

  1. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellous bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency, formation balance ... However, the formation balance was responsible for the greater part of the total mass loss.

  2. Energy, mass, model-based displays, and memory recall

    International Nuclear Information System (INIS)

    Beltracchi, L.

    1989-01-01

    The operation of a pressurized water reactor in the context of the conservation laws for energy and mass is discussed. These conservation laws are the basis of the Rankine heat engine cycle. Computer graphic implementation of the heat engine cycle, in terms of temperature-entropy coordinates for water, serves as a model-based display of the plant process. A human user of this display, trained in first principles of the process, may exercise a monitoring strategy based on the conservation laws

  3. The exact mass-gap of the supersymmetric O(N) sigma model

    CERN Document Server

    Evans, J M; Evans, Jonathan M; Hollowood, Timothy J

    1995-01-01

    A formula for the mass-gap of the supersymmetric O(N) sigma model (N>4) in two dimensions is derived: m/\Lambda_{\overline{\rm MS}} = 2^{2\Delta}\sin(\pi\Delta)/(\pi\Delta), where \Delta = 1/(N-2) and m is the mass of the fundamental vector particle in the theory. This result is obtained by comparing two expressions for the free-energy density in the presence of a coupling to a conserved charge; one expression is computed from the exact S-matrix of Shankar and Witten via the thermodynamic Bethe ansatz and the other is computed using conventional perturbation theory. These calculations provide a stringent test of the S-matrix, showing that it correctly reproduces the universal part of the beta-function and resolving the problem of CDD ambiguities.
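
    For concreteness, the quoted mass-gap formula is straightforward to evaluate numerically; the snippet below simply checks the expression above at a few values of N.

        import numpy as np

        def mass_gap_ratio(N):
            """m / Lambda_MSbar for the supersymmetric O(N) sigma model, N > 4."""
            delta = 1.0 / (N - 2)
            return 2.0 ** (2 * delta) * np.sin(np.pi * delta) / (np.pi * delta)

        for N in (5, 6, 8, 16):
            print(N, round(mass_gap_ratio(N), 4))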

  4. Modelling Mass Casualty Decontamination Systems Informed by Field Exercise Data

    Directory of Open Access Journals (Sweden)

    Richard Amlôt

    2012-10-01

    In the event of a large-scale chemical release in the UK, decontamination of ambulant casualties would be undertaken by the Fire and Rescue Service (FRS). The aim of this study was to track the movement of volunteer casualties at two mass decontamination field exercises using passive Radio Frequency Identification tags and detection mats that were placed at pre-defined locations. The exercise data were then used to inform a computer model of the FRS component of the mass decontamination process. Once casualties had removed all clothing and showered, re-dressing (termed re-robing) was found to be a bottleneck in the mass decontamination process during both exercises. Computer simulations showed that increasing the capacity of each lane of the re-robe section to accommodate 10 rather than five casualties would be optimal in general, but that a capacity of 15 might be required to accommodate vulnerable individuals. If the duration of the shower was decreased from three minutes to one minute, then a per-lane re-robe capacity of 20 might be necessary to maximise the throughput of casualties. In conclusion, one practical enhancement to the FRS response may be to provide at least one additional re-robe section per mass decontamination unit.
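
    A minimal sketch of the throughput question examined above: a discrete-event-style simulation of shower and re-robe stages in which a limited number of re-robe slots throttles the flow. Service times and the arrival pattern are illustrative, not the exercise data.

        import heapq
        import random

        random.seed(3)
        SHOWER_MIN, REROBE_MIN, N_CASUALTIES = 3.0, 12.0, 200

        def time_to_clear(rerobe_slots):
            """Push casualties through shower then re-robe; re-robe has a fixed
            number of parallel slots, shower capacity is unlimited in this sketch."""
            free_at = [0.0] * rerobe_slots            # next-free time of each slot
            heapq.heapify(free_at)
            arrival, last_done = 0.0, 0.0
            for _ in range(N_CASUALTIES):
                arrival += random.expovariate(2.0)    # ~one casualty per half minute
                shower_out = arrival + SHOWER_MIN
                start = max(shower_out, heapq.heappop(free_at))
                finish = start + REROBE_MIN
                heapq.heappush(free_at, finish)
                last_done = max(last_done, finish)
            return last_done

        for slots in (5, 10, 15, 20):
            print(slots, "slots ->", round(time_to_clear(slots), 1), "min to clear")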

  5. Phantoms and computational models in therapy, diagnosis and protection

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    The development of realistic body phantoms and computational models is strongly dependent on the availability of comprehensive human anatomical data. This information is often missing, incomplete or not easily available. Therefore, emphasis is given in the Report to organ and body masses and geometries. The influence of age, sex and ethnic origins in human anatomy is considered. Suggestions are given on how suitable anatomical data can be either extracted from published information or obtained from measurements on the local population. Existing types of phantoms and computational models used with photons, electrons, protons and neutrons are reviewed in this Report. Specifications of those considered important to the maintenance and development of reliable radiation dosimetry and measurement are given. The information provided includes a description of the phantom or model, together with diagrams or photographs and physical dimensions. The tissues within body sections are identified and the tissue substitutes used or recommended are listed. The uses of the phantom or model in radiation dosimetry and measurement are outlined. The Report deals predominantly with phantom and computational models representing the human anatomy, with a short Section devoted to animal phantoms in radiobiology

  6. Finite element model for heat conduction in jointed rock masses

    International Nuclear Information System (INIS)

    Gartling, D.K.; Thomas, R.K.

    1981-01-01

    A computational procedure for simulating heat conduction in a fractured rock mass is proposed and illustrated in the present paper. The method makes use of a simple local model for conduction in the vicinity of a single open fracture. The distributions of fractures and fracture properties within the finite element model are based on a statistical representation of geologic field data. Fracture behavior is included in the finite element computation by locating local, discrete fractures at the element integration points.

  7. Introduction to numerical modeling of thermohydrologic flow in fractured rock masses

    International Nuclear Information System (INIS)

    Wang, J.S.Y.

    1980-01-01

    More attention is being given to the possibility of nuclear waste isolation in hard rock formations. The waste will generate heat which raises the temperature of the surrounding fractured rock masses and induces buoyancy flow and pressure change in the fluid. These effects introduce the potential hazard of radionuclides being carried to the biosphere, and affect the structure of a repository by stress changes in the rock formation. The thermohydrological and thermomechanical responses are determined by the fractures as well as the intact rock blocks. The capability of modeling fractured rock masses is essential to site characterization and repository evaluation. The fractures can be modeled either as a discrete system, taking into account the detailed fracture distributions, or as a continuum representing the spatial average of the fractures. A numerical model is characterized by the governing equations, the numerical methods, the computer codes, the validations, and the applications. These elements of the thermohydrological models are discussed. Along with the general review, some of the considerations in modeling fractures are also discussed. Some remarks on the research needs in modeling fractured rock mass conclude the paper

  8. Computer-aided classification of breast masses using contrast-enhanced digital mammograms

    Science.gov (United States)

    Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin

    2018-02-01

    By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a new promising imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme of CEDM images to classify between malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentation of DES and LE images, the areas under the ROC curves were 0.7585+/-0.0526 and 0.7534+/-0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477+/-0.0376. Because the subtraction of overlapping breast tissue over lesions in DES images significantly improved segmentation accuracy as compared to regular mammograms, the study demonstrated that computer-aided classification of breast masses using CEDM images yields higher performance.
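
    A minimal sketch of the classification stage, assuming synthetic stand-in data for the study's 109 features and omitting the correlation-based feature subset evaluator; it pairs a scikit-learn multilayer perceptron with the leave-one-case-out cross-validation named above:

      import numpy as np
      from sklearn.model_selection import LeaveOneOut
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.metrics import roc_auc_score

      # Synthetic stand-in: 111 cases (as in the study) x 109 features.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(111, 109))
      y = rng.integers(0, 2, size=111)  # 0 = benign, 1 = malignant

      scores = np.empty(len(y))
      for train, test in LeaveOneOut().split(X):
          clf = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(16,),
                                            max_iter=500, random_state=0))
          clf.fit(X[train], y[train])
          scores[test] = clf.predict_proba(X[test])[:, 1]

      print("AUC =", roc_auc_score(y, scores))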

  9. Introduction to computer control and future aspects in thermal ionisation mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hagemann, R. [CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1978-12-15

    The author considers the computer control of the measurement program which is already available in modern mass spectrometers. Future areas for computer control are considered e.g. the heating program, ion optics and focusing, and sample changer control.

  10. Research on heat and mass transfer model for passive containment cooling system

    International Nuclear Information System (INIS)

    Jiang Xiaowei; Yu Hongxing; Sun Yufa; Huang Daishun

    2013-01-01

    Unlike traditional dry containment designs without external cooling, the PCCS design increases the temperature difference between the wall and the containment atmosphere significantly, and the absolute temperature of the containment surfaces will be lower, affecting properties relevant to the condensation process. This paper investigates the heat and mass transfer model, especially improvements to the condensation and evaporation model in the presence of noncondensable gases. First, Peterson's diffusion layer model was proved to be equivalent to the stagnant film model adopted by the CONTAIN code, using the Clausius-Clapeyron equation; then a factor applicable to the stagnant film model was derived from a comparison between Y. Liao's generalized diffusion layer model and Peterson's diffusion layer model. Finally, the model in the CONTAIN code used to compute the condensation and evaporation mass flux was modified using this factor, and the Wisconsin condensation tests and the Westinghouse film evaporation tests on a heated plate were simulated, which proved that the improved model predicts heat and mass transfer coefficients closer to the experimental values than the original model. (authors)
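
    For orientation, the classical stagnant-film (Stefan flow) result on which such diffusion layer models are built gives the condensation mass flux across a film of thickness \delta as

      \dot{m}'' = \frac{\rho D}{\delta}\,\ln\!\left(\frac{1 - X_{v,i}}{1 - X_{v,b}}\right)

    where X_{v,b} and X_{v,i} are the vapor mole fractions in the bulk gas and at the liquid interface, and \rho and D are the gas-mixture density and vapor diffusivity; X_{v,i} follows from the interface temperature through the saturation line, which is where a Clausius-Clapeyron linearization of the kind used in the paper's equivalence proof enters. This is the textbook film expression, not the paper's modified CONTAIN correlation; the correction factor described above multiplies a flux of this form.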

  11. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  12. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    Science.gov (United States)

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…
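
    As a taste of the kind of problem described, here is a minimal Python (rather than MATLAB) rendering of a transient stirred-tank mass balance whose numerical solution can be checked against the analytical one; all parameter values are illustrative:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Transient mass balance for a stirred tank with first-order consumption:
      #   V dC/dt = Q (C_in - C) - k V C
      Q, V, k, C_in = 1.0, 10.0, 0.2, 5.0   # assumed illustrative values

      sol = solve_ivp(lambda t, C: Q/V*(C_in - C) - k*C, (0, 60), [0.0])
      b = Q/V + k
      analytic = (Q*C_in/(Q + k*V)) * (1 - np.exp(-b*60))
      print(sol.y[0, -1], analytic)   # numerical vs analytical at t = 60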

  14. The exact mass-gap of the supersymmetric CP$^{N-1}$ sigma model

    CERN Document Server

    Evans, J M; Evans, Jonathan M; Hollowood, Timothy J

    1995-01-01

    A formula for the mass-gap of the supersymmetric $\mathbb{CP}^{n-1}$ sigma model ($n > 1$) in two dimensions is derived: $m/\Lambda_{\overline{\rm MS}} = \sin(\pi\Delta)/(\pi\Delta)$, where $\Delta = 1/n$ and $m$ is the mass of the fundamental particle multiplet. This result is obtained by comparing two expressions for the free-energy density in the presence of a coupling to a conserved charge; one expression is computed from the exact S-matrix of Köberle and Kurak via the thermodynamic Bethe ansatz and the other is computed using conventional perturbation theory. These calculations provide a stringent test of the S-matrix, showing that it correctly reproduces the universal part of the beta-function and resolving the problem of CDD ambiguities.
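
    The quoted mass-gap formula is straightforward to evaluate; the short check below simply tabulates sin(pi*Delta)/(pi*Delta) for a few n and shows the ratio approaching 1 as n grows:

      import numpy as np

      # m / Lambda_MSbar = sin(pi*Delta) / (pi*Delta), with Delta = 1/n
      for n in (2, 3, 4, 10, 100):
          delta = 1.0 / n
          print(n, np.sin(np.pi*delta) / (np.pi*delta))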

  15. The mass spectrum of the Schwinger model with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Cyprus Univ., Nicosia (Cyprus). Dept. of Physics

    2013-07-15

    We show the feasibility of tensor network solutions for lattice gauge theories in Hamiltonian formulation by applying matrix product states algorithms to the Schwinger model with zero and non-vanishing fermion mass. We introduce new techniques to compute excitations in a system with open boundary conditions, and to identify the states corresponding to low momentum and different quantum numbers in the continuum. For the ground state and both the vector and scalar mass gaps in the massive case, the MPS technique attains precisions comparable to the best results available from other techniques.

  16. Reconsideration of mass-distribution models

    Directory of Open Access Journals (Sweden)

    Ninković S.

    2014-01-01

    The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case; only one of them is the subject of the present study. The study focuses on the relation between the density ratio (the central density to that at the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which occurs when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension makes it possible to associate an infinite density ratio with a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the possibility of having two, or more, groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Project of the Ministry of Science of the Republic of Serbia, No. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]

  17. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable to operate in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  18. Research of compression strength of fissured rock mass

    Directory of Open Access Journals (Sweden)

    А. Г. Протосеня

    2017-03-01

    The article examines a method of forecasting strength properties and their scale effect in a fissured rock mass using computational modelling with the finite element method in the ABAQUS software. It shows the advantages of this approach for determining the mechanical properties of a fissured rock mass, the main stages of creating a computational geomechanical model of the rock mass, and the conduct of a numerical experiment. The article presents relations between the deformation of the numerical model under loading, the inclination angle of the main fracture system, the uniaxial and biaxial compression strength, and the sample size of the fissured rock mass, for the conditions of the apatite-nepheline deposit at Plateau Rasvumchorr of OAO «Apatit» in the Kirovsky region of Murmansk oblast. We conducted computational modelling of tests of rock mass blocks on discontinuities, based on a real experiment, using the non-linear Barton-Bandis shear strength criterion, and compared the results of the computational experiments with data from field studies and laboratory tests. The calculated results match the laboratory results for fissured rock mass samples well.
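
    The Barton-Bandis-type peak shear strength invoked in the abstract is commonly written as tau = sigma_n tan(phi_r + JRC log10(JCS/sigma_n)); a small Python helper with purely illustrative joint parameters (site values would come from the field and laboratory characterization mentioned above):

      import numpy as np

      def barton_bandis_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
          # Peak shear strength of a rock joint (Barton's criterion):
          #   tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n))
          angle_deg = phi_r_deg + jrc * np.log10(jcs / sigma_n)
          return sigma_n * np.tan(np.radians(angle_deg))

      # Illustrative values only (stresses in MPa, angles in degrees).
      print(barton_bandis_shear_strength(sigma_n=2.0, jrc=8.0, jcs=100.0,
                                         phi_r_deg=28.0))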

  19. ELEMENT MASSES IN THE CRAB NEBULA

    Energy Technology Data Exchange (ETDEWEB)

    Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.; Vanderveer, Steven J.; MacAlpine, Gordon M. [Department of Physics and Astronomy, Trinity University, San Antonio, TX 78212 (United States)

    2016-10-01

    Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii]  λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.

  20. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system, including tens of CPUs and network attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 Processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focusing on the High Performance Interconnect technology, details will be provided about the HIPPI-based 'Backplane' configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces

  1. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    Science.gov (United States)

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

    A fundamental challenge in the quantitation of biomolecules for cancer biomarker discovery is the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios. Applied to the real data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed
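
    As a toy stand-in for the purification idea, the sketch below uses non-negative matrix factorization from scikit-learn, rather than the authors' probabilistic topic models, to recover mixture proportions from synthetic two-source intensity data:

      import numpy as np
      from sklearn.decomposition import NMF

      # Peak-intensity matrix: rows = samples, columns = features (peaks).
      # A nonnegative factorization X ~ W H plays the role of a topic model:
      # H holds "pure source" profiles, W the per-sample mixture weights.
      rng = np.random.default_rng(0)
      sources = rng.gamma(2.0, size=(2, 500))         # e.g. cancer / contaminant
      weights = rng.dirichlet([5, 2], size=30)        # true mixture proportions
      X = weights @ sources + rng.gamma(0.1, size=(30, 500))

      W = NMF(n_components=2, init="nndsvda", max_iter=500,
              random_state=0).fit_transform(X)
      proportions = W / W.sum(axis=1, keepdims=True)  # inferred proportions
      print(proportions[:5])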

  2. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  3. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    Science.gov (United States)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) offers built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir, with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  4. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  5. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    International Nuclear Information System (INIS)

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository and its implementation through the development of a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)
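
    A minimal sketch of the source-term half of such a model, assuming a single nuclide and invented rate constants; the semi-analytical transport solution below the waste is not reproduced here:

      import numpy as np

      # First-order leaching of inventory M with simultaneous radioactive decay:
      #   dM/dt = -(k_leach + lam) M,  release flux J(t) = k_leach * M(t)
      k_leach, lam = 1e-3, 2e-4        # 1/yr, assumed illustrative constants
      t = np.linspace(0.0, 5000.0, 501)
      M0 = 1.0                          # initial inventory (normalized)
      J = k_leach * M0 * np.exp(-(k_leach + lam) * t)
      print(J[0], J[-1])   # source-term flux fed to the transport solution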

  6. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  7. Vectors into the Future of Mass and Interpersonal Communication Research: Big Data, Social Media, and Computational Social Science.

    Science.gov (United States)

    Cappella, Joseph N

    2017-10-01

    Simultaneous developments in big data, social media, and computational social science have set the stage for how we think about and understand interpersonal and mass communication. This article explores some of the ways that these developments generate 4 hypothetical "vectors" - directions - into the next generation of communication research. These vectors include developments in network analysis, modeling interpersonal and social influence, recommendation systems, and the blurring of distinctions between interpersonal and mass audiences through narrowcasting and broadcasting. The methods and research in these arenas are occurring in areas outside the typical boundaries of the communication discipline but engage classic, substantive questions in mass and interpersonal communication.

  8. Fuzzy cluster quantitative computations of component mass transfer in rocks or minerals

    International Nuclear Information System (INIS)

    Liu Dezheng

    2000-01-01

    The author advances a new quantitative method for computing component mass transfer, based on the closure property of component mass percentages in rocks or minerals. Using fuzzy dynamic cluster analysis, calculating the restored closure difference, determining the type of difference, and assisted by relevant diagnostic parameters, the method gradually screens out the true constant components. Then, the true mass percentages and the mass transfer quantities of components of metasomatic rocks or minerals are calculated by applying the true constant component fixed coefficient. This method is called the true constant component fixed (TCF) method
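
    The final fixed-coefficient step admits a compact illustration in the style of classical immobile-element (Gresens/isocon) mass balance; the fuzzy-cluster screening that selects the constant component is not reproduced, and the oxide values are invented:

      def mass_transfer(c_parent, c_altered, immobile):
          # Rescale the altered-rock analysis so the chosen immobile ("true
          # constant") component is conserved, then report the gain or loss of
          # each component relative to the parent rock (wt% units throughout).
          f = c_parent[immobile] / c_altered[immobile]   # mass-change factor
          return {k: f * c_altered[k] - c_parent[k] for k in c_parent}

      parent  = {"SiO2": 62.0, "Al2O3": 15.0, "K2O": 3.0}   # illustrative
      altered = {"SiO2": 58.0, "Al2O3": 16.5, "K2O": 5.2}
      print(mass_transfer(parent, altered, immobile="Al2O3"))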

  9. A computation method for mass flowrate predictions in critical flows of initially subcooled liquid in long channels

    International Nuclear Information System (INIS)

    Celata, G.P.; D'Annibale, F.; Farello, G.E.

    1985-01-01

    A fast and accurate computation method is suggested for predicting the mass flowrate in critical flows of initially subcooled liquid from 'long' discharge channels (high L/D values). Starting from a very simple correlation previously proposed by the authors, further improvements to the model extend the method's reliability up to initial saturation conditions. A comparison of computed values with 145 experimental data points from several investigations carried out at the Heat Transfer Laboratory (TERM/ISP, ENEA Casaccia) shows excellent agreement. The deviation of the computed values from the experimental ones is within ±10% for almost all data, increasing slightly towards low inlet subcoolings. The average error, for all the considered data, is 4.6%

  10. Computational electrochemo-fluid dynamics modeling in a uranium electrowinning cell

    International Nuclear Information System (INIS)

    Kim, K.R.; Choi, S.Y.; Kim, S.H.; Shim, J.B.; Paek, S.; Kim, I.T.

    2014-01-01

    A computational electrochemo-fluid dynamics model has been developed to describe the electrowinning behavior in an electrolyte stream through a planar electrode cell system. Electrode reaction of the uranium electrowinning process from a molten-salt electrolyte stream was modeled to illustrate the details of the flow-assisted mass transport of ions to the cathode. This modeling approach makes it possible to represent variations of the convective diffusion limited current density by taking into account the concentration profile at the electrode surface as a function of the flow characteristics and applied current density in a commercially available computational fluid dynamics platform. It was possible to predict the conventional current-voltage relation in addition to details of electrolyte fluid dynamics and electrochemical variables, such as the flow field, species concentrations, potential, and current distributions throughout the galvanostatic electrolysis cell. (author)
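
    For orientation, the convective-diffusion-limited current density takes the generic form i_lim = n F k_m c_b; the sketch below obtains k_m from a laminar flat-plate Sherwood correlation, which is only a stand-in for the actual cell geometry and molten-salt properties of the study:

      import numpy as np

      def limiting_current_density(n, c_bulk, d, nu, u, length):
          # i_lim = n F k_m c_b, with the mass-transfer coefficient k_m from
          # a laminar flat-plate correlation Sh = 0.664 Re^(1/2) Sc^(1/3).
          F = 96485.0                      # C/mol
          re = u * length / nu
          sc = nu / d
          sh = 0.664 * np.sqrt(re) * sc**(1.0/3.0)
          k_m = sh * d / length
          return n * F * k_m * c_bulk

      # Illustrative numbers only (c_bulk in mol/m^3, SI units elsewhere);
      # molten-salt properties differ substantially from these.
      print(limiting_current_density(n=3, c_bulk=200.0, d=1e-9, nu=1e-6,
                                     u=0.05, length=0.1))  # A/m^2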

  11. A Computational Drug Metabolite Detection Using the Stable Isotopic Mass-Shift Filtering with High Resolution Mass Spectrometry in Pioglitazone and Flurbiprofen

    Directory of Open Access Journals (Sweden)

    Yohei Miyamoto

    2013-09-01

    The identification of metabolites in drug discovery is important. At present, radioisotopes and mass spectrometry are both widely used. However, rapid and comprehensive identification is still laborious and difficult. In this study, we developed new analytical software and employed a stable isotope as a tool to identify drug metabolites using mass spectrometry. A deuterium-labeled compound and a non-labeled compound were both metabolized in human liver microsomes and analyzed by liquid chromatography/time-of-flight mass spectrometry (LC-TOF-MS). We computationally aligned the two different MS data sets and filtered ions having a specific mass-shift equal to the mass of the labeled isotopes between those data sets, using our own software. For pioglitazone and flurbiprofen, eight and four metabolites, respectively, were identified with calculations of mass and formulas and chemical structural fragmentation analysis. With high resolution MS, the approach became more accurate. The approach detected two unexpected metabolites of pioglitazone, i.e., the hydroxypropanamide form and the aldehyde hydrolysis form, which other approaches such as metabolite-biotransformation list matching and mass defect filtering could not detect. We demonstrated that the approach using computational alignment and stable isotopic mass-shift filtering is able to identify drug metabolites and is useful in drug discovery.
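
    The core filtering step reduces to searching two peak lists for pairs separated by the labeled-isotope mass shift; a minimal Python sketch (singly charged ions and toy peak lists assumed; this is not the authors' software):

      import numpy as np

      M_SHIFT = 2.014102 - 1.007825   # mass difference D - H, ~1.006277 u

      def mass_shift_pairs(mz_unlabeled, mz_labeled, n_labels, tol=0.005):
          # Return (i, j) index pairs whose m/z values differ by n_labels
          # deuterium-for-hydrogen substitutions (singly charged ions assumed).
          pairs = []
          for i, a in enumerate(mz_unlabeled):
              for j, b in enumerate(mz_labeled):
                  if abs((b - a) - n_labels * M_SHIFT) <= tol:
                      pairs.append((i, j))
          return pairs

      # Toy peak lists from non-labeled and d3-labeled incubations.
      print(mass_shift_pairs([357.13, 410.09], [360.15, 410.09], n_labels=3))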

  12. Computational model of collagen turnover in carotid arteries during hypertension.

    Science.gov (United States)

    Sáez, P; Peña, E; Tarbell, J M; Martínez, M A

    2015-02-01

    It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models dealing with the mechano-chemical approach are scarce. The present approach can be extended easily to study other aspects of bone remodeling or collagen degradation in heart diseases. The model can be divided into three different stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amount of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in collagen content. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  14. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this work proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  15. State of the art of numerical modeling of thermohydrologic flow in fractured rock mass

    International Nuclear Information System (INIS)

    Wang, J.S.Y.; Tsang, C.F.; Sterbentz, R.A.

    1983-01-01

    The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses is reviewed and a comparative study is made of several models which have been developed in nuclear waste isolation, geothermal energy, ground-water hydrology, petroleum engineering, and other geologic fields. The general review is followed by separate summaries of the main characteristics of the governing equations, numerical solutions, computer codes, validations, and applications for each model

  16. Advanced computational modeling for in vitro nanomaterial dosimetry.

    Science.gov (United States)

    DeLoid, Glen M; Cohen, Joel M; Pyrgiotakis, Georgios; Pirela, Sandra V; Pal, Anoop; Liu, Jiying; Srebric, Jelena; Demokritou, Philip

    2015-10-24

    Accurate and meaningful dose metrics are a basic requirement for in vitro screening to assess potential health risks of engineered nanomaterials (ENMs). Correctly and consistently quantifying what cells "see" during an in vitro exposure requires standardized preparation of stable ENM suspensions, accurate characterization of agglomerate sizes and effective densities, and predictive modeling of mass transport. Earlier transport models provided a marked improvement over administered concentration or total mass, but included assumptions that could produce sizable inaccuracies, most notably that all particles at the bottom of the well are adsorbed or taken up by cells, which would drive transport downward, resulting in overestimation of deposition. Here we present development, validation and results of two robust computational transport models. Both three-dimensional computational fluid dynamics (CFD) and a newly-developed one-dimensional Distorted Grid (DG) model were used to estimate delivered dose metrics for industry-relevant metal oxide ENMs suspended in culture media. Both models allow simultaneous modeling of full size distributions for polydisperse ENM suspensions, and provide deposition metrics as well as concentration metrics over the extent of the well. The DG model also emulates the biokinetics at the particle-cell interface using a Langmuir isotherm, governed by a user-defined dissociation constant, K(D), and allows modeling of ENM dissolution over time. Dose metrics predicted by the two models were in remarkably close agreement. The DG model was also validated by quantitative analysis of flash-frozen, cryosectioned columns of ENM suspensions. Results of simulations based on agglomerate size distributions differed substantially from those obtained using mean sizes. The effect of cellular adsorption on delivered dose was negligible for K(D) values consistent with non-specific binding (> 1 nM), whereas smaller values (≤ 1 nM) typical of specific high-affinity binding...
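
    As a greatly reduced illustration of what such transport models compute, the sketch below estimates a sedimentation-only delivered fraction from Stokes' law; the CFD and DG models add diffusion, full size distributions, dissolution and the K(D)-governed Langmuir boundary, all omitted here, and every number is invented:

      import numpy as np

      def stokes_settling_velocity(d_agg, rho_eff, rho_med=1000.0, mu=9.6e-4):
          # v = g * (rho_eff - rho_med) * d^2 / (18 mu); SI units throughout.
          g = 9.81
          return g * (rho_eff - rho_med) * d_agg**2 / (18.0 * mu)

      d_agg, rho_eff = 250e-9, 1600.0   # assumed agglomerate size and density
      v = stokes_settling_velocity(d_agg, rho_eff)
      L, t = 3e-3, 24 * 3600.0          # media depth (m), 24 h exposure
      print(f"v = {v:.2e} m/s, fraction deposited ~ {min(1.0, v*t/L):.2f}")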

  17. New limits on the mass of neutral Higgses in general models

    International Nuclear Information System (INIS)

    Comelli, D.

    1996-07-01

    In general electroweak models with weakly coupled (and otherwise arbitrary) Higgs sector there always exists in the spectrum a scalar state with mass controlled by the electroweak scale. A new and simple recipe to compute an analytical tree-level upper bound on the mass of this light scalar is given. We compare this new bound with similar ones existing in the literature and show how to extract extra information on heavier neutral scalars in the spectrum from the interplay of independent bounds. Production of these states at future colliders is addressed and the implications for the decoupling limit in which only one Higgs is expected to remain light are discussed. (orig.)

  18. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  19. Relating masses and mixing angles. A model-independent model

    Energy Technology Data Exchange (ETDEWEB)

    Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)

    2016-07-01

    In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.
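
    A classic concrete instance of this idea is the Gatto-Sartori-Tonin-type relation, which gives the Cabibbo angle from the down-type quark mass ratio alone; whether this particular relation is the one realized in the authors' construction is not stated in the abstract, but it illustrates the numerics:

      import numpy as np

      # theta_C ~ arctan(sqrt(m_d / m_s)) from the mass ratio alone.
      m_d, m_s = 4.7, 93.4          # MeV, indicative running-mass values
      theta_c = np.arctan(np.sqrt(m_d / m_s))
      print(np.degrees(theta_c), np.sin(theta_c))   # ~13 deg, |V_us| ~ 0.22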

  20. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)
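
    Both methods lean on the transverse mass; for reference, its standard definition and the Jacobian edge near the W mass in a short Python check (illustrative kinematics, not thesis data):

      import numpy as np

      def transverse_mass(pt_lep, pt_miss, dphi):
          # W transverse mass from the charged-lepton pT and missing ET:
          #   m_T = sqrt(2 pT^l pT^miss (1 - cos(dphi)))
          return np.sqrt(2.0 * pt_lep * pt_miss * (1.0 - np.cos(dphi)))

      # A 40 GeV muon back-to-back with 40 GeV missing ET gives m_T ~ 80 GeV,
      # i.e. near the Jacobian edge at the W mass.
      print(transverse_mass(40.0, 40.0, np.pi))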

  2. Macroparticle model for longitudinal emittance growth caused by negative mass instability in a proton synchrotron

    CERN Document Server

    MacLachlan, J A

    2004-01-01

    Both theoretical models and beam observations of negative mass instability fall short of a full description of the dynamics and the dynamical effects. Clarification by numerical modeling is now practicable because of the recent proliferation of so-called computing farms. The results of modeling reported in this paper disagree with some predictions based on a long-standing linear perturbation calculation. Validity checks on the macroparticle model are described.

  3. Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors

    Science.gov (United States)

    Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.

    2008-05-01

    This study aims to incorporate the most recent multiphase models in order to investigate the hydrodynamic behavior of a TBR in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model, resulting from the volume averaging of the continuity and momentum equations, was developed and solved for a 3D representation of the catalytic bed. The computational fluid dynamics (CFD) model predicts hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.

  4. The Teaching of Mass Communication Through the use of Computer ...

    African Journals Online (AJOL)

    Mass communication as a programme in education is an important subject in the training of students. Here, we determined the effects of improving the teaching of the subject in a tertiary institution like Cross River University of Technology through the use of computer assisted picture presentation. The study was ...

  5. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    DEFF Research Database (Denmark)

    Mazzoni, Alberto; Linden, Henrik; Cuntz, Hermann

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local f...... in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo....

  6. Modeling and Simulation of Variable Mass, Flexible Structures

    Science.gov (United States)

    Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.

    2009-01-01

    distribution of mass in the fuel tank or Solid Rocket Booster (SRB) case for various propellant levels. Based on the mass consumed by the liquid engine or SRB, the appropriate propellant model is coupled with the dry structure model for the stage. Then using vehicle configuration data, the integrated vehicle model is assembled and operated on by the constant system shape functions. The system mode shapes and frequencies can then be computed from the resulting generalized mass and stiffness matrices for that mass configuration. The rigid body mass properties of the vehicle are derived from the integrated vehicle model. The coupling terms between the vehicle rigid body motion and elastic deformation are also updated from the constant system shape functions and the integrated vehicle model. This approach was first used to analyze variable mass spinning beams and then prototyped into a generic dynamics simulation engine. The resulting code was tested against Crew Launch Vehicle (CLV-)class problems worked in the TREETOPS simulation package and by Wilson [2]. The Ares I System Integration Laboratory (SIL) is currently being developed at the Marshall Space Flight Center (MSFC) to test vehicle avionics hardware and software in a hardware-in-the-loop (HWIL) environment and certify that the integrated system is prepared for flight. The Ares I SIL utilizes the Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) tool to simulate the launch vehicle and stimulate avionics hardware. Due to the presence of vehicle control system filters and the thrust oscillation suppression system, which are tuned to the structural characteristics of the vehicle, ARTEMIS must incorporate accurate structural models of the Ares I launch vehicle. The ARTEMIS core dynamics simulation models the highly coupled nature of the vehicle flexible body dynamics, propellant slosh, and vehicle nozzle inertia effects combined with mass and flexible body properties that vary significant with time

  7. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The overhead crane system consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. With the help of the differential equations of motion of the specified mechanisms, derived through the Lagrange equation of the second kind, it is possible to build an overhead crane computer model. The computer model was obtained using Matlab software. Transients of coordinate, linear speed and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload swaying were obtained with respect to the vertical axis. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is a useful means for studying positioning control and anti-sway control systems.
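
    A much-reduced planar trolley-plus-pendulum version of such a model (Python in place of Matlab, crane parameters invented) shows the kind of equations the Lagrange derivation produces and the sway transients mentioned:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Simplified planar trolley-plus-pendulum payload model:
      # x = trolley position, th = payload sway angle from the vertical.
      m_t, m_p, l, g = 500.0, 1000.0, 10.0, 9.81   # kg, kg, m, m/s^2 (assumed)

      def rhs(t, s, F=2000.0):                     # F = constant drive force, N
          x, dx, th, dth = s
          sin, cos = np.sin(th), np.cos(th)
          # Equations of motion for a cart with a suspended point-mass payload.
          ddx = (F + m_p * sin * (l * dth**2 + g * cos)) / (m_t + m_p * sin**2)
          ddth = -(ddx * cos + g * sin) / l
          return [dx, ddx, dth, ddth]

      sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
      print(f"max sway angle = {np.degrees(np.abs(sol.y[2]).max()):.2f} deg")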

  8. A Literature Review of Computers and Pedagogy for Journalism and Mass Communication Education.

    Science.gov (United States)

    Hoag, Anne M.; Bhattacharya, Sandhya; Helsel, Jeffrey; Hu, Yifeng; Lee, Sangki; Kim, Jinhee; Kim, Sunghae; Michael, Patty Wharton; Park, Chongdae; Sager, Sheila S.; Seo, Sangho; Stark, Craig; Yeo, Benjamin

    2003-01-01

    Notes that a growing body of scholarship on computers and pedagogy encompasses a broad range of topics. Focuses on research judged to have implications within journalism and mass communication education. Discusses literature which considers computer use in course design and teaching, student attributes in a digital learning context, the role of…

  9. Improved metastability bounds on the standard model Higgs mass

    CERN Document Server

    Espinosa, J R; Espinosa, J R; Quiros, M

    1995-01-01

    Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model at finite (and zero) temperature can have a deep and unphysical stable minimum \\langle \\phi(T)\\rangle at values of the field much larger than G_F^{-1/2}. We have computed absolute lower bounds on M_H, as a function of M_t, imposing the condition of no decay by thermal fluctuations, or quantum tunnelling, to the stable minimum. Our effective potential at zero temperature includes all next-to-leading logarithmic corrections (making it extremely scale-independent), and we have used pole masses for the Higgs-boson and top-quark. Thermal corrections to the effective potential include plasma effects by one-loop ring resummation of Debye masses. All calculations, including the effective potential and the bubble nucleation rate, are performed numerically and so the results do not rely on any kind of analytical approximation. Easy-to-use fits are provided for the benefit of the reader. Conclusions on the possi...

  10. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass, for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve, and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238 U, 230 Th, 239 Pu, 241 Am, and 244 Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface
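
    A hedged sketch of an LPL-shaped curve, continuous where the linear low-mass branch meets the power-law high-mass branch; the functional form and parameter names are schematic illustrations of the behavior described, not the paper's fitted equations:

      import numpy as np

      def lpl_efficiency(mass, eps0, a, b, m_c):
          # Efficiency falls linearly with sample mass below a crossover mass
          # m_c and as a power law above it (the two branches meet at m_c).
          mass = np.asarray(mass, dtype=float)
          thin = eps0 * (1.0 - a * mass)
          thick = eps0 * (1.0 - a * m_c) * (mass / m_c) ** (-b)
          return np.where(mass <= m_c, thin, thick)

      masses = np.array([0.05, 0.2, 0.5, 1.0, 3.0])   # g per planchette
      print(lpl_efficiency(masses, eps0=0.35, a=0.4, b=0.8, m_c=1.0))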

  11. The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses

    International Nuclear Information System (INIS)

    Wang, J.S.Y.; Sterbentz, R.A.; Tsang, C.F.

    1982-01-01

    The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses is reviewed and a comparative study is made of several models which have been developed in nuclear waste isolation, geothermal energy, ground water hydrology, petroleum engineering, and other geologic fields. The general review is followed by individual summaries of each model and the main characteristics of its governing equations, numerical solutions, computer codes, validations, and applications

  12. Development of a totally computer-controlled triple quadrupole mass spectrometer system

    International Nuclear Information System (INIS)

    Wong, C.M.; Crawford, R.W.; Barton, V.C.; Brand, H.R.; Neufeld, K.W.; Bowman, J.E.

    1983-01-01

    A totally computer-controlled triple quadrupole mass spectrometer (TQMS) is described. It has a number of unique features not available on current commercial instruments, including: complete computer control of source and all ion axial potentials; use of dual computers for data acquisition and data processing; and capability for self-adaptive control of experiments. Furthermore, it has been possible to produce this instrument at a cost significantly below that of commercial instruments. This triple quadrupole mass spectrometer has been constructed using components commercially available from several different manufacturers. The source is a standard Hewlett-Packard 5985B GC/MS source. The two quadrupole analyzers and the quadrupole CAD region contain Balzers QMA 150 rods with Balzers QMG 511 rf controllers for the analyzers and a Balzers QHS-511 controller for the CAD region. The pulsed-positive-ion-negative-ion-chemical ionization (PPINICI) detector is made by Finnigan Corporation. The mechanical and electronics design were developed at LLNL for linking these diverse elements into a functional TQMS as described. The computer design for total control of the system is unique in that two separate LSI-11/23 minicomputers and assorted I/O peripherals and interfaces from several manufacturers are used. The evolution of this design concept from totally computer-controlled instrumentation into future self-adaptive or ''expert'' systems for instrumental analysis is described. Operational characteristics of the instrument and initial results from experiments involving the analysis of the high explosive HMX (1,3,5,7-Tetranitro-1,3,5,7-Tetrazacyclooctane) are presented

  13. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

    We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.

  14. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  15. A two-dimensional, two-phase mass transport model for liquid-feed DMFCs

    International Nuclear Information System (INIS)

    Yang, W.W.; Zhao, T.S.

    2007-01-01

    A two-dimensional, isothermal two-phase mass transport model for a liquid-feed direct methanol fuel cell (DMFC) is presented in this paper. The two-phase mass transport in the anode and cathode porous regions is formulated based on the classical multiphase flow in porous media without invoking the assumption of constant gas pressure in the unsaturated porous medium flow theory. The two-phase flow behavior in the anode flow channel is modeled by utilizing the drift-flux model, while in the cathode flow channel the homogeneous mist-flow model is used. In addition, a micro-agglomerate model is developed for the cathode catalyst layer. The model also accounts for the effects of both methanol and water crossover through the membrane. The comprehensive model formed by integrating those in the different regions is solved numerically using a home-written computer code and validated against the experimental data in the literature. The model is then used to investigate the effects of various operating and structural parameters, such as methanol concentration, anode flow rate, porosities of both anode and cathode electrodes, the rate of methanol crossover, and the agglomerate size, on cell performance

  16. Development of a locally mass flux conservative computer code for calculating 3-D viscous flow in turbomachines

    Science.gov (United States)

    Walitt, L.

    1982-01-01

    The VANS successive approximation numerical method was extended to the computation of three dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was generated on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triplet point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow; and cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.

  17. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, clinical image segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, the ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  18. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    Smith, D.H.

    1982-08-01

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. The organization of the file structure, which is central to the entire concept, is discussed extensively with the help of numerous tables. Appendices contain flow charts and outline the file structure to help a programmer unfamiliar with the programs alter them with a minimum of lost time.

  19. Masses in the Weinberg-Salam model

    International Nuclear Information System (INIS)

    Flores, F.A.

    1984-01-01

    This thesis is a detailed discussion of the currently existing limits on the masses of Higgs scalars and fermions in the Weinberg-Salam model. The spontaneous breaking of the gauge symmetry of the model generates arbitrary masses for Higgs scalars and fermions, which for the known fermions have to be set to their experimentally measured values. In this thesis, the author discusses in detail both the theoretical and the experimental constraints on these otherwise arbitrary masses.

  20. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections, and the Duflo-Zuker mass formula. If the predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool.
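
    To make the notion of an extrapolation test concrete, the sketch below fits a simple liquid-drop-style binding-energy formula to one region of a synthetic data set and measures the error in a second, held-out region. It is a toy illustration only: the coefficients, noise level and nuclide selections are invented for the example and are not the data or the procedure used by the authors.

      import numpy as np

      def ldm_terms(Z, A):
          # Liquid-drop design matrix: volume, surface, Coulomb, asymmetry,
          # and a smooth pairing signature, for binding energies in MeV.
          N = A - Z
          delta = ((-1.0) ** Z + (-1.0) ** N) / (2.0 * A ** 0.5)
          return np.column_stack([A, -A ** (2.0 / 3.0), -Z * (Z - 1) / A ** (1.0 / 3.0),
                                  -(N - Z) ** 2 / A, delta])

      rng = np.random.default_rng(0)
      true_c = np.array([15.8, 18.3, 0.714, 23.2, 12.0])   # textbook-like coefficients

      # Synthetic "measured" binding energies in a light (fit) and heavy (test) region.
      Z_fit = rng.integers(20, 60, 300); A_fit = (2.1 * Z_fit).astype(int)
      Z_test = rng.integers(70, 90, 100); A_test = (2.5 * Z_test).astype(int)
      B_fit = ldm_terms(Z_fit, A_fit) @ true_c + rng.normal(0.0, 2.0, 300)
      B_test = ldm_terms(Z_test, A_test) @ true_c + rng.normal(0.0, 2.0, 100)

      # Fit on the light region, then extrapolate into the heavy region.
      c_hat, *_ = np.linalg.lstsq(ldm_terms(Z_fit, A_fit), B_fit, rcond=None)
      rms_fit = np.sqrt(np.mean((ldm_terms(Z_fit, A_fit) @ c_hat - B_fit) ** 2))
      rms_ext = np.sqrt(np.mean((ldm_terms(Z_test, A_test) @ c_hat - B_test) ** 2))
      print(f"RMS error: fitted region {rms_fit:.2f} MeV, extrapolation {rms_ext:.2f} MeV")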

  1. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on the model, for bit-level operations are useful for compositionally developing asynchronous and concurrent programs that make frequent use of bit-level operations. Examples include programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized for and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A novel aspect is that a carrier moving between two terminals can concisely express certain kinds of computation, such as synchronization and bidirectional communication. The model's properties make it well suited to compositional bit-level computation, since the uniform computation elements are sufficient to develop components with practical functionality. Through future application of the model, our research may enable further work on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  2. Mathematical modeling and computational prediction of cancer drug resistance.

    Science.gov (United States)

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues in successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational prediction using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses, or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic models of molecular networks, ordinary differential equation models of cellular dynamics, stochastic models, partial differential equation models, agent-based models, pharmacokinetic-pharmacodynamic models, etc.) and data-driven prediction methods (e.g. omics-data-based conventional screening approaches for node biomarkers, static network approaches for edge biomarkers and module biomarkers, dynamic network approaches for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several open questions and future directions for the use of mathematical modeling and computational prediction in studies of cancer drug resistance.

  3. MININR: a geochemical computer program for inclusion in water flow models - an application study

    Energy Technology Data Exchange (ETDEWEB)

    Felmy, A.R.; Reisenauer, A.E.; Zachara, J.M.; Gee, G.W.

    1984-02-01

    MININR is a reduced form of the computer program MINTEQ, which calculates equilibrium precipitation/dissolution of solid phases, aqueous speciation, adsorption, and gas phase equilibrium. The user-oriented features in MINTEQ were removed to reduce the size and increase the computational speed. MININR closely resembles the MINEQL computer program developed by Westall (1976). The main differences between MININR and MINEQL involve modifications to accept an initial starting mass of solid and necessary changes for linking with a water flow model. MININR, in combination with a simple water flow model which considers only dilution, was applied to a laboratory column packed with retorted oil shale and percolated with distilled water. Experimental and preliminary model simulation results are presented for the constituents K⁺, Na⁺, SO₄²⁻, Mg²⁺, Ca²⁺, CO₃²⁻ and pH.

  4. Higgs mass determination in supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Javier Pardo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151, Trieste (Italy); SISSA International School for Advanced Studies and INFN Trieste, Via Bonomea 265, 34136, Trieste (Italy); Villadoro, Giovanni [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151, Trieste (Italy)

    2015-07-29

    We present the state of the art of the effective field theory computation of the MSSM Higgs mass, improving on existing computations by including extra threshold corrections. We show that, with this approach, the theoretical uncertainty is within 1 GeV in most of the relevant parameter space. We confirm the smaller value of the Higgs mass found in the EFT computations, which implies a slightly heavier SUSY scale. We study the large tan β region, finding that sbottom thresholds might relax the upper bound on the scale of SUSY. We present SUSYHD, a fast computer code that computes the Higgs mass and its uncertainty for any SUSY scale, from the TeV to the Planck scale, even in Split SUSY, both in the DR-bar and in the on-shell schemes. Finally, we apply our results to derive bounds on some well-motivated SUSY models; in particular, we show how the value of the Higgs mass allows one to determine the complete spectrum in minimal gauge mediation.

  5. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India during December 17–19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained much attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.

  6. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  7. Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material

    Science.gov (United States)

    Upadhyay, Ashwani; Chandramohan, V. P.

    2018-04-01

    A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. A convective boundary condition is used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the temperature of the product. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved by the Gauss-Seidel method via iteration. Grid and time independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
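
    A one-dimensional Python sketch of this type of scheme is given below (the paper's code is MATLAB and treats the full brick): implicit-in-space diffusion relaxed by Gauss-Seidel sweeps, a convective surface condition for temperature, and a temperature-dependent moisture diffusivity. All property values, the Arrhenius-like diffusivity form, and the surface equilibrium moisture content are illustrative assumptions.

      import numpy as np

      # 1D analogue: implicit-in-space diffusion relaxed by Gauss-Seidel sweeps,
      # convective boundary at the exposed face, symmetry at the centre.
      nx, L, dt, nsteps = 21, 0.05, 1.0, 3600         # nodes, half-thickness (m), step (s), steps
      dx = L / (nx - 1)
      alpha, h, k = 5e-7, 25.0, 0.7                   # thermal diffusivity, film coeff., conductivity
      T_air, M_surf = 80.0, 0.05                      # drying-air temperature, surface moisture
      T = np.full(nx, 25.0)                           # initial product temperature (deg C)
      M = np.full(nx, 0.30)                           # initial moisture content (kg/kg)

      def moisture_diffusivity(T):
          # Assumed Arrhenius-like temperature dependence (illustrative form).
          return 1e-9 * np.exp(0.05 * (T - 25.0))

      for _ in range(nsteps):
          for field, diff in ((T, np.full(nx, alpha)), (M, moisture_diffusivity(T))):
              r = diff * dt / dx ** 2
              old = field.copy()
              for _ in range(30):                     # Gauss-Seidel sweeps per time step
                  field[0] = field[1]                 # symmetry plane at x = 0
                  for i in range(1, nx - 1):
                      field[i] = (old[i] + r[i] * (field[i - 1] + field[i + 1])) / (1 + 2 * r[i])
                  if field is T:                      # convective (Robin) surface condition
                      field[-1] = (field[-2] + h * dx / k * T_air) / (1 + h * dx / k)
                  else:                               # surface in equilibrium with the air
                      field[-1] = M_surf

      print(f"centre temperature {T[0]:.1f} degC, centre moisture {M[0]:.3f} kg/kg")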

  8. Neutrino Mass and Flavour Models

    International Nuclear Information System (INIS)

    King, Stephen F.

    2010-01-01

    We survey some of the recent promising developments in the search for the theory behind neutrino mass and tri-bimaximal mixing, and indeed all fermion masses and mixing. We focus in particular on models with discrete family symmetry and unification, and show how such models can also solve the SUSY flavour and CP problems. We also discuss the theoretical implications of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.

  9. Renormalization and radiative corrections to masses in a general Yukawa model

    Science.gov (United States)

    Fox, M.; Grimus, W.; Löschner, M.

    2018-01-01

    We consider a model with arbitrary numbers of Majorana fermion fields and real scalar fields φ_a, general Yukawa couplings and a ℤ4 symmetry that forbids linear and trilinear terms in the scalar potential. Moreover, fermions become massive only after spontaneous symmetry breaking of the ℤ4 symmetry by vacuum expectation values (VEVs) of the φ_a. Introducing the shifted fields h_a whose VEVs vanish, MS-bar renormalization of the parameters of the unbroken theory suffices to make the theory finite. However, in this way, beyond tree level it is necessary to perform finite shifts of the tree-level VEVs, induced by the finite parts of the tadpole diagrams, in order to ensure vanishing one-point functions of the h_a. Moreover, adapting the renormalization scheme to a situation with many scalars and VEVs, we consider the physical fermion and scalar masses as derived quantities, i.e. as functions of the coupling constants and VEVs. Consequently, the masses have to be computed order by order in a perturbative expansion. In this scheme, we compute the self-energies of fermions and bosons and show how to obtain the respective one-loop contributions to the tree-level masses. Furthermore, we discuss the modification of our results in the case of Dirac fermions and investigate, by way of an example, the effects of a flavor symmetry group.

  10. Porous media fluid flow, heat, and mass transport model with rock stress coupling

    International Nuclear Information System (INIS)

    Runchal, A.K.

    1980-01-01

    This paper describes the physical and mathematical basis of a general-purpose porous media flow model, GWTHERM. The mathematical basis of the model is obtained from the coupled set of the classical governing equations for mass, momentum and energy balance. These equations are embodied in a computational model which is then coupled externally to a linearly elastic rock-stress model. This coupling is rather exploratory and is based upon empirical correlations. The coupled model is able to take account of time-dependent, inhomogeneous and anisotropic features of the hydrogeologic, thermal and transport phenomena. A number of applications of the model have been made. Illustrations from the application of the model to nuclear waste repositories are included.

  11. In-silico oncology: an approximate model of brain tumor mass effect based on directly manipulated free form deformation

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Stefan; Mang, Andreas; Toma, Alina; Buzug, Thorsten M. [University of Luebeck (Germany). Institute of Medical Engineering

    2010-12-15

    The present work introduces a novel method for approximating mass effect of primary brain tumors. The spatio-temporal dynamics of cancerous cells are modeled by means of a deterministic reaction-diffusion equation. Diffusion tensor information obtained from a probabilistic diffusion tensor imaging atlas is incorporated into the model to simulate anisotropic diffusion of cancerous cells. To account for the expansive nature of the tumor, the computed net cell density of malignant cells is linked to a parametric deformation model. This mass effect model is based on the so-called directly manipulated free form deformation. Spatial correspondence between two successive simulation steps is established by tracking landmarks, which are attached to the boundary of the gross tumor volume. The movement of these landmarks is used to compute the new configuration of the control points and, hence, determines the resulting deformation. To prevent a deformation of rigid structures (i.e. the skull), fixed shielding landmarks are introduced. In a refinement step, an adaptive landmark scheme ensures a dense sampling of the tumor isosurface, which in turn allows for an appropriate representation of the tumor shape. The influence of different parameters on the model is demonstrated by a set of simulations. Additionally, simulation results are qualitatively compared to an exemplary set of clinical magnetic resonance images of patients diagnosed with high-grade glioma. Careful visual inspection of the results demonstrates the potential of the implemented model and provides first evidence that the computed approximation of tumor mass effect is sensible. The shape of diffusive brain tumors (glioblastoma multiforme) can be recovered and approximately matches the observations in real clinical data. (orig.)
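
    As a rough illustration of the reaction-diffusion component of such a model (and only that component: the DTI-based anisotropic diffusion tensor and the free-form-deformation mass-effect coupling are not reproduced here), the sketch below integrates a Fisher-Kolmogorov equation for net tumor cell density on a 2D grid with invented parameter values.

      import numpy as np

      # Fisher-Kolmogorov growth of net tumor cell density c on a periodic 2D grid:
      # dc/dt = D * laplacian(c) + rho * c * (1 - c)   (isotropic D for simplicity)
      n, D, rho, dt, dx = 64, 0.1, 0.05, 0.1, 1.0
      c = np.zeros((n, n))
      c[n // 2, n // 2] = 1.0                          # seed lesion at the centre

      for _ in range(2000):
          lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                 np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx ** 2
          c += dt * (D * lap + rho * c * (1.0 - c))

      # A deformation model would attach its landmarks to an isosurface of c,
      # e.g. the c = 0.5 contour bounding the gross tumor volume.
      print(f"fraction of domain above c = 0.5: {(c > 0.5).mean():.3f}")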

  12. On a family of (1+1)-dimensional scalar field theory models: Kinks, stability, one-loop mass shifts

    Energy Technology Data Exchange (ETDEWEB)

    Alonso-Izquierdo, A., E-mail: alonsoiz@usal.es [Departamento de Matematica Aplicada and IUFFyM, Universidad de Salamanca (Spain); Mateos Guilarte, J. [Departamento de Fisica Fundamental and IUFFyM, Universidad de Salamanca (Spain)

    2012-09-15

    In this paper we construct a one-parametric family of (1+1)-dimensional one-component scalar field theory models supporting kinks. Inspired by the sine-Gordon and φ⁴ models, we look at all possible extensions such that the kink second-order fluctuation operators are Schrödinger differential operators with Pöschl-Teller potential wells. In this situation, the associated spectral problem is solvable and therefore we shall succeed in analyzing the kink stability completely and in computing the one-loop quantum correction to the kink mass exactly. When the parameter is a natural number, the family becomes the hierarchy for which the potential wells are reflectionless, the first two levels of the hierarchy being the sine-Gordon and φ⁴ models. - Highlights: • We construct a family of scalar field theory models supporting kinks. • The second-order kink fluctuation operators involve Pöschl-Teller potential wells. • We compute the one-loop quantum correction to the kink mass with different methods.

  13. THE MASS DISTRIBUTION OF COMPANIONS TO LOW-MASS WHITE DWARFS

    International Nuclear Information System (INIS)

    Andrews, Jeff J.; Price-Whelan, Adrian M.; Agüeros, Marcel A.

    2014-01-01

    Measuring the masses of companions to single-line spectroscopic binary stars is (in general) not possible because of the unknown orbital plane inclination. Even when the mass of the visible star can be measured, only a lower limit can be placed on the mass of the unseen companion. However, since these inclination angles should be isotropically distributed, for a large enough, unbiased sample the companion mass distribution can be deconvolved from the distribution of observables. In this work, we construct a hierarchical probabilistic model to infer properties of unseen companion stars given observations of the orbital period and projected radial velocity of the primary star. We apply this model to three mock samples of low-mass white dwarfs (LMWDs; M ≲ 0.45 M☉) and a sample of post-common-envelope binaries. We use a mixture of two Gaussians to model the WD and neutron star (NS) companion mass distributions. Our model successfully recovers the initial parameters of these test data sets. We then apply our model to 55 WDs in the extremely low-mass (ELM) WD Survey. Our maximum a posteriori model for the WD companion population has a mean mass μ_WD = 0.74 M☉, with a standard deviation σ_WD = 0.24 M☉. Our model constrains the NS companion fraction f_NS to be <16% at 68% confidence. We make samples from the posterior distribution publicly available so that future observational efforts may compute the NS probability for newly discovered LMWDs.
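
    The role of the unknown inclination can be illustrated with the binary mass function, the observable at the heart of such a hierarchical model. The sketch below computes the minimum companion mass for a hypothetical ELM-WD-like system and then marginalises over isotropic inclinations by Monte Carlo; the numbers are invented, and the paper's full hierarchical mixture model is not reproduced.

      import numpy as np
      from scipy.optimize import brentq

      G, Msun, day = 6.674e-11, 1.989e30, 86400.0

      def mass_function(P, K):
          # Binary mass function: f = P K^3 / (2 pi G) = (m2 sin i)^3 / (m1 + m2)^2
          return P * K ** 3 / (2.0 * np.pi * G)

      def companion_mass(P, K, m1, inc):
          # Solve the mass-function relation for m2 at a given inclination.
          f = mass_function(P, K)
          g = lambda m2: (m2 * np.sin(inc)) ** 3 / (m1 + m2) ** 2 - f
          return brentq(g, 1e-6 * Msun, 1e8 * Msun)

      # Hypothetical ELM-WD-like system (numbers invented, not survey data).
      P, K, m1 = 0.2 * day, 250e3, 0.25 * Msun         # period (s), RV amplitude (m/s), mass (kg)
      m2_min = companion_mass(P, K, m1, np.pi / 2.0)   # edge-on orbit gives the minimum mass
      print(f"minimum companion mass: {m2_min / Msun:.2f} Msun")

      # Isotropic orientations mean cos(i) is uniform; marginalising over this is
      # what lets a hierarchical model deconvolve the companion mass distribution.
      rng = np.random.default_rng(1)
      incs = np.arccos(rng.uniform(0.0, 0.999, 5000))  # clipped near face-on for robustness
      m2 = np.array([companion_mass(P, K, m1, i) for i in incs])
      print(f"median companion mass over isotropic i: {np.median(m2) / Msun:.2f} Msun")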

  14. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Full Text Available Based on the facts that a computer can be infected by both infected and exposed computers, and that some computers in susceptible or exposed status can gain immunity through antivirus protection, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, this model has a virus-free equilibrium P0, at which the infected part of the computer population disappears and the virus dies out; P0 is globally asymptotically stable if R0 < 1. If R0 > 1, the model has a unique viral equilibrium P*, at which the computer virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
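
    A minimal sketch of this kind of compartment model is given below, assuming an SEIR-style structure in which both exposed and infected computers transmit and in which susceptible and exposed computers can be immunised; the rate values and the resulting R0 expression belong to this sketch, not to the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # SEIR-style sketch: transmission from exposed and infected machines,
      # immunisation of susceptible and exposed machines (rates are invented).
      beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.05

      def rhs(t, y):
          S, E, I, R = y
          dS = -beta * S * (E + I) - mu * S
          dE = beta * S * (E + I) - (sigma + mu) * E
          dI = sigma * E - gamma * I
          dR = mu * (S + E) + gamma * I
          return [dS, dE, dI, dR]

      # Basic reproduction number for this sketch (next-generation matrix):
      R0 = beta / (sigma + mu) * (1.0 + sigma / gamma)
      sol = solve_ivp(rhs, (0.0, 400.0), [0.99, 0.01, 0.0, 0.0], max_step=1.0)
      S, E, I, R = sol.y[:, -1]
      print(f"R0 = {R0:.2f}; at t = 400: S = {S:.3f}, I = {I:.4f}, R = {R:.3f}")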

  15. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) we find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass; (2) effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K; (3) for the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from L_bol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  16. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  17. Double beta decay and neutrino mass models

    Energy Technology Data Exchange (ETDEWEB)

    Helo, J.C. [Universidad Técnica Federico Santa María, Centro-Científico-Tecnológico de Valparaíso, Casilla 110-V, Valparaíso (Chile); Hirsch, M. [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Ota, T. [Department of Physics, Saitama University, Shimo-Okubo 255, 338-8570 Saitama-Sakura (Japan); Santos, F.A. Pereira dos [Departamento de Física, Pontifícia Universidade Católica do Rio de Janeiro,Rua Marquês de São Vicente 225, 22451-900 Gávea, Rio de Janeiro (Brazil)

    2015-05-19

    Neutrinoless double beta decay allows one to constrain lepton-number-violating extensions of the standard model. If neutrinos are Majorana particles, the mass mechanism will always contribute to the decay rate; however, it is not a priori guaranteed to be the dominant contribution in all models. Here, we discuss from the theory point of view whether the mass mechanism dominates or not. We classify all possible (scalar-mediated) short-range contributions to the decay rate according to the loop level at which the corresponding models will generate Majorana neutrino masses, and discuss the expected relative size of the different contributions to the decay rate in each class. Our discussion is general for models based on the SM gauge group but does not cover models with an extended gauge sector. We also work out in some detail the phenomenology of one concrete 2-loop model in which both the mass mechanism and the short-range diagram might lead to competitive contributions.

  18. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. When targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, we show that the approach is feasible and extends the current possibilities for searching for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
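
    As a toy illustration of ranked retrieval over model metadata (the annotation strings are hypothetical, not actual BioModels Database records, and this is not its production ranking function), the sketch below scores documents against a query with TF-IDF weights and cosine similarity.

      import math
      from collections import Counter

      models = {
          "MODEL-A": "calcium oscillation signalling hepatocyte kinetic model",
          "MODEL-B": "cell cycle cyclin cdk ordinary differential equation model",
          "MODEL-C": "calcium dynamics endoplasmic reticulum signalling",
      }

      def tfidf_vectors(docs):
          # Term frequency weighted by inverse document frequency per document.
          tokenised = {k: v.split() for k, v in docs.items()}
          df = Counter(t for toks in tokenised.values() for t in set(toks))
          n = len(docs)
          vecs = {k: {t: c / len(toks) * math.log(n / df[t])
                      for t, c in Counter(toks).items()}
                  for k, toks in tokenised.items()}
          return vecs, df, n

      def rank(query, docs):
          vecs, df, n = tfidf_vectors(docs)
          q = {t: math.log(n / df[t]) for t in query.split() if t in df}
          def cosine(v):
              dot = sum(w * q.get(t, 0.0) for t, w in v.items())
              norm = (math.sqrt(sum(w * w for w in v.values()))
                      * math.sqrt(sum(w * w for w in q.values())))
              return dot / norm if norm else 0.0
          return sorted(((cosine(v), k) for k, v in vecs.items()), reverse=True)

      for score, model_id in rank("calcium signalling", models):
          print(f"{model_id}: {score:.3f}")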

  19. The 1992 FRDM mass model and unstable nuclei

    International Nuclear Information System (INIS)

    Moeller, P.

    1994-01-01

    We discuss the reliability of a recent global nuclear-structure calculation in regions far from β stability. We focus on the results for nuclear masses, but also mention other results obtained in the nuclear-structure calculation, for example ground-state spins. We discuss what should be some minimal requirements of a nuclear mass model and study how the macroscopic-microscopic method and other nuclear mass models fulfil such basic requirements. We study in particular the reliability of nuclear mass models in regions of nuclei that were not considered in the determination of the model parameters.

  20. ICADx: interpretable computer aided diagnosis of breast masses

    Science.gov (United States)

    Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man

    2018-02-01

    In this study, a novel computer-aided diagnosis (CADx) framework is devised to investigate interpretability in classifying breast masses. Recently, deep learning technology has been successfully applied to medical image analysis, including CADx. Existing deep-learning-based CADx approaches, however, have a limitation in explaining the diagnostic decision. In real clinical practice, clinical decisions should be made with reasonable explanation, so current deep learning approaches to CADx are limited for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised with a generative adversarial network, which consists of an interpretable diagnosis network and a synthetic lesion generative network, to learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that the two networks are improved. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the proposed ICADx framework could provide interpretability of masses as well as mass classification. This was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the proposed ICADx framework could be a promising approach to developing CADx systems.

  1. A High-Resolution Model of Water Mass Transformation and Transport in the Weddell Sea

    Science.gov (United States)

    Hazel, J.; Stewart, A.

    2016-12-01

    The ocean circulation around the Antarctic margins has a pronounced impact on the global ocean and climate system. One of these impacts includes closing the global meridional overturning circulation (MOC) via formation of dense Antarctic Bottom Water (AABW), which ventilates a large fraction of the subsurface ocean. AABW is also partially composed of modified Circumpolar Deep Water (CDW), a warm, mid-depth water mass whose transport towards the continent has the potential to induce rapid retreat of marine-terminating glaciers. Previous studies suggest that these water mass exchanges may be strongly influenced by high-frequency processes such as downslope gravity currents, tidal flows, and mesoscale/submesoscale eddy transport. However, evaluating the relative contributions of these processes to near-Antarctic water mass transports is hindered by the region's relatively small scales of motion and the logistical difficulties in taking measurements beneath sea ice. In this study we develop a regional model of the Weddell Sea, the largest established source of AABW. The model is forced by an annually-repeating atmospheric state constructed from the Antarctic Mesoscale Prediction System data and by annually-repeating lateral boundary conditions constructed from the Southern Ocean State Estimate. The model incorporates the full Filchner-Ronne cavity and simulates the thermodynamics and dynamics of sea ice. To analyze the role of high-frequency processes in the transport and transformation of water masses, we compute the model's overturning circulation, water mass transformations, and ice sheet basal melt at model horizontal grid resolutions ranging from 1/2 degree to 1/24 degree. We temporally decompose the high-resolution (1/24 degree) model circulation into components due to mean, eddy and tidal flows and discuss the geographical dependence of these processes and their impact on water mass transformation and transport.

  2. Diagnosis of masses presenting within the ventricles on computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kendall, B.; Reider-Grosswasser, I.; Valentine, A.

    1983-04-01

    The radiological and clinical features of 90 histologically verified intraventricular masses were reviewed. Computed tomography (CT) and plain X-rays were available in all cases, and angiograms in over half. The localisation, the effects on the adjacent brain substance, and the presence and degree of hydrocephalus were evident on CT. Two-thirds of colloid cysts presented as pathognomonic anterior third ventricular hyperdense masses and the other third were isodense; an alternative diagnosis should be considered for low-density masses in this situation. Plexus papillomas and carcinomas mainly involved the trigone and body of a lateral ventricle of young children and caused asymmetrical hydrocephalus; the third ventricle was occasionally affected, also in children, and the fourth ventricle more frequently, usually in adults. Two-thirds were hyperdense and one-third of mixed or lower density. The meningiomas were dense trigonal tumours of adults generally arising in the choroid plexus, but two tentorial meningiomas passed through the choroidal fissure and caused a predominantly intraventricular mass. Gliomas frequently thickened the septum and generally involved the frontal segments of the lateral ventricles. They may be supplied by perforating as well as by the choroidal arteries, which supply most other vascularised masses within the ventricles. Only 10% of our cases did not fall into one of the former categories; these included low-density non-enhancing dermoid or epidermoid tumours and higher-density enhancing metastatic or angiomatous masses.

  3. Computed tomography-guided percutaneous biopsy of pancreatic masses using pneumodissection

    Directory of Open Access Journals (Sweden)

    Chiang Jeng Tyng

    2013-06-01

    Full Text Available Objective: To describe the technique of computed tomography-guided percutaneous biopsy of pancreatic tumors with pneumodissection. Materials and Methods: In the period from June 2011 to May 2012, seven computed tomography-guided percutaneous biopsies of pancreatic tumors utilizing pneumodissection were performed in the authors' institution. All the procedures were performed with an automatic biopsy gun and a coaxial system with Tru-core needles. The biopsy specimens were histologically assessed. Results: In all the cases the pancreatic mass could not be directly approached by computed tomography without passing through major organs and structures. The injection of air allowed the displacement of adjacent structures and the creation of a safe coaxial needle pathway toward the lesion. Biopsy was successfully performed in all the cases, yielding appropriate specimens for pathological analysis. Conclusion: Pneumodissection is a safe, inexpensive and technically easy approach to percutaneous biopsy in selected cases where direct access to the pancreatic tumor is not feasible.

  4. Computed tomography-guided percutaneous biopsy of pancreatic masses using pneumodissection

    International Nuclear Information System (INIS)

    Tyng, Chiang Jeng; Bitencourt, Almir Galvao Vieira; Almeida, Maria Fernanda Arruda; Barbosa, Paula Nicole Vieira; Martins, Eduardo Bruno Lobato; Junior, Joao Paulo Kawaoka Matushita; Chojniak, Rubens; Coimbra, Felipe Jose Fernandez

    2013-01-01

    Objective: to describe the technique of computed tomography-guided percutaneous biopsy of pancreatic tumors with pneumodissection. Materials and methods: in the period from June 2011 to May 2012, seven computed tomography-guided percutaneous biopsies of pancreatic tumors utilizing pneumodissection were performed in the authors' institution. All the procedures were performed with an automatic biopsy gun and a coaxial system with Tru-core needles. The biopsy specimens were histologically assessed. Results: in all the cases the pancreatic mass could not be directly approached by computed tomography without passing through major organs and structures. The injection of air allowed the displacement of adjacent structures and the creation of a safe coaxial needle pathway toward the lesion. Biopsy was successfully performed in all the cases, yielding appropriate specimens for pathological analysis. Conclusion: Pneumodissection is a safe, inexpensive and technically easy approach to percutaneous biopsy in selected cases where direct access to the pancreatic tumor is not feasible. (author)

  5. Hydrogen/deuterium exchange mass spectrometry and computational modeling reveal a discontinuous epitope of an antibody/TL1A Interaction.

    Science.gov (United States)

    Huang, Richard Y-C; Krystek, Stanley R; Felix, Nathan; Graziano, Robert F; Srinivasan, Mohan; Pashine, Achal; Chen, Guodong

    2018-01-01

    TL1A, a tumor necrosis factor-like cytokine, is a ligand for the death domain receptor DR3. TL1A, upon binding to DR3, can stimulate lymphocytes and trigger secretion of proinflammatory cytokines. Therefore, blockade of TL1A/DR3 interaction may be a potential therapeutic strategy for autoimmune and inflammatory diseases. Recently, the anti-TL1A monoclonal antibody 1 (mAb1) with a strong potency in blocking the TL1A/DR3 interaction was identified. Here, we report on the use of hydrogen/deuterium exchange mass spectrometry (HDX-MS) to obtain molecular-level details of mAb1's binding epitope on TL1A. HDX coupled with electron-transfer dissociation MS provided residue-level epitope information. The HDX dataset, in combination with solvent accessible surface area (SASA) analysis and computational modeling, revealed a discontinuous epitope within the predicted interaction interface of TL1A and DR3. The epitope regions span a distance within the approximate size of the variable domains of mAb1's heavy and light chains, indicating it uses a unique mechanism of action to block the TL1A/DR3 interaction.

  6. Bayesian model comparison using Gauss approximation on multicomponent mass spectra from CH4 plasma

    International Nuclear Information System (INIS)

    Kang, H.D.; Dose, V.

    2004-01-01

    We performed Bayesian model comparison on mass spectra from CH4 rf process plasmas to detect radicals produced in the plasma. The key ingredient for its implementation is the high-dimensional evidence integral. We apply a Gauss approximation to evaluate the evidence. The results were compared with those calculated by the thermodynamic integration method using the Markov chain Monte Carlo technique. In spite of the very large difference in computation time between the two methods, very good agreement was obtained. Alternatively, a Monte Carlo integration method based on the approximated Gaussian posterior density is presented. Its applicability to the problem of mass spectrometry is discussed.
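
    The Gauss (Laplace) approximation evaluates the evidence integral by expanding the log-posterior to second order around its mode. The sketch below applies it to a deliberately simple one-parameter problem (Gaussian data with a Gaussian prior on the mean) rather than to multicomponent mass spectra; the data and prior width are invented.

      import numpy as np
      from scipy.optimize import minimize

      # Toy evidence computation: N(mu, s) likelihood, N(0, 10) prior on mu.
      rng = np.random.default_rng(2)
      data, s = rng.normal(1.0, 0.5, 50), 0.5

      def neg_log_post(theta):
          mu = theta[0]
          log_like = (-0.5 * np.sum((data - mu) ** 2) / s ** 2
                      - data.size * np.log(s * np.sqrt(2.0 * np.pi)))
          log_prior = -0.5 * (mu / 10.0) ** 2 - np.log(10.0 * np.sqrt(2.0 * np.pi))
          return -(log_like + log_prior)

      opt = minimize(neg_log_post, x0=[0.0])           # posterior mode
      mu_hat, h = opt.x[0], 1e-4
      # Numerical curvature (1x1 Hessian) of -log posterior at the mode:
      H = (neg_log_post([mu_hat + h]) - 2.0 * neg_log_post([mu_hat])
           + neg_log_post([mu_hat - h])) / h ** 2

      d = 1                                            # number of parameters
      log_Z = -opt.fun + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * np.log(H)
      print(f"Gauss-approximated log evidence: {log_Z:.2f}")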

  7. Multicomponent mass transport model: theory and numerical implementation (discrete-parcel-random-walk version)

    International Nuclear Information System (INIS)

    Ahlstrom, S.W.; Foote, H.P.; Arnett, R.C.; Cole, C.R.; Serne, R.J.

    1977-05-01

    The Multicomponent Mass Transfer (MMT) Model is a generic computer code, currently in its third generation, that was developed to predict the movement of radiocontaminants in the saturated and unsaturated sediments of the Hanford Site. This model was designed to use the water movement patterns produced by the unsaturated and saturated flow models, coupled with dispersion and soil-waste reaction submodels, to predict contaminant transport. This report documents the theoretical foundation and the numerical solution procedure of the current (third) generation of the MMT Model. The present model simulates mass transport processes using an analog referred to as the Discrete-Parcel-Random-Walk (DPRW) algorithm. The basic concepts of this solution technique are described, and the advantages and disadvantages of the DPRW scheme are discussed in relation to more conventional numerical techniques such as the finite-difference and finite-element methods. Verification of the numerical algorithm is demonstrated by comparing model results with known closed-form solutions. A brief error and sensitivity analysis of the algorithm with respect to numerical parameters is also presented. A simulation of the tritium plume beneath the Hanford Site is included to illustrate the use of the model in a typical application. 32 figs
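
    The essence of the DPRW analog can be sketched in a few lines: each parcel carries mass, is advected by the flow velocity, takes a random dispersive step each time interval, and decays. The one-dimensional sketch below uses invented parameter values and omits the sorption and decay-chain handling of the production code.

      import numpy as np

      # One-dimensional DPRW step: advect each parcel, add a random dispersive
      # step with variance 2*D*dt, and decay its mass (values invented).
      rng = np.random.default_rng(3)
      n_parcels, v, D, lam = 5000, 1.0, 0.1, 1e-3
      dt, nsteps = 0.1, 500
      x = np.zeros(n_parcels)                          # all parcels released at x = 0
      mass = np.ones(n_parcels)                        # unit initial mass per parcel

      for _ in range(nsteps):
          x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_parcels)
          mass *= np.exp(-lam * dt)                    # first-order decay

      # Concentrations follow from binning parcel masses; the walk itself adds no
      # cumulative numerical dispersion, the advantage cited above.
      hist, edges = np.histogram(x, bins=50, weights=mass)
      print(f"plume peak near x = {edges[np.argmax(hist)]:.1f}"
            f" (advective distance v*t = {v * dt * nsteps:.1f})")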

  8. THE ELECTRONIC COURSE OF HEAT AND MASS TRANSFER

    Directory of Open Access Journals (Sweden)

    Alexander P. Solodov

    2013-01-01

    Full Text Available The Electronic Course of Heat and Mass Transfer in power engineering is presented, containing the full electronic book as a structured hypertext document, the full set of Mathcad documents with the complete collection of educational computer models of heat and mass transfer, the computer labs, and selected educational presentations.

  9. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  10. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  11. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their demands on processing and storage. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  12. THE MASS DISTRIBUTION OF COMPANIONS TO LOW-MASS WHITE DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Jeff J.; Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States)

    2014-12-20

    Measuring the masses of companions to single-line spectroscopic binary stars is (in general) not possible because of the unknown orbital plane inclination. Even when the mass of the visible star can be measured, only a lower limit can be placed on the mass of the unseen companion. However, since these inclination angles should be isotropically distributed, for a large enough, unbiased sample the companion mass distribution can be deconvolved from the distribution of observables. In this work, we construct a hierarchical probabilistic model to infer properties of unseen companion stars given observations of the orbital period and projected radial velocity of the primary star. We apply this model to three mock samples of low-mass white dwarfs (LMWDs; M ≲ 0.45 M☉) and a sample of post-common-envelope binaries. We use a mixture of two Gaussians to model the WD and neutron star (NS) companion mass distributions. Our model successfully recovers the initial parameters of these test data sets. We then apply our model to 55 WDs in the extremely low-mass (ELM) WD Survey. Our maximum a posteriori model for the WD companion population has a mean mass μ_WD = 0.74 M☉, with a standard deviation σ_WD = 0.24 M☉. Our model constrains the NS companion fraction f_NS to be <16% at 68% confidence. We make samples from the posterior distribution publicly available so that future observational efforts may compute the NS probability for newly discovered LMWDs.

  13. Minimalistic Neutrino Mass Model

    CERN Document Server

    De Gouvêa, A; Gouvea, Andre de

    2001-01-01

    We consider the simplest model which solves the solar and atmospheric neutrino puzzles, in the sense that it contains the smallest amount of beyond the Standard Model ingredients. The solar neutrino data is accounted for by Planck-mass effects while the atmospheric neutrino anomaly is due to the existence of a single right-handed neutrino at an intermediate mass scale between 10^9 GeV and 10^14 GeV. Even though the neutrino mixing angles are not exactly predicted, they can be naturally large, which agrees well with the current experimental situation. Furthermore, the amount of lepton asymmetry produced in the early universe by the decay of the right-handed neutrino is very predictive and may be enough to explain the current baryon-to-photon ratio if the right-handed neutrinos are produced out of thermal equilibrium. One definitive test for the model is the search for anomalous seasonal effects at Borexino.
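
    For orientation, the quoted intermediate mass scale is what a standard type-I seesaw estimate gives; as an illustrative back-of-the-envelope relation (not a formula taken from the paper):

      % Type-I seesaw estimate (illustrative back-of-the-envelope)
      m_\nu \simeq \frac{(y_\nu v)^2}{M_R}, \qquad
      v \approx 174\ \mathrm{GeV}, \qquad
      m_\nu \sim \sqrt{\Delta m^2_{\mathrm{atm}}} \approx 0.05\ \mathrm{eV}
      \;\Longrightarrow\;
      M_R \sim \frac{(174\ \mathrm{GeV})^2\, y_\nu^2}{0.05\ \mathrm{eV}}
      \approx 6 \times 10^{14}\, y_\nu^2\ \mathrm{GeV}.

    With the atmospheric scale near 0.05 eV, Yukawa couplings y_ν between roughly 10^-3 and 1 map onto right-handed masses between about 10^9 and 6×10^14 GeV, consistent with the window quoted above.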

  14. The Mathematical Model of Hydrodynamics and Heat and Mass Transfer at Formation of Steel Ingots and Castings

    Directory of Open Access Journals (Sweden)

    Bondarenko V.I.

    2015-03-01

    Full Text Available A generic mathematical model and computational algorithm considering the hydrodynamics and the heat and mass transfer processes during the casting and forming of steel ingots and castings are offered. Usage domains for turbulent, convective and non-convective models are determined depending on ingot geometry and the thermal overheating of the poured melt. An expert system is developed, enabling the user to choose a mathematical model depending on the physical statement of the problem.

  15. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research of the…

  16. On the modelling of turbulent heat and mass transfer for the computation of buoyancy affected flows

    International Nuclear Information System (INIS)

    Viollet, P.-L.

    1981-02-01

    The k-epsilon eddy viscosity turbulence model is applied to simple test cases of buoyant flows. Both vertical and horizontal stable flows are fairly well represented by the computation, whereas in unstable flows the mixing is underpredicted. The general agreement is good enough to allow application to thermal-fluid engineering problems.

  17. Multicomponent mass transport model: a model for simulating migration of radionuclides in ground water

    International Nuclear Information System (INIS)

    Washburn, J.F.; Kaszeta, F.E.; Simmons, C.S.; Cole, C.R.

    1980-07-01

    This report presents the results of the development of a one-dimensional radionuclide transport code, MMT1D (Multicomponent Mass Transport), for the AEGIS Program. Multicomponent Mass Transport is a numerical solution technique that uses the discrete-parcel-random-walk (DPRW) method to directly simulate the migration of radionuclides. MMT1D accounts for: convection; dispersion; sorption-desorption; first-order radioactive decay; and n-membered radioactive decay chains. Comparisons between MMT1D and an analytical solution for a similar problem show that: MMT1D agrees very closely with the analytical solution; MMT1D has no cumulative numerical dispersion like that associated with solution techniques such as finite differences and finite elements; for current AEGIS applications, relatively few parcels are required to produce adequate results; and the power of MMT1D lies in the flexibility of the code in being able to handle complex problems for which analytical solutions cannot be obtained. Multicomponent Mass Transport (MMT1D) codes were developed at Pacific Northwest Laboratory to predict the movement of radiocontaminants in the saturated and unsaturated sediments of the Hanford Site. All MMT models require ground-water flow patterns that have been previously generated by a hydrologic model. This report documents the computer code and operating procedures of a third generation of the MMT series; this MMT differs from previous versions by simulating the mass transport processes in systems with radionuclide decay chains. Although MMT1D is a one-dimensional code, the user is referred to the documentation of the theoretical and numerical procedures of the three-dimensional MMT-DPRW code for discussion of expediency, verification, and error-sensitivity analysis.

  18. Modelling baryonic effects on galaxy cluster mass profiles

    Science.gov (United States)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-06-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  19. Modelling Baryonic Effects on Galaxy Cluster Mass Profiles

    Science.gov (United States)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-03-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.
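
    Such baryonic corrections are usually expressed relative to a dark-matter-only NFW baseline, in which the concentration sets the inner mass profile. The sketch below evaluates the enclosed NFW mass for two concentrations with invented halo numbers; it illustrates the baseline only, not the calibrated model of the paper.

      import numpy as np

      def nfw_enclosed_mass(r, M200, c, R200):
          # Mass within radius r for an NFW halo: M(<r) = M200 * mu(c r / R200) / mu(c),
          # with mu(x) = ln(1 + x) - x / (1 + x).
          mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
          return M200 * mu(c * r / R200) / mu(c)

      M200, R200 = 1.0e15, 2.0        # Msun and Mpc; typical massive-cluster numbers
      for c in (4.0, 6.0):            # baryons tend to raise the effective concentration
          m = nfw_enclosed_mass(0.3, M200, c, R200)
          print(f"c = {c:.0f}: M(<0.3 Mpc) = {m:.2e} Msun")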

  20. Computational and experimental analysis of supersonic air ejector: Turbulence modeling and assessment of 3D effects

    International Nuclear Information System (INIS)

    Mazzelli, Federico; Little, Adrienne B.; Garimella, Srinivas; Bartosiewicz, Yann

    2015-01-01

    Highlights: • Computational and experimental assessment of computational techniques for ejector flows. • Comparisons to 2D/3D (k–ε, k–ε realizable, k–ω SST, and stress–ω RSM) turbulence models. • k–ω SST model performs best while ε-based models more accurate at low motive pressures. • Good on-design agreement across 2D and 3D models; off-design needs 3D simulations. - Abstract: Numerical and experimental analyses are performed on a supersonic air ejector to evaluate the effectiveness of commonly-used computational techniques when predicting ejector flow characteristics. Three series of experimental curves at different operating conditions are compared with 2D and 3D simulations using RANS, steady, wall-resolved models. Four different turbulence models are tested: k–ε, k–ε realizable, k–ω SST, and the stress–ω Reynolds Stress Model. An extensive analysis is performed to interpret the differences between numerical and experimental results. The results show that while differences between turbulence models are typically small with respect to the prediction of global parameters such as ejector inlet mass flow rates and Mass Entrainment Ratio (MER), the k–ω SST model generally performs best whereas ε-based models are more accurate at low motive pressures. Good agreement is found across all 2D and 3D models at on-design conditions. However, prediction at off-design conditions is only acceptable with 3D models, making 3D simulations mandatory to correctly predict the critical pressure and achieve reasonable results at off-design conditions. This may partly depend on the specific geometry under consideration, which in the present study has a rectangular cross section with low aspect ratio.

  1. On the origin of mass in the standard model

    International Nuclear Information System (INIS)

    Sundman, S.

    2013-01-01

    A model is proposed in which the presently existing elementary particles are the result of an evolution proceeding from the simplest possible particle state to successively more complex states via a series of symmetry-breaking transitions. The properties of two fossil particles — the tauon and muon — together with the observed photon–baryon number ratio provide information that makes it possible to track the early development of particles. A computer simulation of the evolution reveals details about the purpose and history of all presently known elementary particles. In particular, it is concluded that the heavy Higgs particle that generates the bulk of the mass of the Z and W bosons also comes in a light version, which generates small mass contributions to the charged leptons. The predicted mass of this 'flyweight' Higgs boson is 0.505 MeV/c², 106.086 eV/c² or 12.0007 μeV/c² (corresponding to a photon of frequency 2.9018 GHz) depending on whether it is associated with the tauon, muon or electron. Support for the conclusion comes from the Brookhaven muon g-2 experiment, which indicates the existence of a Higgs particle lighter than the muon. (author)

  2. SEMIC: an efficient surface energy and mass balance model applied to the Greenland ice sheet

    Directory of Open Access Journals (Sweden)

    M. Krapp

    2017-07-01

    Full Text Available We present SEMIC, a Surface Energy and Mass balance model of Intermediate Complexity for snow- and ice-covered surfaces such as the Greenland ice sheet. SEMIC is fast enough for glacial cycle applications, making it a suitable replacement for simpler methods such as the positive degree day (PDD) method often used in ice sheet modelling. Our model explicitly calculates the main processes involved in the surface energy and mass balance, while maintaining a simple interface and requiring minimal input data to drive it. In this novel approach, we parameterise diurnal temperature variations in order to more realistically capture the daily thaw–freeze cycles that characterise the ice sheet mass balance. We show how to derive optimal model parameters for SEMIC, specifically to reproduce surface characteristics and day-to-day variations similar to those of the regional climate model MAR (Modèle Atmosphérique Régional, version 2) and its incorporated multilayer snowpack model SISVAT (Soil Ice Snow Vegetation Atmosphere Transfer). A validation test shows that SEMIC simulates future changes in surface temperature and surface mass balance in good agreement with the more sophisticated multilayer snowpack model SISVAT included in MAR. With this paper, we present a physically based surface model to the ice sheet modelling community that is general enough to be used with in situ observations, climate model output, or reanalysis data, and that is at the same time computationally fast enough for long-term integrations such as glacial cycles or future climate change scenarios.
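
    For comparison with SEMIC's explicit energy balance, the PDD method it aims to replace reduces melt to a single empirical coefficient. A minimal sketch, assuming a typical literature degree-day factor rather than a value from this paper:

        def pdd_melt(daily_temps_c, ddf_mm_per_degday=8.0):
            """Melt (mm w.e.) = degree-day factor * sum of positive daily mean temps.
            A DDF of ~8 mm w.e. per (deg C day) is a typical value for bare ice."""
            pdd = sum(max(t, 0.0) for t in daily_temps_c)
            return ddf_mm_per_degday * pdd

        # One illustrative summer week of daily mean temperatures (deg C)
        print(pdd_melt([-2.0, 0.5, 1.8, 3.2, 2.4, -0.5, 1.1]))  # -> 72.0 mm w.e.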

  3. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  4. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks behaves as a complicated system: it shows non-linear features, and simulating its behaviour is difficult. Before deploying network equipment, users want to know the capability of their computer network; they do not want servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. We also focus on measuring the bottleneck of the network, defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm that experimentally determines and predicts the available parameters of the modelled network.
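
    The bottleneck definition used here is easy to state in code. A minimal sketch with invented link capacities and competing traffic volumes (not measurements from the paper):

        # Bottleneck per the paper's definition: link capacity minus competing
        # traffic on the link that limits end-to-end throughput.
        links = [
            {"name": "access",   "capacity_mbps": 100.0,  "competing_mbps": 20.0},
            {"name": "backbone", "capacity_mbps": 1000.0, "competing_mbps": 950.0},
            {"name": "peering",  "capacity_mbps": 300.0,  "competing_mbps": 180.0},
        ]
        available = [(l["name"], l["capacity_mbps"] - l["competing_mbps"]) for l in links]
        bottleneck = min(available, key=lambda x: x[1])
        print(f"bottleneck link: {bottleneck[0]}, available ~{bottleneck[1]:.0f} Mb/s")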

  5. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable aid to learning logical thinking, but of less help in learning problem-solving skills. The paper is the third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  6. Limit on mass differences in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.

    1977-01-01

    Within the Weinberg model mass differences between members of a multiplet generate further mass differences between the neutral and charged vector bosons. The experimental situation on the Weinberg model leads to an upper limit of about 800 GeV on mass differences within a multiplet. No limit on the

  7. Calculation of mass discharge of the Greenland ice sheet in the Earth System Model

    Directory of Open Access Journals (Sweden)

    O. O. Rybak

    2016-01-01

    our results with similar model studies. In addition to the atmospheric, oceanic, and ice sheet blocks, an ESM normally contains blocks accounting for the dynamics of the biosphere, sea ice, the hydrological cycle, etc. In practice, application of ESMs to research studies has become possible only recently, owing to fast progress in computing facilities; even now, ESMs remain demanding of computer time. To provide long runs of a fully coupled ESM at a lower computational cost, we utilized an asynchronous coupling in which a 100-yr run of the GrISM corresponds to a 1-yr run of the INMCM. The weak point of the numerical experiments is comparison of the results with observations. The lack of observations in Greenland and the significant inter-annual variability of air temperature, precipitation, surface melting, and runoff do not allow formulation of a reliable reference «climate» and a corresponding equilibrium state of the GrIS. In practice this means that more or less accurate estimates of past or future changes in runoff and total GrIS mass discharge are best obtained in the form of deviations from a reference undisturbed model state.
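
    The asynchronous coupling described above amounts to a driver loop in which each simulated climate year forces a century of ice-sheet integration. A schematic sketch; run_inmcm_year and run_grism are hypothetical stand-ins for the real model interfaces:

        # Asynchronous ESM/ice-sheet coupling: one climate-model year drives a
        # 100-year ice-sheet integration (the 100:1 ratio is from the abstract).
        SPEEDUP = 100

        def couple(n_climate_years, run_inmcm_year, run_grism):
            ice_state = None
            for year in range(n_climate_years):
                forcing = run_inmcm_year(year, ice_state)      # SMB, temperature fields
                ice_state = run_grism(forcing, years=SPEEDUP)  # topography, discharge
            return ice_state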

  8. Accretion flow dynamics during 1999 outburst of XTE J1859+226—modeling of broadband spectra and constraining the source mass

    Science.gov (United States)

    Nandi, Anuj; Mandal, S.; Sreehari, H.; Radhika, D.; Das, Santabrata; Chattopadhyay, I.; Iyer, N.; Agrawal, V. K.; Aktar, R.

    2018-05-01

    We examine the dynamical behavior of accretion flow around XTE J1859+226 during the 1999 outburst by analyzing the entire outburst data (~166 days) from the RXTE satellite. Towards this, we study the hysteresis behavior in the hardness intensity diagram (HID) based on broadband (3-150 keV) spectral modeling, the spectral signature of jet ejection, and the evolution of quasi-periodic oscillation (QPO) frequencies using the two-component advective flow model around a black hole. We compute the flow parameters, namely the Keplerian accretion rate (\dot{m}_d), the sub-Keplerian accretion rate (\dot{m}_h), the shock location (r_s), and the black hole mass (M_{bh}), from the spectral modeling and study their evolution along the q-diagram. Subsequently, the kinetic jet power is computed as L^{obs}_{jet} ~ 3-6 × 10^{37} erg s^{-1} during one of the observed radio flares, which indicates that the jet power corresponds to an 8-16% mass outflow rate from the disc. This estimate of the mass outflow rate is in close agreement with the change in total accretion rate (~14%) required for spectral modeling before and during the flare. Finally, we provide a mass estimate of the source XTE J1859+226 based on the spectral modeling that lies in the range of 5.2-7.9 M_{⊙} with 90% confidence.

  9. An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64.

    Science.gov (United States)

    Winkler, Robert

    2015-01-01

    In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models, and several data processing and evaluation steps are required to arrive at the final results. These operations are often difficult to reproduce because of overly specific computing platforms. This effect, known as 'workflow decay', can be diminished by using a standardized informatic infrastructure. We therefore compiled an integrated platform containing ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) taverna. We explain the useful combination of the tools with practical examples: (1) a workflow for protein identification and validation, with subsequent Association Analysis of peptides; (2) cluster analysis and Data Mining in targeted Metabolomics; and (3) raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present its application for finding co-occurring peptides, which can be used for targeted proteomics, the discovery of alternative biomarkers and protein-protein interactions. Data Mining derived models

  10. Computer tomographic and angiographic studies of histologically confirmed intrahepatic masses

    International Nuclear Information System (INIS)

    Janson, R.; Lackner, K.; Paquet, K.J.; Thelen, M.; Thurn, P.

    1980-01-01

    The computer tomographic and angiographic findings in 53 patients with intrahepatic masses were compared. The histological findings showed that 17 were due to echinococcus, 12 were due to hepatic carcinoma, ten were metastases, five patients had focal nodular hyperplasia, three an alveolar echinococcus, and there were three cases with a haemangioma of the liver and a further three liver abscesses. Computer tomography proved superior in peripherally situated lesions, and in those in the left lobe of the liver. Arteriography was better at demonstrating lesions below 2 cm in size, particularly vascular tumours. As a pre-operative measure, angiography is to be preferred since it is able to demonstrate anatomic anomalies and variations in the blood supply, as well as invasion of the portal vein or of the inferior vena cava. (orig.) [de

  11. Computer tomographic and angiographic studies of histologically confirmed intrahepatic masses

    Energy Technology Data Exchange (ETDEWEB)

    Janson, R.; Lackner, K.; Paquet, K.J.; Thelen, M.; Thurn, P.

    1980-06-01

    The computer tomographic and angiographic findings in 53 patients with intrahepatic masses were compared. The histological findings showed that 17 were due to echinococcus, 12 were due to hepatic carcinoma, ten were metastases, five patients had focal nodular hyperplasia, three an alveolar echinococcus, and there were three cases with a haemangioma of the liver and a further three liver abscesses. Computer tomography proved superior in peripherally situated lesions, and in those in the left lobe of the liver. Arteriography was better at demonstrating lesions below 2 cm in size, particularly vascular tumours. As a pre-operative measure, angiography is to be preferred since it is able to demonstrate anatomic anomalies and variations in the blood supply, as well as invasion of the portal vein or of the inferior vena cava.

  12. A CFD model for determining mixing and mass transfer in a high power agitated bioreactor

    DEFF Research Database (Denmark)

    Bach, Christian; Albæk, Mads O.; Stocks, Stuart M.

    performance of a high power agitated pilot scale bioreactor has been characterized using a novel combination of computational fluid dynamics (CFD) and experimental investigations. The effect of turbulence inside the vessel was found to be most efficiently described by using the k-ε model with regards...... simulations, and the overall mass transfer coefficient was found to be in accordance with experimental data. This work illustrates the possibility of predicting the hydrodynamic performance of an agitated bioreactor using validated CFD models. These models can be applied in the testing of new bioreactor...

  13. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, at national computer centres (Regional Centres), or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
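
    The group-analysis figures quoted above are mutually consistent, as a quick unit check shows (assuming a 30-day month):

        rate_mb_s = 4.2                         # group analysis output rate, MB/s
        seconds_per_month = 30 * 24 * 3600      # 2,592,000 s
        tb_per_month = rate_mb_s * seconds_per_month / 1e6
        print(f"{tb_per_month:.1f} TB/month")   # -> ~10.9 TB, i.e. the quoted 11 TB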

  14. Coupled sulfur isotopic and chemical mass transfer modeling: Approach and application to dynamic hydrothermal processes

    International Nuclear Information System (INIS)

    Janecky, D.R.

    1988-01-01

    A computational modeling code (EQPS↔S) has been developed to examine sulfur isotopic distribution pathways coupled with calculations of chemical mass transfer pathways. A post-processor approach to EQ6 calculations was chosen so that a variety of isotopic pathways could be examined for each reaction pathway. Two types of major bounding conditions were implemented: (1) equilibrium isotopic exchange between sulfate and sulfide species, or exchange only accompanying chemical reduction and oxidation events; and (2) existence or lack of isotopic exchange between solution species and precipitated minerals, parallel to the open and closed chemical system formulations of chemical mass transfer modeling codes. All of the chemical data necessary to explicitly calculate isotopic distribution pathways are generated by most mass transfer modeling codes and can be input to the EQPS code. Routines are built in to directly handle EQ6 tabular files. Chemical reaction models of seafloor hydrothermal vent processes and accompanying sulfur isotopic distribution pathways illustrate the capabilities of coupling EQPS↔S with EQ6 calculations, including the extent of differences that can exist due to the isotopic bounding condition assumptions described above. 11 refs., 2 figs

  15. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  16. Climate Change Discourse in Mass Media: Application of Computer-Assisted Content Analysis

    Science.gov (United States)

    Kirilenko, Andrei P.; Stepchenkova, Svetlana O.

    2012-01-01

    Content analysis of mass media publications has become a major scientific method used to analyze public discourse on climate change. We propose a computer-assisted content analysis method to extract prevalent themes and analyze discourse changes over an extended period in an objective and quantifiable manner. The method includes the following: (1)…

  17. C-periodicity and the physical mass in the 3-state Potts model

    International Nuclear Information System (INIS)

    Gavai, R.V.; Polley, L.

    1993-07-01

    The standard infinite-volume definition of the connected correlation function and particle mass in the 3-state Potts model can be implemented in Monte Carlo simulations by using C-periodic spatial boundary conditions. This avoids both the breaking of translation invariance (cold-wall b.c.) and the phase-dependent, and thus possibly biased, evaluation of data (periodic b.c.). The numerical feasibility of the standard definitions is demonstrated by sample computations on a 24 x 24 x 48 lattice. (author). 11 refs, 5 figs, 1 tab

  18. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  19. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should therefore involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge while simultaneously focusing on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) easy and intuitive creation of mathematical models using standard mathematical notation; 2) simultaneous exploration of images, tables, graphs and object animations; 3) attribution of mathematical properties expressed in the models to animated objects; and 4) computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and to focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
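
    A minimal numerical illustration of the kind of exercise described: straight-ray travel-time tomography on a 2 x 2 grid, where travel times are linear in the cell slownesses (t = L s) and the inversion is a least-squares solve. The grid, ray paths, and slowness values are invented for the example:

        import numpy as np

        # Forward model: t = L @ s, with L[i, j] the length of ray i in cell j
        # (unit lengths used for simplicity, including the diagonal ray).
        L = np.array([[1.0, 1.0, 0.0, 0.0],   # ray along the top row
                      [0.0, 0.0, 1.0, 1.0],   # ray along the bottom row
                      [1.0, 0.0, 1.0, 0.0],   # ray down the left column
                      [0.0, 1.0, 0.0, 1.0],   # ray down the right column
                      [1.0, 0.0, 0.0, 1.0]])  # diagonal ray
        s_true = np.array([0.50, 0.40, 0.40, 0.25])   # slowness, s/km
        t_obs = L @ s_true                             # synthetic travel times

        # Inverse problem: recover the slownesses by least squares.
        s_est, *_ = np.linalg.lstsq(L, t_obs, rcond=None)
        print(np.round(s_est, 3))   # recovers s_true for this full-rank geometry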

  20. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  1. Morphodynamic Modeling Using The SToRM Computational System

    Science.gov (United States)

    Simoes, F.

    2016-12-01

    The framework of the work presented here is the open source SToRM (System for Transport and River Modeling) eco-hydraulics modeling system, which is one of the models released with the iRIC hydraulic modeling graphical software package (http://i-ric.org/). SToRM has been applied to the simulation of various complex environmental problems, including natural waterways, steep channels with regime transition, and rapidly varying flood flows with wetting and drying fronts. In its previous version, however, the channel bed was treated as static, and the ability to simulate sediment transport rates or bed deformation was not included. The work presented here reports SToRM's newly developed extensions, which expand the system's capability to calculate morphological changes in alluvial river systems. The sediment transport module of SToRM has been developed based on the general recognition that meaningful advances depend on physically solid formulations and robust and accurate numerical solution methods. The basic concepts of mass and momentum conservation are used, where the feedback mechanisms between the flow of water, the sediment in transport, and the bed changes are directly incorporated in the governing equations used in the mathematical model. This is accomplished via a non-capacity transport formulation based on the work of Cao et al. [Z. Cao et al., "Non-capacity or capacity model for fluvial sediment transport," Water Management, 165(WM4):193-211, 2012], where the governing equations are augmented with source/sink terms due to water-sediment interaction. The same unsteady, shock-capturing numerical schemes originally used in SToRM were adapted to the new physics, using a control volume formulation over unstructured computational grids. The presentation will include a brief overview of these methodologies and results of applying the model to a number of relevant physical test cases with movable beds, in which computational results are compared to experimental data.
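
    In generic one-dimensional form, a non-capacity formulation following Cao et al. couples a sediment balance with an exchange source term to the bed evolution (the notation below is assumed for illustration, not quoted from SToRM's documentation):

        \frac{\partial (h c)}{\partial t} + \frac{\partial (h u c)}{\partial x} = E - D,
        \qquad
        (1 - p)\,\frac{\partial z_b}{\partial t} = D - E,

    where h is the flow depth, c the depth-averaged sediment concentration, u the velocity, E and D the entrainment and deposition fluxes, p the bed porosity, and z_b the bed elevation; at transport capacity E = D and the bed is static.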

  2. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  3. Computer model verification for seismic analysis of vertical pumps and motors

    International Nuclear Information System (INIS)

    McDonald, C.K.

    1993-01-01

    The general principles of modeling vertical pumps and motors are discussed, and two examples of verifying the models are presented in detail. The first example is a vertical pump and motor assembly. The model and computer analysis are presented, and the first four modes (frequencies) calculated are compared to the values of the same modes obtained from a shaker test. The model used for this example consists of lumped masses connected by massless beams. The shaker test was performed by National Technical Services, Los Angeles, CA. The second example is a larger vertical motor. The model used for this example is a three-dimensional finite element shell model. The first frequency obtained from this model is compared to the first frequency obtained from shop tests for several different motors. The shop tests were performed by Reliance Electric, Stratford, Ontario and Siemens-Allis, Inc., Norwood, Ohio
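
    The frequency comparison described for the lumped-mass model reduces to the generalized eigenvalue problem K φ = ω² M φ. A minimal two-degree-of-freedom sketch with illustrative stiffness and mass values (not data from the report):

        import numpy as np
        from scipy.linalg import eigh

        # Two lumped masses on massless beam segments (illustrative values).
        M = np.diag([250.0, 180.0])          # mass matrix, kg
        K = np.array([[4.0e7, -1.5e7],
                      [-1.5e7, 1.5e7]])      # stiffness matrix, N/m

        w2, _ = eigh(K, M)                   # solves K phi = w^2 M phi
        freqs_hz = np.sqrt(w2) / (2 * np.pi)
        print(np.round(freqs_hz, 1))         # modal frequencies to compare with a shaker test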

  4. Unification of gauge couplings in radiative neutrino mass models

    DEFF Research Database (Denmark)

    Hagedorn, Claudia; Ohlsson, Tommy; Riad, Stella

    2016-01-01

    We investigate the possibility of gauge coupling unification in various radiative neutrino mass models, which generate neutrino masses at one- and/or two-loop level. Renormalization group running of gauge couplings is performed analytically and numerically at one- and two-loop order, respectively. We study three representative classes of radiative neutrino mass models: (I) minimal ultraviolet completions of the dimension-7 ΔL = 2 operators which generate neutrino masses at one- and/or two-loop level without and with dark matter candidates, (II) models with dark matter which lead to neutrino masses at one-loop level and (III) models with particles in the adjoint representation of SU(3). In class (I), gauge couplings unify in a few models and adding dark matter amplifies the chances for unification. In class (II), about a quarter of the models admits gauge coupling unification. In class (III)...

  5. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, covering ~6000 km² of wavy relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example for the study area shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity data points, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
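
    The RCR technique applied by GRAVTool can be summarized by the standard decomposition (textbook form; the symbols below are ours, not the paper's):

        N = N_{\mathrm{GGM}} + N_{\Delta g_{\mathrm{res}}} + N_{\mathrm{RTM}},
        \qquad
        \Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{RTM}},

    where the long wavelengths N_GGM come from the global geopotential model (here EIGEN-6C4 to degree 360), the residual term is integrated from the reduced gravity anomalies, and the restore step adds back the short-wavelength terrain signal removed with the DTM (here SRTM).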

  6. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden , Renier ,; Pieterse , Heloise; Irwin , Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance stages. The...

  7. Cancer Vaccines: State of the Art of the Computational Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Francesco Pappalardo

    2013-01-01

    Full Text Available Cancer vaccines are a real application of the extensive knowledge of immunology to the field of oncology. Tumors are dynamic complex systems in which several entities, events, and conditions interact, resulting in growth, invasion, and metastases. The immune system includes many cells and molecules that cooperatively act to protect the host organism from foreign agents. Interactions between the immune system and the tumor mass involve a huge number of biological factors. Testing some cancer vaccine features, such as the best conditions for vaccine administration or the identification of candidate antigenic stimuli, can be very difficult or even impossible through experiments with biological models alone, simply because a high number of variables need to be considered at the same time. This is where computational models, and, to this extent, immunoinformatics, prove handy, as they have been shown to reproduce enough biological complexity to be of use in suggesting new experiments. Indeed, computational models can be used in addition to biological models. Biologists and medical doctors are progressively becoming convinced that modeling can be of great help in understanding experimental results and planning new experiments, which will boost this research in the future.

  8. Effects of confinement on rock mass modulus: A synthetic rock mass modelling (SRM) study

    Directory of Open Access Journals (Sweden)

    I. Vazaios

    2018-06-01

    Full Text Available The main objective of this paper is to examine the influence of the applied confining stress on the rock mass modulus of moderately jointed rocks (well interlocked undisturbed rock mass with blocks formed by three or fewer intersecting joints). A synthetic rock mass modelling (SRM) approach is employed to determine the mechanical properties of the rock mass. In this approach, the intact body of rock is represented by discrete element method (DEM) Voronoi grains with the ability of simulating the initiation and propagation of microcracks within the intact part of the model. The geometry of the pre-existing joints is generated by employing discrete fracture network (DFN) modelling based on field joint data collected from the Brockville Tunnel using LiDAR scanning. The geometrical characteristics of the simulated joints at a representative sample size are first validated against the field data, and then used to measure the rock quality designation (RQD), joint spacing, areal fracture intensity (P21), and block volumes. These geometrical quantities are used to quantitatively determine a representative range of the geological strength index (GSI). The results show that estimating the GSI using the RQD tends to make a closer estimate of the degree of blockiness, leading to GSI values corresponding to those obtained from direct visual observations of the rock mass conditions in the field. The use of joint spacing and block volume to quantify the GSI value range for the studied rock mass suggests a lower range compared to that evaluated in situ. Based on numerical modelling results and laboratory data of rock testing reported in the literature, a semi-empirical equation is proposed that relates the rock mass modulus to confinement as a function of the areal fracture intensity and joint stiffness. Keywords: Synthetic rock mass modelling (SRM), Discrete fracture network (DFN), Rock mass modulus, Geological strength index (GSI), Confinement

  9. Computational analyses of spectral trees from electrospray multi-stage mass spectrometry to aid metabolite identification.

    Science.gov (United States)

    Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne

    2013-10-31

    Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigations of fragmentation spectra have increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package "iontree" that can capture, store and analyze MS2 and MS3 mass spectral data from high-throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the utilization of the package for the systematic analysis and annotation of fragmentation spectra collected on various metabolomics platforms, including direct infusion mass spectrometry, and liquid chromatography coupled with either low-resolution or high-resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence complementary to retention time and accurate mass to aid with annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or for de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.
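
    As a flavour of what an MS2 spectral comparison such as distMS2 has to do, the sketch below computes a generic binned cosine similarity between two fragmentation spectra; this is a common baseline, not the actual distMS2 algorithm, and the peak lists are invented:

        import numpy as np

        def cosine_similarity(spec_a, spec_b, bin_width=1.0):
            """Generic binned cosine score between two (m/z, intensity) spectra."""
            lo = min(min(mz for mz, _ in spec_a), min(mz for mz, _ in spec_b))
            hi = max(max(mz for mz, _ in spec_a), max(mz for mz, _ in spec_b))
            n_bins = int((hi - lo) / bin_width) + 1
            va, vb = np.zeros(n_bins), np.zeros(n_bins)
            for mz, inten in spec_a:
                va[int((mz - lo) / bin_width)] += inten
            for mz, inten in spec_b:
                vb[int((mz - lo) / bin_width)] += inten
            return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

        a = [(81.1, 30.0), (109.1, 100.0), (137.0, 55.0)]   # invented peak lists
        b = [(81.1, 25.0), (109.1, 90.0), (195.1, 40.0)]
        print(round(cosine_similarity(a, b), 3))            # -> ~0.81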

  10. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σ_rms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
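
    The refinement strategy described, keeping the physics-based mass model and learning only its residuals, can be sketched as below. A plain feed-forward network stands in for the Bayesian neural network (so no uncertainties are produced), and the nuclei, masses, and base-model values are placeholders, not AME2016 or Duflo-Zuker numbers:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder inputs: (Z, N) per nucleus, experimental masses, and
        # predictions of a base model such as Duflo-Zuker (all values invented).
        ZN = np.array([[28, 30], [50, 70], [82, 126]])
        m_exp = np.array([-56.08, -87.13, -21.75])    # MeV, illustrative only
        m_base = np.array([-55.60, -87.60, -21.20])   # base-model masses, illustrative

        residuals = m_exp - m_base                    # what the network learns
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        net.fit(ZN, residuals)

        # Refined prediction = base model + learned residual correction.
        m_refined = m_base + net.predict(ZN)
        rms = np.sqrt(np.mean((m_exp - m_refined) ** 2))
        print(f"sigma_rms = {rms:.3f} MeV")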

  11. $J/\\Psi$ mass shift in nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Gastao Krein, Anthony Thomas, Kazuo Tsushima

    2011-02-01

    The $J/\\Psi$ mass shift in cold nuclear matter is computed using an effective Lagrangian approach. The mass shift is computed by evaluating $D$ and $D^*$ meson loop contributions to the $J/\\Psi$ self-energy employing medium-modified meson masses. The modification of the $D$ and $D^*$ masses in nuclear matter is obtained using the quark-meson coupling model. The loop integrals are regularized with dipole form factors and the sensitivity of the results to the values of form-factor cutoff masses is investigated. The $J/\\Psi$ mass shift arising from the modification of the $D$ and $D^*$ loops at normal nuclear matter density is found to range from $-16$~MeV to $-24$~MeV under a wide variation of values of the cutoff masses. Experimental perspectives for the formation of a bound state of $J/\\Psi$ to a nucleus are investigated.

  12. Improved mammographic interpretation of masses using computer-aided diagnosis

    International Nuclear Information System (INIS)

    Leichter, I.; Fields, S.; Novak, B.; Nirel, R.; Bamberger, P.; Lederman, R.; Buchbinder, S.

    2000-01-01

    The aim of this study was to evaluate the effectiveness of computerized image enhancement, to investigate criteria for discriminating benign from malignant mammographic findings by computer-aided diagnosis (CAD), and to test the role of quantitative analysis in improving the accuracy of interpretation of mass lesions. Forty sequential mammographically detected mass lesions referred for biopsy were digitized at high resolution for computerized evaluation. A prototype CAD system which included image enhancement algorithms was used for better visualization of the lesions. Quantitative features characterizing spiculation were automatically extracted by the CAD system for a user-defined region of interest (ROI). Reference ranges for malignant and benign cases were acquired from data generated by 214 known retrospective cases. The extracted parameters together with the reference ranges were presented to the radiologist for the analysis of 40 prospective cases. A pattern recognition scheme based on discriminant analysis was trained on the 214 retrospective cases and applied to the prospective cases. Accuracy of interpretation with and without the CAD system, as well as the performance of the pattern recognition scheme, were analyzed using receiver operating characteristic (ROC) curves. A significant difference was found between the extracted features of benign and malignant lesions; the area under the ROC curve (Az) increased significantly with use of the CAD system, and the Az for the results of the pattern recognition scheme was higher still (0.95). The results indicate improved accuracy of diagnosis with the use of the mammographic CAD system over that of the unassisted radiologist. Our findings suggest that objective quantitative features extracted from digitized mammographic findings may help in differentiating between benign and malignant masses and can assist the radiologist in the interpretation of mass lesions. (orig.)

  13. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models developed during the past decades and, according to the features of their target architectures, especially memory organization, classify these parallel computational models into three generations. The models and their characteristics are discussed on the basis of this three-generation classification. We believe that, with the ever-increasing speed gap between CPU and memory systems, incorporating non-uniform memory hierarchies into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated, and describing this hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the complexity of model analysis, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.

  14. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J_Ic. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak

  15. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and harmonious communication between human beings and computers. A new emotional agent model is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. First, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Second, a new emotional reasoning algorithm based on granular computing is proposed. Third, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  16. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  17. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  18. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  19. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs

  20. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Introduction to models of neutrino masses and mixings

    International Nuclear Information System (INIS)

    Joshipura, Anjan S.

    2004-01-01

    This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are i) different types of neutrino masses ii) structure of neutrino masses and mixing needed to understand neutrino oscillation results iii) mechanism to generate neutrino masses in gauge theories and iv) discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)

  2. Model for the generation of leptonic mass

    International Nuclear Information System (INIS)

    Fryberger, D.

    1979-01-01

    A self-consistent model for the generation of leptonic mass is developed. In this model it is assumed that bare masses are zero, all of the (charged) leptonic masses being generated by the QED self-interaction. A perturbation expansion for the QED self-mass is formulated, and contact is made between this expansion and the work of Landau and his collaborators. In order to achieve a finite result using this expansion, it is assumed that there is a cutoff at the Landau singularity and that the functional form of the (self-mass) integrand is the same beyond that singularity as it is below. Physical interpretations of these assumptions are discussed. Self-consistency equations are obtained which show that the Landau singularity is in the neighborhood of the Planck mass. This result implies that, as originally suggested by Landau, gravitation may play a role in an ultraviolet cutoff for QED. These equations also yield estimates for the (effective) number of additional pointlike particles that electromagnetically couple to the photon. This latter quantity is consistent with present data from e+e− storage rings

  3. A Method for Estimating Mass-Transfer Coefficients in a Biofilter from Membrane Inlet Mass Spectrometer Data

    DEFF Research Database (Denmark)

    Nielsen, Anders Michael; Nielsen, Lars Peter; Feilberg, Anders

    2009-01-01

    A membrane inlet mass spectrometer (MIMS) was used in combination with a computer model developed for this purpose to study and improve the management of a biofilter (BF) treating malodorous ventilation air from a meat rendering facility. The MIMS was used to determine percentage removal efficiencies (REs) of selected sulfur gases and to provide toluene retention profiles for the model to determine the air velocity and the overall mass-transfer coefficient of toluene. The mass-transfer coefficient of toluene was used as a reference for determining the mass transfer of the sulfur gases. By presenting the model with scenarios of a filter bed hosting a consortium of effective sulfur oxidizers, the most likely mechanism for incomplete removal of sulfur compounds from the exhaust air was elucidated. This was found to be insufficient mass transfer, and not inadequate bacterial activity as anticipated by the manager of the BF. Thus...
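
    The link between a fitted overall mass-transfer coefficient and the observed removal efficiency can be illustrated with a first-order plug-flow description of the filter bed; this is a textbook simplification, not the authors' model, and all numbers are assumed:

        import math

        def removal_efficiency(k_ov, bed_height_m, air_velocity_m_s):
            """First-order, mass-transfer-limited plug flow: RE = 1 - exp(-k*tau)."""
            tau = bed_height_m / air_velocity_m_s   # empty-bed residence time, s
            return 1.0 - math.exp(-k_ov * tau)

        k_ov = 0.08   # overall mass-transfer coefficient, 1/s (assumed)
        print(f"RE = {removal_efficiency(k_ov, 1.5, 0.1):.1%}")   # -> ~70%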

  4. CFD-model of the mass transfer in the vertical settler

    Directory of Open Access Journals (Sweden)

    E. K. Nagornaya

    2013-02-01

    Full Text Available Purpose. Mathematical models of secondary settlers are currently being intensively developed. As a rule, engineers use 0-D or 1-D models to design settlers, but these models do not take into account the hydrodynamics inside the settler or its geometrical form. CFD models based on the Navier-Stokes equations, in turn, are not widely used in practice because they require very fine grids. It is therefore important to develop CFD models that take into account the geometrical form of the settler and the most important physical processes while requiring little computer time. The development of a 2-D numerical model for investigating wastewater transfer in vertical settlers, accounting for the geometrical form and the constructive features of the settler, is thus essential. Methodology. Finite-difference schemes are applied. Findings. A new 2-D CFD model was developed that permits CFD investigation of a vertical settler. The model takes into account the geometrical form of the settler, the central pipe inside it, and other peculiarities. A porosity technique is used to represent the geometrical form of the settler in the numerical model; this technique permits building any geometrical form of the settler for CFD investigation. Originality. A CFD model was created that, on the one hand, takes into account the geometrical form of the settler and the basic physical processes of mass transfer in the construction and, on the other hand, requires low computer time. Practical value. The CFD model, and the code constructed on its basis, allows solving, at a computational cost comparable to that of a 1-D model, complex multiparameter problems that arise during the design of vertical settlers with their shape and

  5. MODELS OF NEPTUNE-MASS EXOPLANETS: EMERGENT FLUXES AND ALBEDOS

    International Nuclear Information System (INIS)

    Spiegel, David S.; Burrows, Adam; Ibgui, Laurent; Hubeny, Ivan; Milsom, John A.

    2010-01-01

    There are now many known exoplanets with M sin i within a factor of 2 of Neptune's, including the transiting planets GJ 436b and HAT-P-11b. Planets in this mass range are different from their more massive cousins in several ways that are relevant to their radiative properties and thermal structures. By analogy with Neptune and Uranus, they are likely to have metal abundances that are an order of magnitude or more greater than those of larger, more massive planets. This increases their opacity, decreases Rayleigh scattering, and changes their equation of state. Furthermore, their smaller radii mean that fluxes from these planets are roughly an order of magnitude lower than those of otherwise identical gas giant planets. Here, we compute a range of plausible radiative equilibrium models of GJ 436b and HAT-P-11b. In addition, we explore the dependence of generic Neptune-mass planets on a range of physical properties, including their distance from their host stars, their metallicity, the spectral type of their stars, the redistribution of heat in their atmospheres, and the possible presence of additional optical opacity in their upper atmospheres.

  6. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided...... methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...

  7. Modelling of Mass Transfer Phenomena in Chemical and Biochemical Reactor Systems using Computational Fluid Dynamics

    DEFF Research Database (Denmark)

    Larsson, Hilde Kristina

    the velocity and pressure distributions in a fluid. CFD also enables the modelling of several fluids simultaneously, e.g. gas bubbles in a liquid, as well as the presence of turbulence and dissolved chemicals in a fluid, and many other phenomena. This makes CFD an appreciated tool for studying flow structures......, mixing, and other mass transfer phenomena in chemical and biochemical reactor systems. In this project, four selected case studies are investigated in order to explore the capabilities of CFD. The selected cases are a 1 ml stirred microbioreactor, an 8 ml magnetically stirred reactor, a Rushton impeller...... and an ion-exchange reaction are also modelled and compared to experimental data. The thesis includes a comprehensive overview of the fundamentals behind a CFD software, as well as a more detailed review of the fluid dynamic phenomena investigated in this project. The momentum and continuity equations...

  8. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  9. Evaluation of incompressible hydrodynamic mass methods in reactor applications

    International Nuclear Information System (INIS)

    Takeuchi, K.

    1981-01-01

    The hydrodynamic (or virtual) mass approach is evaluated by comparing structural responses computed by the hydrodynamic mass method with those computed by the MULTIFLEX code for a fluid/structure interaction problem with fluid compression effects taken into account. The sample problem used in this evaluation is a simplified 1-D PWR model, which is first subjected to a LOCA-type transient. The time history of structural displacement computed with the hydrodynamic mass approach is compared with MULTIFLEX results. The frequencies of structural oscillation in the two computations agree; the amplitudes disagree by more than 50%, which is attributed to the effect of fluid compressibility. For the seismic study, sinusoidal forces are applied to the floor at the vessel support. The system responses are expressed by the response functions, i.e., the maximum values of the barrel/vessel relative displacements as the applied frequency is varied. The response functions are computed by the hydrodynamic mass method and by MULTIFLEX for evaluation of the virtual mass method. For the pump pulsation study, sinusoidal pressure oscillations are applied at the pump outlet and the response functions are computed as above. 12 refs
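
    The essence of the hydrodynamic (virtual) mass method in this record is that the surrounding incompressible fluid is lumped into an added inertia term, shifting structural frequencies downward. A one-degree-of-freedom sketch, with hypothetical stiffness and mass values:

      import numpy as np

      def natural_frequency_hz(k, m_structure, m_hydro=0.0):
          """Undamped natural frequency (Hz) with an added-mass term."""
          return np.sqrt(k / (m_structure + m_hydro)) / (2.0 * np.pi)

      k, m_s, m_h = 5.0e7, 2.0e4, 1.5e4      # N/m, kg, kg (illustrative)
      print(natural_frequency_hz(k, m_s))        # "dry" frequency
      print(natural_frequency_hz(k, m_s, m_h))   # "wet" frequency is lower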

  10. A Computational Model of Water Migration Flux in Freezing Soil in a Closed System

    Institute of Scientific and Technical Information of China (English)

    裘春晗

    2005-01-01

    A computational model of the water migration flux of fine porous soil in frost heave was investigated in a closed system. The model was established with the heat-mass conservation law and from some previous experimental results. By defining an auxiliary function, an empirical function in the water migration flux, which is difficult to obtain, was replaced. The data needed are the water content along the soil column after a sufficiently long test. We adopt the test data of the sample soil columns in [1] to verify the model. The result shows that the model can reflect the real situation on the whole.

  11. Two-phase wall friction model for the trace computer code

    International Nuclear Information System (INIS)

    Wang Weidong

    2005-01-01

    The wall drag model in the TRAC/RELAP5 Advanced Computational Engine computer code (TRACE) has certain known deficiencies. For example, in an annular flow regime, the code predicts an unphysically high liquid velocity compared to the experimental data. To address those deficiencies, a new wall frictional drag package has been developed and implemented in the TRACE code to model the wall drag in two-phase flow. The modeled flow regimes are (1) annular/mist, (2) bubbly/slug, and (3) bubbly/slug with wall nucleation. The new models use void fraction (instead of flow quality) as the correlating variable to minimize calculation oscillations. In addition, the models allow for transitions between the three regimes. The annular/mist regime is subdivided into three separate regimes for pure annular flow, annular flow with entrainment, and film breakdown. For adiabatic two-phase bubbly/slug flows, the vapor phase exists primarily outside of the boundary layer, and the wall shear uses the single-phase liquid velocity for the friction calculation. The vapor-phase wall friction drag is set to zero for bubbly/slug flows. For bubbly/slug flows with wall nucleation, the bubbles are present within the hydrodynamic boundary layer, and the two-phase wall friction drag is significantly higher, with a pronounced mass flux effect. An empirical correlation has been studied and applied to account for nucleate boiling. Verification and validation tests have been performed, and the test results showed a significant code improvement. (authors)
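
    The regime logic described above, with void fraction as the correlating variable and blending at regime transitions, can be sketched as follows; the thresholds are hypothetical placeholders, not TRACE's actual values:

      def wall_friction_regime(void_fraction, wall_nucleation=False):
          """Toy flow-regime selector keyed to void fraction."""
          a = void_fraction
          if not 0.0 <= a <= 1.0:
              raise ValueError("void fraction must lie in [0, 1]")
          if a < 0.75:
              return ("bubbly/slug with wall nucleation" if wall_nucleation
                      else "bubbly/slug")
          if a < 0.80:   # blend drag models rather than switching abruptly
              return "transition: interpolate bubbly/slug and annular/mist"
          return "annular/mist"

      print(wall_friction_regime(0.3, wall_nucleation=True))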

  12. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  13. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created using geoinformation technologies. The computer model makes it possible to provide information support for work related to the stabilization of the 'Ukrytie' object and its conversion into an ecologically safe system, and for analyzing, forecasting, and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model

  14. The Gogny-Hartree-Fock-Bogoliubov nuclear-mass model

    Energy Technology Data Exchange (ETDEWEB)

    Goriely, S. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium); Hilaire, S.; Girod, M.; Peru, S. [CEA, DAM, DIF, Arpajon (France)

    2016-07-15

    We present the Gogny-Hartree-Fock-Bogoliubov model, which reproduces nuclear masses with an accuracy comparable to that of the best mass formulas. In contrast to the Skyrme-HFB nuclear-mass models, an explicit and self-consistent account of all the quadrupole correlation energies is included within the 5D collective Hamiltonian approach. The final rms deviation with respect to the 2353 masses measured in the 2012 atomic mass evaluation is 789 keV. In addition, the D1M Gogny force is shown to predict nuclear and neutron matter properties in agreement with microscopic calculations based on realistic two- and three-body forces. The D1M properties and its predictions of various observables are compared with those of D1S and D1N. (orig.)
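
    The quoted 789 keV figure is a root-mean-square deviation over the evaluated masses. A sketch of how such a figure is computed, with placeholder arrays standing in for the 2353 evaluated masses:

      import numpy as np

      def rms_deviation_kev(m_model_mev, m_exp_mev):
          """rms deviation between model and measured masses, in keV."""
          d = np.asarray(m_model_mev) - np.asarray(m_exp_mev)
          return 1.0e3 * np.sqrt(np.mean(d ** 2))

      print(rms_deviation_kev([938.2, 939.6], [938.9, 938.9]))  # placeholder data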

  15. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the Sloan survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa), we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable

  16. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. This paper reviews model characteristics that have a direct bearing on the model input process, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present

  17. Mass generation in composite models

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1985-10-01

    I discuss aspects of composite models of quarks and leptons connected with the dynamics of how these fermions acquire mass. Several issues related to the protection mechanisms necessary to keep quarks and leptons light are illustrated by means of concrete examples and a critical overview of suggestions for family replications is given. Some old and new ideas of how one may actually be able to generate small quark and lepton masses are examined, along with some of the difficulties they encounter in practice. (orig.)

  18. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    Science.gov (United States)

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis and cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of the ECM or the shear stress distribution on the cells, but less is known about predicting shear stress on individual fibers or fiber networks, despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular, and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving the equations for mass and momentum conservation for all models. From the flow solutions, I estimated permeability using Darcy's law. The average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity, and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. The permeability predictions matched previous models well, which justifies the computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost unchanged. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present…
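
    Two of the quantities in this record follow from simple relations: permeability from Darcy's law, k = Q·mu·L / (A·dP), and the average shear stress from a mean of the wall shear over the fibers. A sketch; the area weighting is an assumption on my part, since the record says only "averaging":

      import numpy as np

      def darcy_permeability(q, mu, length, area, delta_p):
          """k (m^2) from Darcy's law; SI units for all arguments."""
          return q * mu * length / (area * delta_p)

      def average_shear_stress(wall_shear, fiber_areas):
          """Area-weighted average wall shear stress over the fibers (Pa)."""
          return np.average(np.asarray(wall_shear),
                            weights=np.asarray(fiber_areas))

      print(darcy_permeability(1e-9, 1e-3, 1e-3, 1e-6, 10.0))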

  19. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of the greater part of environmental and industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented here is now a complementary tool to experimental facilities for studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes, ...) are presented among others. [fr]

  20. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solutions of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  1. An algorithm for mass matrix calculation of internally constrained molecular geometries.

    Science.gov (United States)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-28

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solutions of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.
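
    One standard way to realize the constrained mass matrix this record describes is to project onto the null space of the constraint Jacobian, with the rank of the constraint matrix deciding how many degrees of freedom survive. A sketch of that generic null-space construction, not necessarily the authors' optimization formulation:

      import numpy as np

      def constrained_mass_matrix(mass, constraint_jacobian, tol=1e-10):
          """Effective mass matrix N^T M N, with N spanning null(J)."""
          m = np.asarray(mass, dtype=float)
          j = np.atleast_2d(np.asarray(constraint_jacobian, dtype=float))
          _, s, vt = np.linalg.svd(j)
          rank = int(np.sum(s > tol * s.max())) if s.size else 0
          n = vt[rank:].T                  # orthonormal null-space basis
          return n.T @ m @ n

      # Two unit masses constrained to move together (x1 - x2 = const):
      print(constrained_mass_matrix(np.eye(2), [[1.0, -1.0]]))
      # -> [[1.]] in the orthonormal reduced coordinate; the familiar
      #    combined mass 2.0 appears with the physical basis N = [1, 1]^T.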

  2. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
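
    The workload pattern in this record, many independent model-building jobs fanned out over rented machines, is what makes cloud scaling straightforward. A local sketch of the same pattern using process-level parallelism; scikit-learn is assumed as a stand-in learner (the study itself used ligand-based methods):

      from concurrent.futures import ProcessPoolExecutor

      def train_one(job):
          """Fit one model on one (X, y, params) triple."""
          X, y, params = job
          from sklearn.linear_model import LogisticRegression
          return LogisticRegression(**params).fit(X, y)

      def train_many(jobs, max_workers=8):
          """Each job is independent, so throughput scales with workers,
          or, in the cloud, with the number of instances rented."""
          with ProcessPoolExecutor(max_workers=max_workers) as pool:
              return list(pool.map(train_one, jobs))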

  3. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    GENERAL I ARTICLE. Computer Based ... universities, and later did system analysis, ... sonal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are .... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  4. Schwinger Model Mass Anomalous Dimension

    CERN Document Server

    Keegan, Liam

    2016-06-20

    The mass anomalous dimension for several gauge theories with an infrared fixed point has recently been determined using the mode number of the Dirac operator. In order to better understand the sources of systematic error in this method, we apply it to a simpler model, the massive Schwinger model with two flavours of fermions, where analytical results are available for comparison with the lattice data.
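
    The mode-number method extracts the mass anomalous dimension from a power-law fit. Assuming the usual scaling form nu(M) ~ M^(d/(1+gamma*)) near an infrared fixed point (the record does not spell this out), the exponent is the slope of log nu against log M:

      import numpy as np

      def gamma_star_from_mode_number(masses, mode_numbers, dim=2):
          """Fit gamma* from nu(M) ~ M**(dim / (1 + gamma*))."""
          slope, _ = np.polyfit(np.log(masses), np.log(mode_numbers), 1)
          return dim / slope - 1.0

      # Synthetic check: data generated with gamma* = 0.333 is recovered.
      M = np.linspace(0.05, 0.2, 10)
      nu = M ** (2.0 / (1.0 + 0.333))
      print(gamma_star_from_mode_number(M, nu))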

  5. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.

  6. Quantum Vertex Model for Reversible Classical Computing

    Science.gov (United States)

    Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng

    We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time and with transverse boundaries storing the computation's input and output. The model displays no finite-temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel, more efficient heuristic, 'annealing with learning', to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.

  7. Neutrino masses, mixings, and FCNC’s in an S3 flavor symmetric extension of the standard model

    International Nuclear Information System (INIS)

    Mondragón, A.; Mondragón, M.; Peinado, E.

    2011-01-01

    By introducing three Higgs fields that are SU(2) doublets and a flavor permutational symmetry, S3, into the theory, we extend the concepts of flavor and generations to the Higgs sector and formulate a Minimal S3-Invariant Extension of the Standard Model. The mass matrices of the neutrinos and charged leptons are re-parameterized in terms of their eigenvalues, then the neutrino mixing matrix, V_PMNS, is computed, and exact, explicit analytical expressions for the neutrino mixing angles as functions of the masses of the neutrinos and charged leptons are obtained, in excellent agreement with the latest experimental data. We also compute the branching ratios of some selected flavor-changing neutral current (FCNC) processes, as well as the contribution of the exchange of neutral flavor-changing scalars to the anomaly of the magnetic moment of the muon, as functions of the masses of the charged leptons and the neutral Higgs bosons. We find that the S3 × Z2 flavor symmetry and the strong mass hierarchy of the charged leptons strongly suppress the FCNC processes in the leptonic sector, well below the present experimental bounds by many orders of magnitude. The contribution of FCNCs to the anomaly of the muon's magnetic moment is small, but not negligible.

  8. Multi-valley effective mass theory for device-level modeling of open quantum dynamics

    Science.gov (United States)

    Jacobson, N. Tobias; Baczewski, Andrew D.; Frees, Adam; Gamble, John King; Montano, Ines; Moussa, Jonathan E.; Muller, Richard P.; Nielsen, Erik

    2015-03-01

    Simple models for semiconductor-based quantum information processors can provide useful qualitative descriptions of device behavior. However, as experimental implementations have matured, more specific guidance from theory has become necessary, particularly in the form of quantitatively reliable yet computationally efficient modeling. Besides modeling static device properties, improved characterization of noisy gate operations requires a more sophisticated description of device dynamics. Making use of recent developments in multi-valley effective mass theory, we discuss device-level simulations of the open system quantum dynamics of a qubit interacting with phonons and other noise sources. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  9. An assessment of mass burn incineration costs

    International Nuclear Information System (INIS)

    Fox, M.R.; Scutter, J.N.; Sutton, A.M.

    1993-01-01

    This study comprises the third and final part of a cost assessment exercise of waste-to-energy options. The specific objectives of this particular study were: to determine the capital and operating costs of three generic types of mass burn waste-to-energy systems, for waste inputs of 200,000 and 400,000 t/y of municipal solid waste (MSW); to verify the mass and energy balances of the systems; to develop a computer cost model to manipulate the data as required; to carry out sensitivity checks with the computer model on changes to key parameters; and to conduct the study in a manner approximating as closely as possible a real commercial situation. (author)

  10. Computational Models Used to Assess US Tobacco Control Policies.

    Science.gov (United States)

    Feirman, Shari P; Glasser, Allison M; Rose, Shyanika; Niaura, Ray; Abrams, David B; Teplitskaya, Lyubov; Villanti, Andrea C

    2017-11-01

    Simulation models can be used to evaluate existing and potential tobacco control interventions, including policies. The purpose of this systematic review was to synthesize evidence from computational models used to project population-level effects of tobacco control interventions. We provide recommendations to strengthen simulation models that evaluate tobacco control interventions. Studies were eligible for review if they employed a computational model to predict the expected effects of a non-clinical US-based tobacco control intervention. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Six primary non-clinical intervention types were examined across the 40 studies: taxation, youth prevention, smoke-free policies, mass media campaigns, marketing/advertising restrictions, and product regulation. Simulation models demonstrated the independent and combined effects of these interventions on decreasing projected future smoking prevalence. Taxation effects were the most robust, as studies examining other interventions exhibited substantial heterogeneity with regard to the outcomes and specific policies examined across models. Models should project the impact of interventions on overall tobacco use, including nicotine delivery product use, to estimate preventable health and cost-saving outcomes. Model validation, transparency, more sophisticated models, and modeling policy interactions are also needed to inform policymakers to make decisions that will minimize harm and maximize health. In this systematic review, evidence from multiple studies demonstrated the independent effect of taxation on decreasing future smoking prevalence, and models for other tobacco control interventions showed that these strategies are expected to decrease smoking, benefit population health, and are reasonable to implement from a cost perspective. Our recommendations aim to help policymakers and researchers minimize harm and maximize health.

  11. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through analysis of the model, the disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
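
    The structure of such virus models is a compartmental ODE system with recruitment of new computers, retirement of old ones, infection, and cure by antivirus software. A toy SIR-type sketch; parameter names and values are illustrative, not the paper's:

      from scipy.integrate import solve_ivp

      b, mu, beta, gamma = 0.1, 0.1, 0.5, 0.2  # inflow, retirement, infection, cure

      def rhs(t, y):
          s, i, r = y
          return [b - beta * s * i - mu * s,
                  beta * s * i - (gamma + mu) * i,
                  gamma * i - mu * r]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.95, 0.05, 0.0])
      r0 = beta * (b / mu) / (gamma + mu)   # basic reproduction number (toy)
      print(r0, sol.y[1, -1])               # r0 > 1 here, so infection persists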

  12. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.

  13. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  14. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  15. Consideration of turbulent deposition in aerosol behaviour modelling with the CONTAIN code and comparison of the computations to sodium release experiments

    International Nuclear Information System (INIS)

    Jonas, R.

    1988-09-01

    CONTAIN is a computer code used to analyze the physical, chemical, and radiological processes inside the reactor containment in the sequence of a severe reactor accident. Modelling of aerosol behaviour is included. We have improved the code by implementing a subroutine for turbulent deposition of aerosols. In contrast to previous calculations, in which this effect was neglected, the computed results are in good agreement with sodium release experiments. If a typical friction velocity of 1 m/s is chosen, the computed aerosol mass median diameters and aerosol mass concentrations agree with the experimental results within a factor of 1.5 or 2, respectively. We have also found good agreement between the CONTAIN calculations and results from other aerosol codes. (orig.) [de]

  16. Computed tomography of mediastinal masses

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Seong Tae; Lee, Jae Mun; Bahk, Yong Whee; Kim, Choon Yul [Catholic Medical College, Seoul (Korea, Republic of)

    1984-09-15

    The ability of CT scanning of the mediastinum to distinguish specific tissue densities and to display anatomy in a transverse plane often provides unique diagnostic information unobtainable with conventional radiographic methods. We retrospectively analyzed the CT findings of 20 cases of proven mediastinal masses at the Department of Radiology, St. Mary Hospital, Catholic Medical College, from February 1982 to June 1984. CT scans were performed with a Siemens Somatom 2 scanner. The technical factors were tube voltage 125 kVp, exposure time 5 seconds, 230 mAs, 256 x 256 matrices, and pixel size 1.3 mm. 8 mm slices were obtained at 1 cm intervals from the apex of the lung to the diaphragm. If necessary, additional scans at 5 mm intervals or magnified scans were obtained. After a pre-contrast scan, contrast scans were routinely taken with rapid drip infusion of contrast media (60% Conray, 150 cc). The results obtained were as follows. 1. Among the 20 cases, 11 were tumors, 4 were infectious masses, and 5 were aneurysms of the great vessels, tortuous brachiocephalic arteries, or pericardial fat pads. In each case CT showed the accurate location, extent, and nature of the mass. 2. Solid tumors were thymic hyperplasias, thymoma, thymus carcinoid, neurilemmoma, and germ cell tumors (seminoma, embryonal cell carcinoma). The internal architecture was homogeneous in thymoma, thymus carcinoid, neurilemmoma, and seminoma, but inhomogeneous in thymic hyperplasias and embryonal cell carcinoma. CT numbers ranged from 16 to 49 HU and were variably enhanced. 3. Cystic tumors consisted of teratomas, cystic hygroma, and neurilemmoma. Teratomas contained calcium and fat, appearing as inhomogeneous masses with strongly enhancing walls. Cystic hygroma was a nonenhancing mass with an HU of 20. 4. All of the germ cell tumors (two teratomas and one each of seminoma and embryonal cell carcinoma) and one of the two thymic hyperplasias had calcium deposits. 5. Tuberculous lymphadenopathies presented as masses in the retrocaval pretracheal space and hilar region…

  17. Applications of a computer model to the analysis of rock-backfill interaction in pillar recovery operations

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, T. J.E. [Dames and Moore, London, England, United Kingdom; Shillabeer, J. H. [Dames and Moore, Toronto (Canada); Herget, G. [CANMET, Ottawa (Canada)

    1980-05-15

    This paper describes the application of a computer model to the analysis of backfill stability in pillar recovery operations with particular reference to two case studies. An explicit finite difference computer program was developed for the purpose of modelling the three-dimensional interaction of rock and backfill in underground excavations. Of particular interest was the mechanics of stress transfer from the rock mass to the pillars and then the backfill. The need, therefore, for a model to allow for the three-dimensional effects and the sequence of operations is evident. The paper gives a brief description of the computer program, descriptions of the mines, the sequences of operations and how they were modelled, and the results of the analyses in graphical form. For both case studies, failure of the backfill was predicted at certain stages. Subsequent reports from the mines indicate that such failures did not occur at the relevant stage. The paper discusses the validity of the model and concludes that the approach accurately represents the principles of rock mechanics in cut-and-fill mining and that further research should be directed towards determining the input parameters to an equal degree of sophistication.

  18. 21st century changes in the surface mass balance of the Greenland ice sheet simulated with the global model CESM

    Science.gov (United States)

    Vizcaíno, M.; Lipscomb, W. H.; Van den Broeke, M.

    2012-04-01

    We present here the first projections of 21st-century surface mass balance change of the Greenland ice sheet simulated with the Community Earth System Model (CESM). CESM is a fully coupled global climate model developed at many research centers and universities, primarily in the U.S. The model calculates the surface mass balance in the land component (the Community Land Model, CLM), at the same resolution as the atmosphere (1 degree), with an energy-balance scheme. The snow physics included in CLM for non-glaciated surfaces (the SNICAR model; Flanner and Zender, 2005) are used over the ice sheet. The surface mass balance is calculated for 10 elevation classes and then downscaled to the grid of the ice sheet model (5 km in this case) via vertical linear interpolation between elevation classes combined with horizontal bilinear interpolation. The ice sheet topography is fixed at present-day values for the simulations presented here. The use of elevation classes reduces computational costs while giving results that reproduce well the mass balance gradients at the steep margins of the ice sheet. The simulated present-day surface mass balance agrees well with results from regional models. We focus on the regional model RACMO (Ettema et al. 2009) to compare results on 20th-century surface mass balance evolution and two-dimensional patterns. The surface mass balance of the ice sheet under RCP8.5 forcing becomes negative in the last decades of the 21st century. The equilibrium line becomes ~500 m higher on average. Accumulation changes are positive in the accumulation zone. We examine changes in refreezing, accumulation, albedo, surface fluxes, and the timing of the melt season.
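
    The downscaling step the abstract mentions, vertical linear interpolation between elevation classes, is simple enough to sketch for a single column; the class elevations and SMB values are hypothetical, and the horizontal bilinear part is omitted:

      import numpy as np

      def downscale_smb(class_elevations, class_smb, target_elevation):
          """SMB at a fine-grid elevation via vertical linear interpolation."""
          return np.interp(target_elevation, class_elevations, class_smb)

      z = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])        # m
      smb = np.array([-2000.0, -800.0, -100.0, 300.0, 450.0])   # kg m-2 yr-1
      print(downscale_smb(z, smb, 1400.0))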

  19. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.
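
    The trends reported above, natural frequencies falling as the liquid filling ratio rises and rising as joints stiffen, drop out of a generalized eigenvalue problem K φ = ω² M φ. A two-degree-of-freedom sketch with the liquid lumped as virtual mass; all numbers are hypothetical:

      import numpy as np
      from scipy.linalg import eigh

      def bending_frequencies(k_joint, m_liquid):
          k_wall = 4.0e6
          K = np.array([[k_wall + k_joint, -k_joint],
                        [-k_joint,          k_joint]])
          M = np.diag([150.0 + m_liquid, 100.0])   # liquid lumped below
          return np.sqrt(eigh(K, M, eigvals_only=True)) / (2.0 * np.pi)

      print(bending_frequencies(2.0e6, 0.0))     # empty tank
      print(bending_frequencies(2.0e6, 120.0))   # filled: frequencies drop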

  20. Modelling the water mass circulation in the Aegean Sea. Part I: wind stresses, thermal and haline fluxes

    Directory of Open Access Journals (Sweden)

    I. A. Valioulis

    1994-07-01

    The aim of this work is to develop a computer model capable of simulating the water mass circulation in the Aegean Sea. There is historical, phenomenological and recent experimental evidence of important hydrographical features whose causes have been variably identified as the highly complex bathymetry, the extreme seasonal variations in temperature, the considerable fresh water fluxes, and the large gradients in salinity or temperature across neighbouring water masses (Black Sea and Eastern Mediterranean). In the approach taken here, physical processes are introduced into the model one by one. This method reveals the parameters responsible for permanent and seasonal features of the Aegean Sea circulation. In the first part of the work reported herein, wind-induced circulation appears to be seasonally invariant. This yearly pattern is overcome by the inclusion of baroclinicity in the model in the form of surface thermohaline fluxes. The model shows an intricate pattern of sub-basin gyres and locally strong currents, permanent or seasonal, in accord with the experimental evidence.

  2. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

    Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to a potential impact and planning possible mitigation options. The density of an asteroid can illuminate its structure: a low density can be indicative of a rubble-pile structure, whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter in understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting the masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties, and intrinsic scatter.
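
    A stripped-down version of such a forecasting model is a power-law mass-density relation with intrinsic scatter, sampled with Metropolis steps. The data, priors, and step sizes below are synthetic stand-ins, not the study's catalogue:

      import numpy as np

      rng = np.random.default_rng(0)
      log_m = rng.uniform(8, 12, 40)                         # log10 mass
      log_rho = 0.2 + 0.1 * log_m + rng.normal(0, 0.15, 40)  # synthetic data

      def log_post(theta):
          """Gaussian likelihood in log-log space with scatter s (flat priors)."""
          a, b, s = theta
          if s <= 0:
              return -np.inf
          resid = log_rho - (a + b * log_m)
          return -0.5 * np.sum(resid ** 2 / s ** 2 + np.log(s ** 2))

      theta = np.array([0.0, 0.1, 0.2])
      lp = log_post(theta)
      chain = []
      for _ in range(20000):
          prop = theta + rng.normal(0, [0.05, 0.005, 0.01])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)
      print(np.mean(chain[5000:], axis=0))   # intercept, slope, scatter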

  3. Concept of effective atomic number and effective mass density in dual-energy X-ray computed tomography

    International Nuclear Information System (INIS)

    Bonnin, Anne; Duvauchelle, Philippe; Kaftandjian, Valérie; Ponard, Pascal

    2014-01-01

    This paper focuses on dual-energy X-ray computed tomography and especially the decomposition of the measured attenuation coefficient in a mass density and atomic number basis. In particular, the concept of effective atomic number is discussed. Although the atomic number is well defined for chemical elements, defining an effective atomic number for an arbitrary compound is not an easy task. After reviewing the different definitions available in the literature, a definition related to the method of measurement and the X-ray energy is suggested. A new concept of effective mass density is then introduced in order to characterize materials from dual-energy computed tomography. Finally, this new concept and definition are applied to a simulated case, focusing on explosives identification in luggage
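
    A common concrete choice behind "effective atomic number" is the power-law mixture rule Z_eff = (Σ w_i Z_i^p)^(1/p) over electron-fraction weights w_i, with the exponent p tied to the measurement method and energy, exactly the dependence the record stresses. A sketch using the often-quoted p = 2.94:

      import numpy as np

      def effective_atomic_number(weights, z_values, exponent=2.94):
          """Power-law effective Z; weights are electron fractions."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          z = np.asarray(z_values, dtype=float)
          return np.sum(w * z ** exponent) ** (1.0 / exponent)

      # Water from H and O electron fractions (0.2, 0.8): Z_eff ~ 7.4
      print(effective_atomic_number([0.2, 0.8], [1, 8]))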

  4. Electric solar wind sail mass budget model

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2013-02-01

    The electric solar wind sail (E-sail) is a new type of propellantless propulsion system for Solar System transportation, which uses the natural solar wind to produce spacecraft propulsion. The E-sail consists of thin centrifugally stretched tethers that are kept charged by an onboard electron gun and, as such, experience Coulomb drag through the high-speed solar wind plasma stream. This paper discusses a mass breakdown and a performance model for an E-sail spacecraft that hosts a mission-specific payload of prescribed mass. In particular, the model is able to estimate the total spacecraft mass and its propulsive acceleration as a function of various design parameters such as the number of tethers and their length. A number of subsystem masses are calculated assuming existing or near-term E-sail technology. In light of the obtained performance estimates, an E-sail represents a promising propulsion system for a variety of transportation needs in the Solar System.
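
    The mass budget described above is, at heart, a roll-up of subsystem masses against thrust that scales with total tether length. A toy version; every coefficient here is a placeholder, not a value from the paper:

      def esail_budget(n_tethers, tether_km, payload_kg,
                       tether_kg_per_km=0.011, thrust_n_per_km=5e-4,
                       bus_kg=50.0, gun_and_power_kg=20.0, margin=1.2):
          """Return (total mass in kg, characteristic acceleration in mm/s^2)."""
          tether_mass = n_tethers * tether_km * tether_kg_per_km
          total = margin * (payload_kg + bus_kg + gun_and_power_kg + tether_mass)
          thrust = n_tethers * tether_km * thrust_n_per_km      # newtons
          return total, 1000.0 * thrust / total

      print(esail_budget(n_tethers=100, tether_km=20.0, payload_kg=100.0))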

  5. Mass prophylactic screening of the organized female populaton using the Thermograph-Computer System

    International Nuclear Information System (INIS)

    Vepkhvadze, R.Ya.; Khvedelidze, E.Sh.

    1984-01-01

    Organizational aspects of the use of the Thermograph-Computer System have been analyzed. It has been shown that the results of thermodiagnosis completely coincide with the clinical conclusions, whereas the roentgenological method revealed disease in only 19 of 36 patients. Using the Thermograph-Computer System, 120 women can be examined during one working day for the early diagnosis of mammary gland diseases. A mobile thermodiagnostic room simultaneously served as an inspection room to detect visual forms of tumor diseases, including diseases of the cervix uteri, and may be used for mass preventive examination of the organized female population

  6. Calculation of the top quark mass in the flipped SU(5)xU(1) superstring model

    Energy Technology Data Exchange (ETDEWEB)

    Leontaris, G.K.; Rizos, J.; Tamvakis, K. (Ioannina Univ. (Greece). Dept. of Physics)

    1990-11-08

    We present a complete renormalization group calculation of the top-quark mass in the SU(5)xU(1) superstring model. We solve the coupled renormalization group equations for the gauge and Yukawa couplings in the two-loop approximation and obtain the top-quark mass as a function of two parameters of the model, which can be chosen to be ratios of singlet VEVs associated with the surplus (U(1))^4 breaking. We obtain a heavy top quark with 150 GeV ≤ m_t < 200 GeV for most of the parameter space, while lower values are possible only in a very small extremal region. We also compute the allowed range of unification parameters (M_X, sin²θ_W, α_3(M_W)) in the presence of a heavy top quark. (orig.)
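
    The two-loop flipped SU(5)xU(1) system itself is model specific, but the shape of such a calculation, coupled running of gauge and Yukawa couplings up to a high scale, can be illustrated with the one-loop Standard Model equations (GUT-normalized g1; rough coupling values at the Z mass):

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y):                    # t = ln(mu / M_Z)
          yt, g1, g2, g3 = y
          pre = 1.0 / (16.0 * np.pi ** 2)
          dyt = pre * yt * (4.5 * yt**2 - 0.85 * g1**2
                            - 2.25 * g2**2 - 8.0 * g3**2)
          return [dyt,
                  pre * (41.0 / 10.0) * g1**3,
                  pre * (-19.0 / 6.0) * g2**3,
                  pre * (-7.0) * g3**3]

      y0 = [1.0, 0.46, 0.65, 1.22]      # approximate y_t, g1, g2, g3 at M_Z
      sol = solve_ivp(rhs, (0.0, np.log(1.0e16 / 91.19)), y0, rtol=1e-8)
      print(sol.y[0, -1])               # top Yukawa near the unification scale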

  7. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    With the accumulation of next generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of the protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of the protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and structural data that are obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. Also, a short review of current efforts in integrating experimental data in the structural modeling is provided.

  8. Radiative neutrino mass model with degenerate right-handed neutrinos

    International Nuclear Information System (INIS)

    Kashiwase, Shoichi; Suematsu, Daijiro

    2016-01-01

    The radiative neutrino mass model can relate neutrino masses and dark matter at the TeV scale. If we apply this model to thermal leptogenesis, we need to consider resonant leptogenesis at that scale, which requires both finely degenerate masses for the right-handed neutrinos and a tiny neutrino Yukawa coupling. We propose an extension of the model with a U(1) gauge symmetry, in which these conditions are shown to be simultaneously realized through TeV-scale symmetry breaking. Moreover, this extension can bring about a small quartic scalar coupling between the Higgs doublet scalar and an inert doublet scalar, which characterizes the radiative neutrino mass generation. It is also the origin of the Z2 symmetry which guarantees the stability of dark matter. Several assumptions which are independently supposed in the original model are closely connected through this extension. (orig.)

  9. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Mafalda; Seery, David [Astronomy Centre, University of Sussex, Brighton BN1 9QH (United Kingdom); Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk [Department of Theoretical Physics, University of the Basque Country, UPV/EHU, 48040 Bilbao (Spain)

    2015-12-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  10. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    International Nuclear Information System (INIS)

    Dias, Mafalda; Seery, David; Frazer, Jonathan

    2015-01-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development

  11. Summary of the models and methods for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component

  12. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book aims to address four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate students…

  13. Computational models of airway branching morphogenesis.

    Science.gov (United States)

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis.

  14. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    In recent years, we have seen a significant number of new technological ideas appearing in the literature discussing the future of education. For example, e-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in the educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, obtaining hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received in a university campus computing lab requires a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars, such as learning to know, learning to be, learning…

  15. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both the fundamentals and applications to real problems, based upon their strong backgrounds in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses in engineering, applied mathematics and applied computation, where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  16. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
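    Of the four approaches named above, discrete-event simulation is the one most often applied to patient flow. As a minimal illustration (not taken from the article itself), the following Python sketch simulates a single-stage emergency department with Poisson arrivals and a fixed number of beds; all parameter values are hypothetical.

```python
import heapq
import random

def simulate_ed(n_patients=10000, arrival_rate=8.0, n_beds=10,
                mean_treatment=1.0, seed=1):
    """Single-stage ED: Poisson arrivals (per hour), n_beds parallel servers,
    exponential treatment times (hours). Returns the mean wait for a bed.
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    t = 0.0
    bed_free = [0.0] * n_beds          # time at which each bed next frees up
    heapq.heapify(bed_free)
    total_wait = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)          # next patient arrives
        soonest = heapq.heappop(bed_free)           # earliest available bed
        start = max(t, soonest)                     # wait if no bed is free
        total_wait += start - t
        heapq.heappush(bed_free, start + rng.expovariate(1.0 / mean_treatment))
    return total_wait / n_patients

print(f"mean wait for a bed: {simulate_ed():.3f} hours")
```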

  17. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas of pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Computability of functorial models is presented with algebraic trees realizing intuitionistic type...

  18. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  19. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
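    As a concrete instance of the kind of technique surveyed here, the Python sketch below applies proper orthogonal decomposition (POD), one standard reduced order method, to a toy snapshot set; the snapshot family and truncation threshold are illustrative assumptions, not drawn from the book.

```python
import numpy as np

# Snapshot matrix: each column is a high-dimensional solution of the full
# model at one parameter value (here: toy Gaussian profiles of varying width).
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
snapshots = np.column_stack([np.exp(-mu * (x - 0.5) ** 2 / 0.01) for mu in params])

# POD: the leading left singular vectors give the optimal linear basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes keeping 99.99% energy
basis = U[:, :r]

# A new solution is compressed to r coefficients and reconstructed.
u_new = np.exp(-1.3 * (x - 0.5) ** 2 / 0.01)
coeffs = basis.T @ u_new
error = np.linalg.norm(u_new - basis @ coeffs) / np.linalg.norm(u_new)
print(f"kept {r} of {len(params)} modes, reconstruction error {error:.2e}")
```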

  20. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  1. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  2. Computer model for the recombination zone of a microwave-plasma electrothermal rocket

    Energy Technology Data Exchange (ETDEWEB)

    Filpus, J.W.; Hawley, M.C.

    1987-01-01

    As part of a study of the microwave-plasma electrothermal rocket, a computer model of the flow regime below the plasma has been developed. A second-order model, including axial dispersion of energy and material and boundary conditions at infinite length, was developed to partially reproduce the absence of mass-flow-rate dependence seen in experimental temperature profiles. To solve the equations of the model, a search technique was developed to find the initial derivatives. On integrating with a trial set of initial derivatives, the values and their derivatives were checked to judge whether the solution was likely to leave the practical regime and hence violate the boundary conditions at infinity. Results are presented and directions for further development are suggested. 17 references.
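    The search for initial derivatives described here is, in modern terms, a shooting method for a boundary value problem with conditions at infinity. The abstract gives no equations, so the Python sketch below illustrates the idea on a toy problem, u'' = u with u(0) = 1 and u -> 0 as x -> infinity: trial slopes whose trajectories blow up are rejected, and bisection homes in on the admissible initial derivative (exactly -1 here).

```python
import math

def shoot(slope, x_max=20.0, dx=1e-3):
    """Integrate u'' = u from x = 0 with u(0) = 1, u'(0) = slope by explicit
    Euler, returning u(x_max); bail out early once the trajectory has clearly
    blown up (boundary condition at infinity violated)."""
    u, v = 1.0, slope
    for _ in range(int(x_max / dx)):
        u, v = u + dx * v, v + dx * u
        if abs(u) > 1e6:
            return math.copysign(math.inf, u)
    return u

# General solution A*e^x + B*e^(-x): slopes above -1 diverge to +inf, slopes
# below -1 to -inf, so bisect on the sign of the end value.
lo, hi = -2.0, 0.0          # shoot(lo) < 0 < shoot(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print(f"recovered initial derivative: {0.5 * (lo + hi):.4f} (exact: -1)")
```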

  3. Computer codes for three dimensional mass transport with non-linear sorption

    International Nuclear Information System (INIS)

    Noy, D.J.

    1985-03-01

    The report describes the mathematical background and data input to finite element programs for three dimensional mass transport in a porous medium. The transport equations are developed and sorption processes are included in a general way so that non-linear equilibrium relations can be introduced. The programs are described and a guide is given to the construction of the required input data sets. Concluding remarks indicate that the calculations require substantial computer resources and suggest that comprehensive preliminary analysis with lower dimensional codes would be important in the assessment of field data. (author)
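    The report describes three-dimensional finite element codes; as a much-reduced illustration of the same physics, the Python sketch below integrates a one-dimensional advection-dispersion equation with a nonlinear Freundlich equilibrium isotherm (one common choice of nonlinear sorption relation) by explicit finite differences. All parameter values are invented for illustration.

```python
import numpy as np

# 1-D advection-dispersion with nonlinear (Freundlich) equilibrium sorption:
#   R(c) * dc/dt = D * d2c/dx2 - v * dc/dx
#   R(c) = 1 + (rho_b / theta) * K * n * c**(n - 1)
# Explicit finite differences; every parameter value below is invented.
nx = 101
dx = 1.0 / (nx - 1)                     # domain length 1 m
D, v = 1e-4, 1e-3                       # dispersion (m^2/s), velocity (m/s)
rho_b, theta, K, n = 1600.0, 0.35, 1e-3, 0.8
dt = 0.4 * dx * dx / D                  # satisfies the explicit stability limit
c = np.zeros(nx)
c[0] = 1.0                              # fixed-concentration inlet

for _ in range(20000):
    # Nonlinear retardation factor from the Freundlich isotherm S = K*c^n
    R = 1.0 + (rho_b / theta) * K * n * np.maximum(c, 1e-12) ** (n - 1.0)
    d2c = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    dcdx = (c[2:] - c[:-2]) / (2.0 * dx)
    c[1:-1] += dt * (D * d2c - v * dcdx) / R[1:-1]
    c[-1] = c[-2]                       # zero-gradient outlet

print("cells with c > 0.5:", int(np.sum(c > 0.5)))
```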

  4. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  5. Third generation masses from a two Higgs model fixed point

    International Nuclear Information System (INIS)

    Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.

    1990-01-01

    The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)

  6. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game-changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: developing use-case studies for science workflows; creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernible requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  7. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played a critical role as theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena. We call such a situation an "open-ended situation". In this study, MAC/FAC ("many are called, but few are chosen"), proposed by [Forbus 95], which models the two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases they had learned in their everyday lives. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption: different similarities are involved at the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognition.
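    For readers unfamiliar with MAC/FAC: the MAC stage scores every case in memory with a cheap dot product of content vectors, and only the best few survive to the expensive structural (FAC) stage. The Python sketch below illustrates the MAC stage only, with cases reduced to bags of predicate names; the real model derives content vectors from structured representations, so this is a simplification.

```python
import numpy as np

def content_vector(case, vocabulary):
    """MAC-stage summary: counts of each predicate appearing in a case.
    Here cases are simply bags of predicate names (illustrative only)."""
    return np.array([case.count(p) for p in vocabulary], dtype=float)

cue = ["cause", "flow", "pressure", "greater", "flow"]
memory = {
    "water-flow":   ["cause", "flow", "pressure", "greater", "beaker", "vial"],
    "heat-flow":    ["cause", "flow", "temperature", "greater", "coffee", "bar"],
    "solar-system": ["attracts", "revolves", "mass", "greater"],
}
vocab = sorted({p for c in [cue, *memory.values()] for p in c})

v_cue = content_vector(cue, vocab)
# "Many are called": score every stored case with a cheap dot product and
# keep only the best few for the expensive structural (FAC) stage.
scores = {name: float(content_vector(c, vocab) @ v_cue) for name, c in memory.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} MAC score = {s:.0f}")
```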

  8. Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code

    International Nuclear Information System (INIS)

    Croff, A.G.; Bjerke, M.A.; Morrison, G.W.; Petrie, L.M.

    1978-09-01

    Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given

  9. Bone mass determination from microradiographs by computer-assisted videodensitometry. Pt. 2

    International Nuclear Information System (INIS)

    Kaelebo, P.; Strid, K.G.

    1988-01-01

    Aluminium was evaluated as a reference substance in the assessment of rabbit cortical bone by microradiography followed by videodensitometry. Ten dense, cortical-bone specimens from the same tibia diaphysis were microradiographed using prefiltered 27 kV roentgen radiation together with aluminium step wedges and bone simulating phantoms for calibration. Optimally exposed and processed plates were analysed by previously described computer-assisted videodensitometry. For comparison, the specimens were analysed by physico-chemical methods. A strict proportionality was found between the 'aluminium equivalent mass' and the ash weight of the specimens. The total random error was low with a coefficient of variation within 1.5 per cent. It was concluded that aluminium is an appropriate reference material in the determination of cortical bone, which it resembles in effective atomic number and thus X-ray attenuation characteristics. The 'aluminium equivalent mass' is suitably established as the standard of expressing the results of bone assessment by microradiography. (orig.)

  10. TBA equations for the mass gap in the O(2r) non-linear σ-models

    International Nuclear Information System (INIS)

    Balog, Janos; Hegedues, Arpad

    2005-01-01

    We propose TBA integral equations for 1-particle states in the O(n) non-linear σ-model for even n. The equations are conjectured on the basis of the analytic properties of the large volume asymptotics of the problem, which is explicitly constructed starting from Lüscher's asymptotic formula. For small volumes the mass gap values computed numerically from the TBA equations agree very well with results of three-loop perturbation theory calculations, providing support for the validity of the proposed TBA system

  11. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  12. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.

  13. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for the identification of business model elements impacting Cloud Computing adoption. We provide a definition of the main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing adoption, and investigate technology a...

  14. Hubble induced mass after inflation in spectator field models

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Tomohiro [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    Spectator field models such as the curvaton scenario and the modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble induced mass from the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble induced mass relaxes the constraints on the properties of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implications of our discussion for baryogenesis in the curvaton scenario. In the modulated reheating, the predictions of models, e.g. the non-Gaussianity, can be considerably altered. Furthermore, we propose a new model of modulated reheating utilizing the Hubble induced mass which realizes a wide range of the local non-Gaussianity parameter.

  15. Computer simulation of cascade damage in iron: PKA mass effects

    International Nuclear Information System (INIS)

    Calder, A.; Bacon, D.J.; Barashev, A.; Osetsky, Y.

    2007-01-01

    Full text of publication follows: Results are presented from an extensive series of computer simulations of the damage created by displacement cascades in alpha-iron. The objective has been to determine for the first time the effect of the mass of the primary knock-on atom (PKA) on defect number, defect clustering and cluster morphology. Cascades with PKA energy in the range 5 to 20 keV have been simulated by molecular dynamics for temperatures up to 600 K using an interatomic potential for iron for which the energy difference between the dumbbell interstitial and the crowdion is close to the value from ab initio calculation (Ackland et al., J. Phys.: Condens. Matter 2004). At least 30 cascades have been simulated for each condition in order to generate reasonable statistics. The influence of PKA species on damage has been investigated in two ways. In one, the PKA atom was treated as an Fe atom as far as its interaction with other atoms was concerned, but its atomic weight (in amu) was either 12 (C), 56 (Fe) or 209 (Bi). Pairs of Bi PKAs have also been used to mimic heavy molecular ion irradiation. In the other approach, the short-range pair part of the interatomic potential was changed from Fe-Fe to that for Bi-Fe, either with or without a change of PKA mass, in order to study the influence of high-energy collisions on the cascade outcome. It is found that PKA mass is more influential than the interatomic potential between the PKA and Fe atoms. At low cascade energy (5-10 keV), increasing PKA mass leads to a decrease in the number of interstitials and vacancies. At high energy (20 keV), the main effect of increasing mass is to increase the probability of creation of interstitial and vacancy clusters in the form of 1/2<111> and <100> dislocation loops. The simulation results are consistent with experimental TEM observations of damage in irradiated iron. (authors)

  16. Editorial: Modelling and computational challenges in granular materials

    OpenAIRE

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss the current progress and latest advancements in the field of advanced numerical methods and modelling of granular materials. The focus will be on computational methods, improved algorithms and the m...

  17. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  18. Exact Mass-Coupling Relation for the Homogeneous Sine-Gordon Model.

    Science.gov (United States)

    Bajnok, Zoltán; Balog, János; Ito, Katsushi; Satoh, Yuji; Tóth, Gábor Zsolt

    2016-05-06

    We derive the exact mass-coupling relation of the simplest multiscale quantum integrable model, i.e., the homogeneous sine-Gordon model with two mass scales. The relation is obtained by comparing the perturbed conformal field theory description of the model valid at short distances to the large distance bootstrap description based on the model's integrability. In particular, we find a differential equation for the relation by constructing conserved tensor currents, which satisfy a generalization of the Θ sum rule Ward identity. The mass-coupling relation is written in terms of hypergeometric functions.

  19. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  20. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M_*|M_h), which specify the average number of galaxies of stellar mass M_* that reside in a halo of mass M_h. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  1. Neutrino mass in flavor dependent gauged lepton model

    Science.gov (United States)

    Nomura, Takaaki; Okada, Hiroshi

    2018-03-01

    We study a neutrino model introducing an additional nontrivial gauged lepton symmetry, in which the neutrino masses are induced at the two-loop level, while the masses of the first- and second-generation charged leptons of the standard model are induced at the one-loop level. As a result of the model structure, we can predict one massless active neutrino, and there is a dark matter candidate. We then discuss the neutrino mass matrix, the muon anomalous magnetic moment, lepton flavor violations, oblique parameters, and the relic density of dark matter, taking into account the experimental constraints.

  2. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract. Background: Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results: The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion: During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  3. Angular momentum redistribution by spiral waves in computer models of disc galaxies

    International Nuclear Information System (INIS)

    Sellwood, J.A.; James, R.A.

    1979-01-01

    It is shown that the spiral patterns which develop spontaneously in computer models of galaxies are generated through angular momentum transfer. By adjusting the distribution of mass in the rigid halo components of the models it is possible to alter radically the rotation curve of the disc component. Either trailing or leading spiral arms develop in the models, dependent only on the sense of the differential shear; no spirals are seen in models where the disc rotates uniformly. It is found that the distribution of angular momentum in the disc is altered by the spiral evolution. Although some spiral structure can be seen for a long period, the life of each pattern is very short. It is shown that resonances are of major importance even for these transient patterns. All spiral wave patterns which have been seen possess both an inner Lindblad resonance and a co-rotation resonance. (author)

  4. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  5. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  6. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos Modelling with Computers - Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy, T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 29-39.

  7. A computer simulation model to compute the radiation transfer of mountainous regions

    Science.gov (United States)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative interactions between sunlight and the objects within the mountainous region are realized by forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. Using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.
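    A drastically simplified kernel of such a model is the terrain shadow test: marching rays from each surface cell toward the sun and jittering the sun direction Monte Carlo fashion to obtain soft shadow fractions. The Python sketch below implements only this element on a synthetic heightfield (all geometry and sampling parameters invented); the full model additionally handles scattering between adjacent slopes and spectral surface properties.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
xs = np.linspace(0.0, 1.0, N)
xg, yg = np.meshgrid(xs, xs)
z = 0.25 * np.exp(-((xg - 0.5) ** 2 + (yg - 0.5) ** 2) / 0.02)   # a single hill

def shadowed(ix, iy, sun_dir, step=1.0 / N):
    """March a ray from cell (ix, iy) toward the sun; the cell is shadowed
    if the ray dips below the terrain before leaving the domain."""
    p = np.array([xg[iy, ix], yg[iy, ix], z[iy, ix]])
    for _ in range(3 * N):
        p = p + step * sun_dir
        if not (0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0):
            return False                       # reached open sky
        i = min(int(p[0] * (N - 1) + 0.5), N - 1)
        j = min(int(p[1] * (N - 1) + 0.5), N - 1)
        if p[2] < z[j, i]:
            return True
    return False

# Monte Carlo: jitter the sun direction around its mean and average the
# binary visibility results into a soft shadow fraction per cell.
elev, azim, samples = np.radians(20.0), 0.0, 8
light = np.zeros((N, N))
for iy in range(N):
    for ix in range(N):
        vis = 0
        for _ in range(samples):
            e = elev + rng.normal(scale=0.02)
            a = azim + rng.normal(scale=0.02)
            d = np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])
            vis += not shadowed(ix, iy, d)
        light[iy, ix] = vis / samples
print(f"mean unshadowed fraction: {light.mean():.3f}")
```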

  8. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  9. Model-independent X-ray Mass Determinations for Clusters of Galaxies

    Science.gov (United States)

    Nulsen, Paul

    2005-09-01

    We propose to use high quality X-ray data from the Chandra archive to determine the mass distributions of about 60 clusters of galaxies over the largest possible range of radii. By avoiding unwarranted assumptions, model-independent methods make the best use of high quality data. We will employ two model-independent methods: the one used by Nulsen & Boehringer (1995) to determine the mass of the Virgo Cluster, and a new method that will be developed as part of the project. The new method will fit a general mass model directly to the X-ray spectra, making the best possible use of the fitting errors to constrain mass profiles.

  10. Trust models in ubiquitous computing.

    Science.gov (United States)

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need for more formal and foundational trust models.

  11. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  12. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  13. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs.

  14. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  15. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  16. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand and the cost and performance of the computers are known. The computing demand is assumed to increase monotonically every year. Four variants of the model are described. In model 1, a computer system is represented by its central processing unit (CPU) alone, and all of the computing demand must be processed on the present computer until the next replacement. In model 2, by contrast, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all of the demand. Model 4 is the same as model 3, but excess demand may be processed at another center. Also described are (1) the computing demand at JAERI, (2) the conformity of recent computers to Grosch's law, and (3) the replacement cost of computer systems. (author)
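    The paper's formulation is not reproduced in the abstract, but the flavor of model 2 (excess demand outsourced at a higher price) can be captured in a few lines of Python. Everything below (the sizing rule, prices, growth rate and fixed replacement cost) is an invented toy, intended only to show how an interior optimum for the replacement interval T arises.

```python
def average_annual_cost(T, demand0=1.0, growth=1.25, rent_per_capacity=1.0,
                        outsource_price=3.0, replacement_cost=2.0):
    """Toy cost model in the spirit of model 2 (not the paper's equations).
    A machine bought for an interval of T years is sized for the demand at
    the interval midpoint; demand beyond its capacity is processed at an
    outside center at a higher unit price; replacement itself has a fixed
    cost. Returns the average cost per year."""
    demand = [demand0 * growth ** year for year in range(T)]
    capacity = demand0 * growth ** (T / 2.0)          # assumed sizing rule
    rent = T * rent_per_capacity * capacity
    outsourced = sum(max(d - capacity, 0.0) for d in demand)
    return (rent + outsource_price * outsourced + replacement_cost) / T

best = min(range(1, 11), key=average_annual_cost)
for T in range(1, 11):
    tag = "  <- optimum" if T == best else ""
    print(f"T = {T:2d} years: cost/year = {average_annual_cost(T):.3f}{tag}")
```

    Short intervals pay the fixed replacement cost too often, while long intervals pay for idle capacity early on and expensive outsourcing late, so an intermediate interval minimizes the average annual cost.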

  17. An open-source library for the numerical modeling of mass-transfer in solid oxide fuel cells

    Science.gov (United States)

    Novaresio, Valerio; García-Camprubí, María; Izquierdo, Salvador; Asinari, Pietro; Fueyo, Norberto

    2012-01-01

    The generation of direct current electricity using solid oxide fuel cells (SOFCs) involves several interplaying transport phenomena. Their simulation is crucial for the design and optimization of reliable and competitive equipment, and for the eventual market deployment of this technology. An open-source library for the computational modeling of mass-transport phenomena in SOFCs is presented in this article. It includes several multicomponent mass-transport models (i.e., Fickian, Stefan-Maxwell and the Dusty Gas Model), which can be applied both within porous media and in porosity-free domains, and several diffusivity models for gases. The library has been developed for use with OpenFOAM®, a widespread open-source code for fluid and continuum mechanics. The library can be used to model any fluid flow configuration involving multicomponent transport phenomena, and it is validated in this paper against the analytical solution of one-dimensional test cases. In addition, it is applied to the simulation of a real SOFC and further validated using experimental data.
    Program summary:
    Program title: multiSpeciesTransportModels
    Catalogue identifier: AEKB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License
    No. of lines in distributed program, including test data, etc.: 18 140
    No. of bytes in distributed program, including test data, etc.: 64 285
    Distribution format: tar.gz
    Programming language: C++
    Computer: Any x86 (the instructions reported in the paper consider only the 64 bit case for the sake of simplicity)
    Operating system: Generic Linux (the instructions reported in the paper consider only the open-source Ubuntu distribution for the sake of simplicity)
    Classification: 12
    External routines: OpenFOAM® (version 1.6-ext) (http://www.extend-project.de)
    Nature of problem: This software provides a library of models for

  18. Modeling soft factors in computer-based wargames

    Science.gov (United States)

    Alexander, Steven M.; Ross, David O.; Vinarskai, Jonathan S.; Farr, Steven D.

    2002-07-01

    Computer-based wargames have seen much improvement in recent years due to rapid increases in computing power. Because these games have been developed for the entertainment industry, most of these advances have centered on the graphics, sound, and user interfaces integrated into these wargames with less attention paid to the game's fidelity. However, for a wargame to be useful to the military, it must closely approximate as many of the elements of war as possible. Among the elements that are typically not modeled or are poorly modeled in nearly all military computer-based wargames are systematic effects, command and control, intelligence, morale, training, and other human and political factors. These aspects of war, with the possible exception of systematic effects, are individually modeled quite well in many board-based commercial wargames. The work described in this paper focuses on incorporating these elements from the board-based games into a computer-based wargame. This paper will also address the modeling and simulation of the systemic paralysis of an adversary that is implied by the concept of Effects Based Operations (EBO). Combining the fidelity of current commercial board wargames with the speed, ease of use, and advanced visualization of the computer can significantly improve the effectiveness of military decision making and education. Once in place, the process of converting board wargames concepts to computer wargames will allow the infusion of soft factors into military training and planning.

  19. Higgs-boson contributions to gauge-boson mass shifts in extended electroweak models

    International Nuclear Information System (INIS)

    Moore, S.R.

    1985-10-01

    In the minimal standard model, the difference between the tree-level and one-loop-corrected predictions for the gauge-boson masses, known as the mass shift, is of the order of 4%. The dominant contribution is from light-fermion loops. The Higgs-dependent terms are small, even if the Higgs boson is heavy. We have analyzed the mass shifts for models with a more complicated Higgs sector. We use the on-shell renormalization scheme, in which the parameters of the theory are the physical masses and couplings. We have considered the 2-doublet, n-doublet, triplet and doublet-triplet models. We have found that the Z-boson mass prediction has a strong dependence on the charged-Higgs mass. In the limit that the charged Higgs is much heavier than the gauge bosons, the Higgs-dependent terms become significant, and may even cancel the light-fermion terms. In the models with a Higgs triplet, there is also a strong dependence on the neutral-Higgs masses, although this contribution tends to be suppressed in realistic models. The W-boson mass shift does not have a strong Higgs dependence. If we use the Z mass as input in determining the parameters of the theory, a scenario which will become attractive as the mass of the Z is accurately measured in the next few years, we find that the W-boson mass shift exhibits the same sort of behavior, differing from the minimal model for the case of the charged Higgs being heavy. We have found that when radiative corrections are taken into account, models with extended Higgs sectors may differ significantly from the minimal standard model in their predictions for the gauge-boson masses. Thus, an accurate measurement of the masses will help shed light on the structure of the Higgs sector. 68 refs

  20. CONSTRAINTS ON THE RELATIONSHIP BETWEEN STELLAR MASS AND HALO MASS AT LOW AND HIGH REDSHIFT

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Somerville, Rachel S.; Maulbetsch, Christian; Van den Bosch, Frank C.; Maccio, Andrea V.; Naab, Thorsten; Oser, Ludwig

    2010-01-01

    We use a statistical approach to determine the relationship between the stellar masses of galaxies and the masses of the dark matter halos in which they reside. We obtain a parameterized stellar-to-halo mass (SHM) relation by populating halos and subhalos in an N-body simulation with galaxies and requiring that the observed stellar mass function be reproduced. We find good agreement with constraints from galaxy-galaxy lensing and predictions of semi-analytic models. Using this mapping, and the positions of the halos and subhalos obtained from the simulation, we find that our model predictions for the galaxy two-point correlation function (CF) as a function of stellar mass are in excellent agreement with the observed clustering properties in the Sloan Digital Sky Survey at z = 0. We show that the clustering data do not provide additional strong constraints on the SHM function and conclude that our model can therefore predict clustering as a function of stellar mass. We compute the conditional mass function, which yields the average number of galaxies with stellar masses in the range m ± dm/2 that reside in a halo of mass M. We study the redshift dependence of the SHM relation and show that, for low-mass halos, the SHM ratio is lower at higher redshift. The derived SHM relation is used to predict the stellar mass dependent galaxy CF and bias at high redshift. Our model predicts that not only are massive galaxies more biased than low-mass galaxies at all redshifts, but also the bias increases more rapidly with increasing redshift for massive galaxies than for low-mass ones. We present convenient fitting functions for the SHM relation as a function of redshift, the conditional mass function, and the bias as a function of stellar mass and redshift.
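    The core of such a statistical approach is abundance matching: assigning stellar masses to simulated (sub)halos so that the observed stellar mass function is reproduced, most simply by equating cumulative number densities. The Python sketch below implements this rank-matching step with an invented halo catalogue, survey volume, and stellar mass function; the paper's actual parameterized fit is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy halo catalogue: 50,000 halos in an assumed volume, masses in Msun.
volume = 1.0e6                                   # (Mpc/h)^3, assumed
halo_mass = 10.0 ** (11.0 + 3.0 * rng.power(0.25, size=50000))

def n_cum_stellar(m_star):
    """Assumed cumulative stellar mass function n(> M*) per (Mpc/h)^3
    (an illustrative double power law, not the paper's fitted form)."""
    return 1e-2 * (m_star / 1e10) ** -0.5 * np.exp(-m_star / 10 ** 11.5)

# Abundance matching: the i-th most massive halo hosts the i-th most
# massive galaxy, i.e. n(> M_halo) = n(> M*). Invert n_cum on a grid.
grid = np.logspace(8.0, 12.5, 400)
n_grid = n_cum_stellar(grid)                     # decreasing in m_star
order = np.argsort(halo_mass)[::-1]              # halos, most massive first
rank_density = (np.arange(halo_mass.size) + 1.0) / volume
m_star = np.empty_like(halo_mass)
m_star[order] = np.interp(rank_density, n_grid[::-1], grid[::-1])

ratio = m_star / halo_mass
print(f"median M*/Mh = {np.median(ratio):.4f}")
```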

  1. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  2. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
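    For binary outputs, the second-order maximum noise entropy model described here is a logistic function of first- and second-order stimulus features. The Python sketch below generates synthetic data from such a model and recovers the weights by gradient ascent on the log-likelihood; the stimulus dimensionality and true weights are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "neuron": binary output whose spike probability is logistic in
# first- and second-order functions of a 2-D stimulus, matching the form
# that maximum-noise-entropy arguments single out for binary outputs.
n, d = 20000, 2
S = rng.standard_normal((n, d))

def features(S):                      # [s1, s2, s1^2, s1*s2, s2^2]
    return np.column_stack([S, S[:, :1] ** 2, S[:, :1] * S[:, 1:], S[:, 1:] ** 2])

w_true = np.array([1.0, -0.5, -0.8, 0.4, -0.2])
p = 1.0 / (1.0 + np.exp(-(features(S) @ w_true - 0.5)))
y = rng.random(n) < p

# Fit the second-order logistic model by gradient ascent on the likelihood.
X = np.column_stack([np.ones(n), features(S)])
w = np.zeros(X.shape[1])
for _ in range(2000):
    q = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - q) / n
# Approximately recovers [-0.5, 1.0, -0.5, -0.8, 0.4, -0.2] (bias first).
print("recovered weights:", np.round(w, 2))
```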

  3. Segmentation and Estimation of the Histological Composition of the Tumor Mass in Computed Tomographic Images of Neuroblastoma

    National Research Council Canada - National Science Library

    Ayres, Fabio

    2001-01-01

    The problem that we investigate in the present paper is the improvement of the analysis of the primary tumor mass, in patients with advanced neuroblastoma, using X-ray computed tomography (CT) exams...

  4. Shear viscosity of liquid mixtures: Mass dependence

    International Nuclear Information System (INIS)

    Kaushal, Rohan; Tankeshwar, K.

    2002-06-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. The theoretical results are found to be in good agreement with the computer simulation results for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios, the shear viscosity of the mixture is not explained by any of the empirical models. (author)
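    A schematic version of the moments-based route to viscosity: approximate the transverse stress autocorrelation C(t) by a Gaussian whose amplitude and width are fixed by the zeroth and second sum rules, then take the Green-Kubo time integral. The Python sketch below does this with invented reduced-unit values; the paper's actual Mori memory-function treatment (which also uses the fourth sum rule) is more sophisticated.

```python
import numpy as np

# Schematic sum-rule construction: approximate the transverse stress
# autocorrelation C(t) by a Gaussian whose amplitude and decay time are
# fixed by its zeroth and second frequency sum rules M0 and M2.
M0, M2 = 24.0, 1.9e3            # sum-rule values (assumed, reduced units)
tau = np.sqrt(M0 / M2)           # Gaussian decay time implied by the moments
t = np.linspace(0.0, 10.0 * tau, 2001)
C = M0 * np.exp(-0.5 * (t / tau) ** 2)

# Green-Kubo: the shear viscosity is proportional to the time integral of
# C(t); the (V / kB T) prefactor is absorbed into the reduced units here.
eta = float(np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(t)))
print(f"tau = {tau:.4f}, eta = {eta:.3f}, "
      f"analytic = {M0 * tau * np.sqrt(np.pi / 2):.3f}")
```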

  5. Shear viscosity of liquid mixtures Mass dependence

    CERN Document Server

    Kaushal, R

    2002-01-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. The theoretical results are found to be in good agreement with the computer simulation results for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios, the shear viscosity of the mixture is not explained by any of the empirical models.

  6. Computer-aided diagnosis of mammographic masses using geometric verification-based image retrieval

    Science.gov (United States)

    Li, Qingliang; Shi, Weili; Yang, Huamin; Zhang, Huimao; Li, Guoxin; Chen, Tao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Masses in mammograms are an important indicator of breast cancer, and computer-aided diagnosis is increasingly used to detect them. The use of retrieval systems in breast examination is growing steadily. In this respect, methods exploiting the vocabulary tree framework and the inverted file for mammographic mass retrieval have been shown to achieve high accuracy and excellent scalability. However, they treat the features in each image as isolated visual words and ignore the spatial configuration of the features, which greatly degrades retrieval performance. To overcome this drawback, we introduce a geometric verification method for the retrieval of mammographic masses. First, we obtain corresponding matched features based on the vocabulary tree framework and the inverted file. We then capture the local similarity of deformations by constructing circular regions around corresponding pairs. Each circle is segmented to express the geometric relationship of the local matches within it, and a strict spatial encoding is generated. Finally, we judge whether the matched features are correct by verifying that all spatial encodings satisfy geometric consistency. Experiments show the promising results of our approach.
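    The paper's circle-based spatial encoding is not spelled out in the abstract, so the Python sketch below illustrates geometric verification in its most generic RANSAC-style form: hypothesize a similarity transform from random pairs of matches and keep the matches consistent with the best hypothesis. It is a stand-in for, not a reproduction of, the authors' encoding.

```python
import numpy as np

rng = np.random.default_rng(3)

def verify_matches(pts_a, pts_b, trials=200, tol=2.0):
    """RANSAC-style geometric verification: hypothesize a 2-D similarity
    transform (rotation + scale + translation) from random pairs of matches
    and keep the largest set of matches consistent with one hypothesis."""
    za = pts_a[:, 0] + 1j * pts_a[:, 1]    # encode points as complex numbers:
    zb = pts_b[:, 0] + 1j * pts_b[:, 1]    # a similarity is then z -> s*z + t
    best = np.zeros(len(za), dtype=bool)
    for _ in range(trials):
        i, j = rng.choice(len(za), size=2, replace=False)
        if abs(za[j] - za[i]) < 1e-9:
            continue
        s = (zb[j] - zb[i]) / (za[j] - za[i])
        t = zb[i] - s * za[i]
        inliers = np.abs(s * za + t - zb) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 30 correct matches under a known rotation + scale, plus 10 false matches.
A = rng.uniform(0, 100, (40, 2))
theta, scale = 0.3, 1.2
R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
B = A @ R.T + np.array([5.0, -3.0])
B[30:] = rng.uniform(0, 100, (10, 2))        # corrupt the last 10 matches
ok = verify_matches(A, B)
print(f"{int(ok.sum())} of {len(ok)} matches verified as geometrically consistent")
```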

  7. Computational analysis of coupled fluid, heat, and mass transport in ferrocyanide single-shell tanks: FY 1994 interim report. Ferrocyanide Tank Safety Project

    International Nuclear Information System (INIS)

    McGrail, B.P.

    1994-11-01

    A computer modeling study was conducted to determine whether natural convection processes in single-shell tanks containing ferrocyanide wastes could generate localized precipitation zones that significantly concentrate the major heat-generating radionuclide, ¹³⁷Cs. A computer code was developed that simulates coupled fluid, heat, and single-species mass transport on a regular, orthogonal finite-difference grid. The analysis showed that development of a ''hot spot'' is critically dependent on the temperature dependence of the solubility of Cs₂NiFe(CN)₆ or CsNaNiFe(CN)₆. For the normal case, where solubility increases with increasing temperature, the net effect of fluid flow, heat, and mass transport is to disperse any local zones of high heat generation rate. As a result, hot spots cannot physically develop for this case. However, assuming a retrograde solubility dependence, the simulations indicate the formation of localized deposition zones that concentrate the ¹³⁷Cs near the bottom center of the tank, where the temperatures are highest. Recent experimental studies suggest that Cs₂NiFe(CN)₆(c) does not exhibit retrograde solubility over the temperature range 25 °C to 90 °C and NaOH concentrations to 5 M. Assuming these preliminary results are confirmed, no natural mass transport process exists for generating a hot spot in the ferrocyanide single-shell tanks.

  8. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of the surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed: prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by: F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
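
    Once G(r) is in hand, both integrals reduce to Monte Carlo averages over draws from G. A minimal sketch follows; the normal stand-in for G and the example event are assumptions for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for G(r): a fitted normal over a scalar response.
    def sample_G(n):
        return rng.normal(loc=2.0, scale=0.5, size=n)

    def prob_event(event, n=100_000):
        """Monte Carlo estimate of prob(E) = ∫ I(E(r)) dG(r)."""
        r = sample_G(n)
        return np.mean(event(r))

    def cdf_of_pm(pm, z, n=100_000):
        """Induced distribution F_pm(z) = ∫ I(pm(r) <= z) dG(r)."""
        r = sample_G(n)
        return np.mean(pm(r) <= z)

    p = prob_event(lambda r: r > 3.0)     # probability the response exceeds 3
    F = cdf_of_pm(lambda r: r**2, z=4.0)  # CDF of performance measure pm = r^2 at z = 4
    ```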

  9. Calibration of a surface mass balance model for global-scale applications

    NARCIS (Netherlands)

    Giesen, R. H.; Oerlemans, J.

    2012-01-01

    Global applications of surface mass balance models have large uncertainties, as a result of poor climate input data and limited availability of mass balance measurements. This study addresses several possible consequences of these limitations for the modelled mass balance. This is done by applying a

  10. A measurement-based X-ray source model characterization for CT dosimetry computations.

    Science.gov (United States)

    Sommerville, Mitchell; Poirier, Yannick; Tambasco, Mauro

    2015-11-08

    The purpose of this study was to show that the nominal peak tube voltage potential (kVp) and measured half-value layer (HVL) can be used to generate energy spectra and fluence profiles for characterizing a computed tomography (CT) X-ray source, and to validate the source model and an in-house kV X-ray dose computation algorithm (kVDoseCalc) for computing machine- and patient-specific CT dose. Spatial variation of the X-ray source spectra of a Philips Brilliance and a GE Optima Big Bore CT scanner was found by measuring the HVL along the direction of the internal bow-tie filter axes. Third-party software, Spektr, and the nominal kVp settings were used to generate the energy spectra. Beam fluence was calculated by dividing the integral product of the spectra and the in-air NIST mass-energy attenuation coefficients by in-air dose measurements along the filter axis. The authors found the optimal number of photons to seed in kVDoseCalc to achieve dose convergence. The Philips Brilliance beams were modeled for 90, 120, and 140 kVp tube settings. The GE Optima beams were modeled for 80, 100, 120, and 140 kVp tube settings. Relative doses measured using a Capintec Farmer-type ionization chamber (0.65 cc) placed in a cylindrical polymethyl methacrylate (PMMA) phantom and irradiated by the Philips Brilliance were compared to those computed with kVDoseCalc. Relative doses in an anthropomorphic thorax phantom (E2E SBRT Phantom) irradiated by the GE Optima were measured using a (0.015 cc) PTW Freiburg ionization chamber and compared to computations from kVDoseCalc. The number of photons required to reduce the average statistical uncertainty in dose to below 1% was determined; the average percent difference between calculation and measurement over all 12 PMMA phantom positions was found to be 1.44%, 1.47%, and 1.41% for 90, 120, and 140 kVp, respectively. The maximum percent difference between calculation and measurement for all energies, measurement positions, and phantoms was less than 3.50%. Thirty-five out of a total of 36 simulation conditions were
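
    The fluence normalization step described above relates the measured in-air dose to the spectrum-weighted kerma integral. A sketch under stated assumptions (placeholder arrays stand in for the Spektr spectrum and the NIST mass energy-absorption coefficients; here fluence is computed as dose divided by kerma per unit fluence):

    ```python
    import numpy as np

    def fluence_from_dose(energies_keV, spectrum, mu_en_rho, dose_measured_Gy):
        """Scale a relative energy spectrum to an absolute photon fluence using
        an in-air dose measurement. Sketch with placeholder inputs: real use
        would take the spectrum from Spektr and mu_en/rho [m^2/kg] from the
        NIST tables."""
        psi = spectrum / np.trapz(spectrum, energies_keV)  # normalized spectrum
        E_J = energies_keV * 1.602176634e-16               # keV -> joules
        # air kerma per unit fluence [Gy * m^2]
        kerma_per_fluence = np.trapz(psi * E_J * mu_en_rho, energies_keV)
        return dose_measured_Gy / kerma_per_fluence        # photons per m^2
    ```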

  11. Computational and Simulation Modeling of Political Attitudes: The 'Tiger' Area of Political Culture Research

    Directory of Open Access Journals (Sweden)

    Voinea, Camelia Florela

    2016-01-01

    Full Text Available In its almost century-long history, political attitudes modeling research has accumulated a critical mass of theory and method. Its characteristics and particularities have often suggested that the political attitude approach to political persuasion modeling reveals a strong theoretical autonomy of concept which entitles it to become a separate discipline of research. Though this did not actually happen, political attitudes modeling research has remained the most challenging area – the “tiger” – of political culture modeling research. This paper reviews the research literature on the conceptual, computational and simulation modeling of political attitudes developed from the beginning of the 20th century until the present times. Several computational and simulation modeling paradigms have provided support to political attitudes modeling research. These paradigms and the shifts from one to another are briefly presented for a period of almost one century. The dominant paradigmatic views are those inspired by Newtonian mechanics, and those based on the principle of methodological individualism and the emergence of macro phenomena from individual interactions at the micro level of a society. This period is divided into eight ages covering the history of ideas in a wide range of political domains, going from political attitudes to polity modeling. Internal and external pressures for paradigmatic change are briefly explained.

  12. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  13. Dependence of X-Ray Burst Models on Nuclear Masses

    Energy Technology Data Exchange (ETDEWEB)

    Schatz, H.; Ong, W.-J. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States)

    2017-08-01

    X-ray burst model predictions of light curves and the final composition of the nuclear ashes are affected by uncertain nuclear masses. However, not all of these masses are determined experimentally with sufficient accuracy. Here we identify the remaining nuclear mass uncertainties in X-ray burst models using a one-zone model that takes into account the changes in temperature and density evolution caused by changes in the nuclear physics. Two types of bursts are investigated—a typical mixed H/He burst with a limited rapid proton capture process (rp-process) and an extreme mixed H/He burst with an extended rp-process. When allowing for a 3σ variation, only three remaining nuclear mass uncertainties affect the light-curve predictions of a typical H/He burst (²⁷P, ⁶¹Ga, and ⁶⁵As), and only three additional masses affect the composition strongly (⁸⁰Zr, ⁸¹Zr, and ⁸²Nb). A larger number of mass uncertainties remain to be addressed for the extreme H/He burst, with the most important being ⁵⁸Zn, ⁶¹Ga, ⁶²Ge, ⁶⁵As, ⁶⁶Se, ⁷⁸Y, ⁷⁹Y, ⁷⁹Zr, ⁸⁰Zr, ⁸¹Zr, ⁸²Zr, ⁸²Nb, ⁸³Nb, ⁸⁶Tc, ⁹¹Rh, ⁹⁵Ag, ⁹⁸Cd, ⁹⁹In, ¹⁰⁰In, and ¹⁰¹In. The smallest mass uncertainty that still impacts composition significantly when varied by 3σ is ⁸⁵Mo with 16 keV uncertainty. For one of the identified masses, ²⁷P, we use the isobaric mass multiplet equation to improve the mass uncertainty, obtaining an atomic mass excess of −716(7) keV. The results provide a roadmap for future experiments at advanced rare isotope beam facilities, where all the identified nuclides are expected to be within reach for precision mass measurements.

  14. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.

  15. Applying an orographic precipitation model to improve mass balance modeling of the Juneau Icefield, AK

    Science.gov (United States)

    Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.

    2017-12-01

    Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements for glacier mass balance modeling studies must be addressed first. We have used a linear theory of orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model is a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation so the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We established a reference run of parameter values based on literature and performed a sensitivity analysis of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors. Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with
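
    For orientation, the core of an LT-type orographic precipitation model can be written in a few lines of FFT algebra. The sketch below is a simplified, hydrostatic illustration in the style of the Smith-Barstad (2004) linear model, a standard formulation of this model class; the study's exact implementation differs, and every parameter value shown (wind, Cw, Hw, Nm, the delays tau_c and tau_f, background rate) is an illustrative assumption:

    ```python
    import numpy as np

    def lt_precip(h, dx, dy, U=10.0, V=2.0, Cw=0.004, Hw=2500.0,
                  Nm=0.005, tau_c=1000.0, tau_f=1000.0, P0=1.0e-4):
        """Simplified linear-theory orographic precipitation on a regular
        terrain grid h [m]. tau_c/tau_f are the cloud conversion and fallout
        delays [s] (the tuned parameters discussed in the abstract)."""
        ny, nx = h.shape
        k = 2 * np.pi * np.fft.fftfreq(nx, dx)
        l = 2 * np.pi * np.fft.fftfreq(ny, dy)
        K, L = np.meshgrid(k, l)
        sigma = U * K + V * L                            # intrinsic frequency
        sigma = np.where(np.abs(sigma) < 1e-12, 1e-12, sigma)
        m = Nm * np.sqrt(K**2 + L**2) / np.abs(sigma)    # hydrostatic vertical wavenumber
        h_hat = np.fft.fft2(h)
        P_hat = (Cw * 1j * sigma * h_hat) / ((1 - 1j * m * Hw)
                 * (1 + 1j * sigma * tau_c) * (1 + 1j * sigma * tau_f))
        P = P0 + np.real(np.fft.ifft2(P_hat))
        return np.maximum(P, 0.0)                        # truncate negative rates
    ```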

  16. Nucleon structure by Lattice QCD computations with twisted mass fermions

    International Nuclear Information System (INIS)

    Harraud, P.A.

    2010-11-01

    Understanding the structure of the nucleon from quantum chromodynamics (QCD) is one of the greatest challenges of hadronic physics. Only lattice QCD makes it possible to determine numerically the values of the observables from ab initio principles. This thesis studies the nucleon form factors and the first moments of parton distribution functions using a discretized action with twisted mass fermions. Its main advantage is that discretization effects are suppressed at first order in the lattice spacing. In addition, the set of simulations allows good control of the systematic errors. After reviewing the computation techniques, the results obtained for a wide range of parameters are presented, with lattice spacings varying from 0.056 fm to 0.089 fm, spatial extents from 2.1 up to 2.7 fm, and several pion masses in the range of 260-470 MeV. The vector renormalization constant was determined in the nucleon sector with improved precision. Concerning the electric charge radius, we found a finite volume effect that provides a key towards an explanation of the chiral dependence at the physical point. The results for the magnetic moment, the axial charge, the magnetic and axial charge radii, and the momentum and spin fractions carried by the quarks show no dependence on the lattice spacing or volume. In our range of pion masses, their values deviate from the experimental values. Their chiral behaviour does not exhibit the curvature predicted by chiral perturbation theory, which could explain the apparent discrepancy. (author)

  17. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of spline hob mathematical model development. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of a spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application presents the opportunity to use the acquired model for milling process imitation. An example of the evaluation, analytical representation and computer modeling of the proposed geometrical model is reviewed. In the mentioned example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.
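
    As a reference point for the parametric vector functions mentioned above, a bare helical cutting edge is simply a circular helix. The sketch below (in Python rather than AutoLISP, with illustrative radius and pitch) shows this basic form, before the tool-positioning motions the paper adds:

    ```python
    import numpy as np

    def helical_edge(r, pitch, t):
        """Parametric vector function of a helical edge: radius r, axial rise
        'pitch' per radian of turn. The paper's full model superimposes the
        machine-tool positioning motions, omitted here."""
        t = np.asarray(t, float)
        return np.stack([r * np.cos(t), r * np.sin(t), pitch * t], axis=-1)

    # sample 200 points along two full turns of the edge
    edge = helical_edge(r=30.0, pitch=2.0, t=np.linspace(0, 4 * np.pi, 200))
    ```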

  18. Improved mammographic interpretation of masses using computer-aided diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Leichter, I. [Dept. of Electro-Optics, Jerusalem College of Technology (Israel); Fields, S.; Novak, B. [Dept. of Radiology, Hadassah University Hospital, Mt. Scopus Jerusalem (Israel); Nirel, R. [Dept. of Statistics, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem (Israel); Bamberger, P. [Dept. of Electronics, Jerusalem College of Technology, Jerusalem (Israel); Lederman, R. [Department of Radiology, Hadassah University Hospital, Ein Kerem, Jerusalem (Israel); Buchbinder, S. [Department of Radiology, Montefiore Medical Center, University Hospital for the Albert Einstein College of Medicine, Bronx, New York (United States)

    2000-02-01

    The aim of this study was to evaluate the effectiveness of computerized image enhancement, to investigate criteria for discriminating benign from malignant mammographic findings by computer-aided diagnosis (CAD), and to test the role of quantitative analysis in improving the accuracy of interpretation of mass lesions. Forty sequential mammographically detected mass lesions referred for biopsy were digitized at high resolution for computerized evaluation. A prototype CAD system which included image enhancement algorithms was used for a better visualization of the lesions. Quantitative features which characterize the spiculation were automatically extracted by the CAD system for a user-defined region of interest (ROI). Reference ranges for malignant and benign cases were acquired from data generated by 214 known retrospective cases. The extracted parameters together with the reference ranges were presented to the radiologist for the analysis of 40 prospective cases. A pattern recognition scheme based on discriminant analysis was trained on the 214 retrospective cases, and applied to the prospective cases. Accuracy of interpretation with and without the CAD system, as well as the performance of the pattern recognition scheme, were analyzed using receiver operating characteristic (ROC) curves. A significant difference (p < 0.005) was found between features extracted by the CAD system for benign and malignant cases. Specificity of the CAD-assisted diagnosis improved significantly (p < 0.02) from 14% for the conventional assessment to 50%, and the positive predictive value increased from 0.47 to 0.62 (p < 0.04). The area under the ROC curve (A_z) increased significantly (p < 0.001) from 0.66 for the conventional assessment to 0.81 for the CAD-assisted analysis. The A_z for the results of the pattern recognition scheme was higher (0.95). The results indicate that there is an improved accuracy of diagnosis with the use of the mammographic CAD system above that

  19. The quark mass spectrum in the Universal Seesaw model

    International Nuclear Information System (INIS)

    Ranfone, S.

    1993-03-01

    In the context of a Universal Seesaw model implemented in a left-right symmetric theory, we show that, by allowing the two left-handed doublet Higgs fields to develop different vacuum expectation values (VEVs), it is possible to account for the observed structure of the quark mass spectrum without the need for any hierarchy among the Yukawa couplings. In this framework the top-quark mass is expected to be of the order of its present experimental lower bound, m_t ≅ 90 to 100 GeV. Moreover, we find that, while one of the Higgs doublets gets essentially the standard model VEV of approximately 250 GeV, the second doublet is expected to have a much smaller VEV, of order 10 GeV. The identification of the large mass scale of the model with the Peccei-Quinn scale fixes the mass of the right-handed gauge bosons in the range 10⁷ to 10¹⁰ GeV, far beyond the reach of present collider experiments. (author)

  20. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  1. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  2. The simultaneous mass and energy evaporation (SM2E) model.

    Science.gov (United States)

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate systematic under- or over-prediction; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.
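
    To make the mass transfer-only limit concrete, the sketch below estimates an evaporation flux from a film-theory mass-transfer coefficient. The flat-plate Sherwood correlation and every input are illustrative assumptions, not the harmonized SM2E coefficients:

    ```python
    import math

    def evaporation_flux(T, P_sat, M, D_ab, L, u, nu=1.5e-5):
        """Mass-transfer-limited evaporation from a pool of length L [m] under
        air speed u [m/s], for a liquid at known temperature T [K] with vapor
        pressure P_sat [Pa], molar mass M [kg/mol], and vapor-air diffusivity
        D_ab [m^2/s]. Returns flux in kg/(m^2 s)."""
        Re = u * L / nu                          # Reynolds number
        Sc = nu / D_ab                           # Schmidt number
        Sh = 0.664 * math.sqrt(Re) * Sc ** (1.0 / 3.0)   # laminar flat plate
        k_m = Sh * D_ab / L                      # mass-transfer coefficient [m/s]
        c_surface = P_sat * M / (8.314 * T)      # ideal-gas vapor concentration [kg/m^3]
        return k_m * c_surface                   # far-field concentration taken as zero
    ```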

  3. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  4. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of the numerical modeling of dynamic problems are summarized in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  5. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  6. Statistical Texture Model for Mass Detection in Mammography

    Directory of Open Access Journals (Sweden)

    Nicolás Gallego-Ortiz

    2013-12-01

    Full Text Available In the context of image processing algorithms for mass detection in mammography, texture is a key feature for distinguishing abnormal tissue from normal tissue. Recently, a texture model based on a multivariate Gaussian mixture was proposed, whose parameters are learned in an unsupervised way from the pixel intensities of images. The model produces images that are probabilistic maps of texture normality, and it was proposed as a visualization aid for diagnosis by clinical experts. In this paper, the usability of the model for automatic mass detection is studied. A segmentation strategy is proposed and evaluated using 79 mammography cases.
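
    A minimal sketch of this kind of unsupervised texture-normality map, using scikit-learn's GaussianMixture on raw intensity patches; the patch size, component count, and diagonal covariance are assumptions, since the paper's exact features are not given in the abstract:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def texture_normality_map(image, patch=5, n_components=4):
        """Fit a Gaussian mixture to local pixel-intensity patches and return
        a per-pixel log-likelihood map; low likelihood flags texture that the
        'normal tissue' model explains poorly."""
        H, W = image.shape
        r = patch // 2
        # extract overlapping patches as flat feature vectors
        feats = np.stack([image[i - r:i + r + 1, j - r:j + r + 1].ravel()
                          for i in range(r, H - r) for j in range(r, W - r)])
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              random_state=0).fit(feats)
        scores = gmm.score_samples(feats).reshape(H - 2 * r, W - 2 * r)
        return scores  # higher = more 'normal' texture
    ```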

  7. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  8. Climate Ocean Modeling on Parallel Computers

    Science.gov (United States)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  9. Computational Modeling in Liver Surgery

    Directory of Open Access Journals (Sweden)

    Bruno Christ

    2017-11-01

    Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.

  10. The anatomy of the simplest Duflo-Zuker mass formula

    International Nuclear Information System (INIS)

    Mendoza-Temis, Joel; Hirsch, Jorge G.; Zuker, Andres P.

    2010-01-01

    The simplest version of the Duflo-Zuker mass model (due entirely to the late Jean Duflo) is described by following, step by step, the published computer code. The model contains six macroscopic monopole terms leading asymptotically to a Liquid Drop form, three microscopic terms intended to mimic configuration mixing (multipole) corrections to the monopole shell effects, and one term in charge of detecting deformed nuclei and calculating their masses. A careful analysis of the model suggests a program of future developments that includes a complementary approach to masses based on an independently determined monopole Hamiltonian, a better description of deformations, and specific suggestions for the treatment of three-body forces.
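
    For readers without the published code at hand, the generic Liquid Drop form that the macroscopic terms approach asymptotically looks like the following Bethe-Weizsaecker sketch. This is not the Duflo-Zuker code itself; the coefficients are textbook values and vary between fits:

    ```python
    def liquid_drop_binding(Z, N):
        """Semi-empirical (Bethe-Weizsaecker) binding energy in MeV."""
        A = Z + N
        a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
        B = (a_v * A - a_s * A**(2/3) - a_c * Z * (Z - 1) / A**(1/3)
             - a_a * (N - Z)**2 / A)
        # pairing term: even-even positive, odd-odd negative, odd-A zero
        if Z % 2 == 0 and N % 2 == 0:
            B += a_p / A**0.5
        elif Z % 2 == 1 and N % 2 == 1:
            B -= a_p / A**0.5
        return B

    # Example: 56Fe (Z=26, N=30) gives roughly 8.8 MeV per nucleon
    print(liquid_drop_binding(26, 30) / 56)
    ```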

  11. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  12. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling of nuclear materials, with a focus on fuels. The basic concepts of chemical thermodynamics are described, and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and of commercial codes for performing complex equilibrium calculations.

  13. Computed tomography of pediatric abdominal masses

    Energy Technology Data Exchange (ETDEWEB)

    Kook, Shin Ho; Ko, Eun Joo; Chung, Eun Chul; Suh, Jung Soo; Rhee, Chung Sik [College of Medicine, Ewha Womans University, Seoul (Korea, Republic of)

    1988-02-15

    Ultrasonography is a very useful diagnostic modality for the evaluation of pediatric abdominal masses, since it is faster and cheaper than CT and carries no radiation hazard. But CT has advantages in assessing the precise anatomic location and extent of the pathologic process, and also has particular value in defining the size and relation of the mass to surrounding organs and in the detection of lymphadenopathy. We analyzed the CT features of 35 cases of pathologically proven pediatric abdominal masses over a recent 2-year period at Ewha Womans University Hospital. The results were as follows: 1. The most common originating site was the kidney (20 cases, 57.1%), followed by gastrointestinal (5 cases, 14.3%), nonrenal retroperitoneal (4 cases, 11.4%), hepatobiliary (3 cases, 8.6%), and genital (3 cases, 8.6%) in order of frequency. 2. The most common mass was hydronephrosis (11 cases, 31.4%); Wilms' tumor (7 cases, 20.0%), neuroblastoma, choledochal cyst, and periappendiceal abscess (3 cases, 8.6% each), and ovarian cyst (2 cases, 5.7%) were next in order of frequency. 3. The male-to-female ratio was 4:5, and choledochal cysts and ovarian cysts were found only in females. The most prevalent age group was 1-3 years old (12 cases, 34.3%). 4. With CT, the diagnosis of hydronephrosis was easy in all cases, and its severity, renal function, and obstruction site could be evaluated with high accuracy. 5. Wilms' tumor and neuroblastoma were relatively well differentiated by their characteristic CT features, such as location, shape, margin, midline crossing, calyceal appearance, and calcification. 6. Ovarian and mesenteric cysts had similar CT appearances. 7. In other pediatric abdominal masses, CT provided excellent information about anatomic detail, the precise extent of the tumor, and differential diagnostic findings. Thus, CT is a useful imaging modality for the demonstration and diagnosis of abdominal mass lesions in pediatric patients.

  14. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix; Monahan and Serduke; Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models

  15. Mass renormalization in sine-Gordon model

    International Nuclear Information System (INIS)

    Xu Bowei; Zhang Yumei

    1991-09-01

    With a general gaussian wave functional, we investigate the mass renormalization in the sine-Gordon model. At the phase transition point, the sine-Gordon system tends to a system of massless free bosons which possesses conformal symmetry. (author). 8 refs, 1 fig

  16. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Full Text Available Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of air jets in the room; the presence of furniture and equipment; the placement of ventilation openings; the ventilation mode; the location of ionization sources; the transfer of ions under the effect of the electric field; and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in premises, allowing quick screening of possible variants and an "enlarged" evaluation of the air ion concentration. Methodology. A complex of numerical models for calculating the air ion regime in premises is developed. The CFD numerical model is based on the aerodynamics, electrostatics, and mass transfer equations, and takes into account the effect of air flows caused by the ventilation operation, diffusion, and the electric field, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computing the air ion regime indoors allows operative calculation of the ion concentration field considering pulsed operation of the ionizer. Findings. Calculated data are obtained from which the ion concentration can be estimated anywhere in a premise with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in premises undergoing reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premises, which
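
    The balance-model idea reduces, in its simplest well-mixed form, to one ordinary differential equation for the ion concentration. The sketch below integrates it for a pulsed ionizer; the rate constants, source strength, and schedule are illustrative assumptions, not the paper's coefficients:

    ```python
    import numpy as np

    def ion_balance(t_end=600.0, dt=0.1, q0=1.0e7, period=60.0, duty=0.5,
                    alpha=1.6e-12, beta=1.0e-12, aerosol=1.0e10, ach=2.0/3600.0):
        """Zero-dimensional balance model for the negative ion concentration
        n [ions/m^3] under a pulsed source:
            dn/dt = q(t) - alpha*n^2 - beta*n*aerosol - ach*n
        with q the pulsed ionizer output, alpha ion-ion recombination, beta
        attachment to aerosol particles, and ach the ventilation removal rate."""
        steps = int(t_end / dt)
        n = np.zeros(steps)
        for i in range(1, steps):
            t = i * dt
            q = q0 if (t % period) < duty * period else 0.0   # pulsed source
            dn = q - alpha * n[i-1]**2 - beta * n[i-1] * aerosol - ach * n[i-1]
            n[i] = n[i-1] + dt * dn          # explicit Euler step
        return n
    ```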

  17. Peculiarities of constructing the models of mass religious communication

    Directory of Open Access Journals (Sweden)

    Petrushkevych Maria Stefanivna

    2017-07-01

    Full Text Available Religious communication is a full-fledged, effective part of the mass information field: it uses new media to fulfil its needs, and it also functions in the field of mass culture and the information society. To describe the features of mass religious communication, the author constructs in this article a graphic model of its functioning.

  18. Mass transfer models analysis for the structured packings

    International Nuclear Information System (INIS)

    Suastegui R, A.O.

    1997-01-01

    The models that have been developed to explain the mechanism of mass transfer through structured packings have limitations in their application, so there is uncertainty about their use in industrial chemical processes. In this study, the main parameters used in mass transfer are: the hydrodynamics of the column bed, the geometry of the bed, the physical-chemical properties of the mixture, and the flow regime of operation between the liquid and gas flows. The sensitivity of each of these parameters makes it an arduous task to develop sound proposals and a good interpretation of the phenomenon. With the purpose of showing the importance of these parameters in mass transfer, this work analyzes the absorption process for the water-air system, applying models for structured packings in packed columns. The selected models were developed by Bravo and collaborators in 1985 and 1992, in order to determine the aforementioned parameters for the water-air system, using a structured packing built at the National Institute of Nuclear Research. This work presents the results of the model application and their discussion. (Author)

  19. Higgs-boson contributions to gauge-boson mass shifts in extended electroweak models

    International Nuclear Information System (INIS)

    Moore, S.R.

    1985-01-01

    The author analyzed the mass shifts for models with a more complicated Higgs sector, using the on-shell renormalization scheme, in which the parameters of the theory are the physical masses and couplings. The 2-doublet, n-doublet, triplet, and doublet-triplet models are considered. The author has found that the Z-boson mass prediction has a strong dependence on the charged-Higgs mass. In the limit that the charged Higgs is much heavier than the gauge bosons, the Higgs-dependent terms become significant and may even cancel the light-fermion terms. If the Z mass is used as input in determining the parameters of the theory, a scenario which will become attractive as the mass of the Z is accurately measured in the next few years, it is found that the W-boson mass shift exhibits the same sort of behavior, differing from the minimal model when the charged Higgs is heavy. The author has found that when the radiative corrections are taken into account, models with extended Higgs sectors may differ significantly from the minimal standard model in their predictions for the gauge-boson masses. Thus, an accurate measurement of the masses will help shed light on the structure of the Higgs sector

  20. Significance of computed tomography in the diagnosis of the mediastinal mass lesions

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Masanori; Takashima, Tsutomu; Suzuki, Masayuki; Itoh, Hiroshi; Hirose, Jinichiro; Choto, Shuichi (Kanazawa Univ. (Japan). School of Medicine)

    1983-08-01

    Thirty cases of mediastinal mass lesions were examined by computed tomography, and the diagnostic ability of CT was retrospectively evaluated. We divided them into two major groups: cystic and solid lesions. Cysts and cystic teratomas were differentiated on the thickness of their wall. Pericardial cysts were typically present at the cardiophrenic angle. In the solid mediastinal lesions, the presence of calcific and/or fatty components, the presence of necrosis, the irregularity of the margin, and the obliteration of the surrounding fat layer were the clues to differential diagnosis and to evaluation of their invasiveness. Although the differential diagnosis of solid anterior mediastinal tumors was often difficult, teratomas with calcific and fatty components were easily diagnosed. Invasiveness of malignant thymomas and other malignant lesions was successfully evaluated to some extent. Neurogenic posterior mediastinal tumors were easily diagnosed because of the presence of spine deformity and a typical dumbbell-shaped appearance. We stress that our diagnostic approach is useful for differentiating mediastinal mass lesions.

  1. Significance of computed tomography in the diagnosis of the mediastinal mass lesions

    International Nuclear Information System (INIS)

    Kimura, Masanori; Takashima, Tsutomu; Suzuki, Masayuki; Itoh, Hiroshi; Hirose, Jinichiro; Choto, Shuichi

    1983-01-01

    Thirty cases of mediastinal mass lesions were examined by computed tomography, and the diagnostic ability of CT was retrospectively evaluated. We divided them into two major groups: cystic and solid lesions. Cysts and cystic teratomas were differentiated on the thickness of their wall. Pericardial cysts were typically present at the cardiophrenic angle. In the solid mediastinal lesions, the presence of calcific and/or fatty components, the presence of necrosis, the irregularity of the margin, and the obliteration of the surrounding fat layer were the clues to differential diagnosis and to evaluation of their invasiveness. Although the differential diagnosis of solid anterior mediastinal tumors was often difficult, teratomas with calcific and fatty components were easily diagnosed. Invasiveness of malignant thymomas and other malignant lesions was successfully evaluated to some extent. Neurogenic posterior mediastinal tumors were easily diagnosed because of the presence of spine deformity and a typical dumbbell-shaped appearance. We stress that our diagnostic approach is useful for differentiating mediastinal mass lesions. (author)

  2. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    Science.gov (United States)

    When the news media talk about models, they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  3. Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

    Science.gov (United States)

    Ditmar, P.; Hashemi Farahani, H.; Klees, R.

    2011-12-01

    Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites enter occasionally a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, a routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. In addition, both signal and noise covariance matrices depend on the type of mass transport processes under
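
    In its simplest matrix form, the optimal filter referred to above is the Wiener gain built from the signal and noise covariance matrices. A toy sketch follows; the diagonal covariances are placeholder assumptions, whereas operational GRACE filters propagate full level-1B error models:

    ```python
    import numpy as np

    def wiener_filter(x, C_signal, C_noise):
        """Wiener filtering of an unconstrained solution vector x:
        x_filt = C_s (C_s + C_n)^{-1} x."""
        gain = C_signal @ np.linalg.inv(C_signal + C_noise)
        return gain @ x

    # toy usage: smooth a noisy coefficient vector with assumed covariances
    rng = np.random.default_rng(1)
    n = 50
    C_s = np.diag(1.0 / (1.0 + np.arange(n))**2)   # signal power decaying with degree
    C_n = 0.01 * np.eye(n)                          # white coefficient noise
    x = rng.multivariate_normal(np.zeros(n), C_s + C_n)
    x_filt = wiener_filter(x, C_s, C_n)
    ```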

  4. Deployment Models: Towards Eliminating Security Concerns From Cloud Computing

    OpenAIRE

    Zhao, Gansen; Chunming, Rong; Jaatun, Martin Gilje; Sandnes, Frode Eika

    2010-01-01

    Cloud computing has become a popular choice as an alternative to investing in new IT systems. When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security related features to address different requirements and scenarios and can serve as reference models for deployment. D...

  5. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  7. A stochastic model for the probability of malaria extinction by mass drug administration.

    Science.gov (United States)

    Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A

    2017-09-18

    Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, R_c. A simple compartmental model is developed which is used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that, for mass drug administration to succeed, R_c must be sustained at R_c < 1 and coverage must exceed 95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations less than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
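
    The probability-generating-function calculation can be illustrated with a bare branching-process analogue: the extinction probability is the smallest fixed point of the offspring PGF, raised to the number of residual infections. A sketch follows; the Poisson and negative-binomial offspring distributions are stand-ins for the paper's compartmental model, and all numbers are illustrative:

    ```python
    import math

    def extinction_prob(R_c, n_infections, k=None, iters=200):
        """Probability that all residual transmission chains die out, via the
        offspring PGF of a branching process. Offspring are Poisson with mean
        R_c, or negative binomial with clustering parameter k (the abstract
        notes that clustering of secondary infections helps)."""
        def G(s):
            if k is None:                                  # Poisson offspring
                return math.exp(R_c * (s - 1.0))
            return (1.0 + R_c * (1.0 - s) / k) ** (-k)     # negative binomial
        s = 0.0
        for _ in range(iters):                             # fixed point s = G(s)
            s = G(s)
        return s ** n_infections                           # all chains die out

    print(extinction_prob(R_c=0.9, n_infections=20))   # sub-critical: 1.0
    print(extinction_prob(R_c=1.2, n_infections=20))   # super-critical: < 1
    ```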

  8. Life system modeling and intelligent computing. Pt. I. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Queen's Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation

    2010-07-01

    This book is part I of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The 55 papers in this volume are organized in topical sections on intelligent modeling, monitoring, and control of complex nonlinear systems; autonomy-oriented computing and intelligent agents; advanced theory and methodology in fuzzy systems and soft computing; computational intelligence in utilization of clean and renewable energy resources; intelligent modeling, control and supervision for energy saving and pollution reduction; intelligent methods in developing vehicles, engines and equipments; computational methods and intelligence in modeling genetic and biochemical networks and regulation. (orig.)

  9. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers and risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  10. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar parts, which combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
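
    As an illustration of the simulation (computer) approach, the sketch below generates a self-similar branching skeleton of a crown by recursive scaling and rotation. The branching angles, scale factor and jitter are arbitrary placeholders, not the calibrated model from the paper.

    ```python
    import math
    import random

    def grow_crown(x, y, angle_deg, length, depth, scale=0.7, spread=25.0):
        """Recursively generate branch segments of a self-similar crown.

        Each branch spawns two children, shortened by `scale` and rotated
        by +/- `spread` degrees with a small random jitter. Returns a list
        of line segments ((x0, y0), (x1, y1)).
        """
        if depth == 0:
            return []
        x1 = x + length * math.cos(math.radians(angle_deg))
        y1 = y + length * math.sin(math.radians(angle_deg))
        segments = [((x, y), (x1, y1))]
        for sign in (-1.0, 1.0):
            jitter = random.uniform(-5.0, 5.0)
            segments += grow_crown(x1, y1, angle_deg + sign * spread + jitter,
                                   length * scale, depth - 1, scale, spread)
        return segments

    crown = grow_crown(0.0, 0.0, 90.0, 1.0, depth=8)
    print(len(crown), "branch segments")  # 2**8 - 1 = 255 for depth 8
    ```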

  11. Searches of exotic Higgs bosons in general mass spectra of the Georgi-Machacek model at the LHC

    International Nuclear Information System (INIS)

    Chiang, Cheng-Wei; Kuo, An-Li; Yamada, Toshifumi

    2016-01-01

    We derive the most general sets of viable mass spectra of the exotic Higgs bosons in the Georgi-Machacek model that are consistent with the theoretical constraints of vacuum stability and perturbative unitarity and the experimental constraints of electroweak precision observables, Zbb̄ coupling and Higgs boson signal strengths. Branching ratios of various cascade decay channels of the doubly-charged Higgs boson in the 5 representation, the singly-charged Higgs boson in 3, and the singlet Higgs boson are further computed. As one of the most promising channels for discovering the model, we study the prospects for detecting the doubly-charged Higgs boson that is produced via the vector boson fusion process and decays into final states containing a pair of same-sign leptons at the 14-TeV LHC and a 100-TeV future pp collider. For this purpose, we evaluate acceptance times efficiency for signals of the doubly-charged Higgs boson with general viable mass spectra and compare it with the standard model background estimates.

  12. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  13. Fermion masses in potential models of chiral symmetry breaking

    International Nuclear Information System (INIS)

    Jaroszewicz, T.

    1983-01-01

    A class of models of spontaneous chiral symmetry breaking is considered, based on the Hamiltonian with an instantaneous potential interaction of fermions. An explicit mass term $m\bar{\Psi}\Psi$ is included and the physical meaning of the mass parameter is discussed. It is shown that if the Hamiltonian is normal-ordered (i.e. self-energy omitted), then the mass m introduced in the Hamiltonian is not the current mass appearing in the current algebra relations. (author)

  14. Temperature- and density-dependent quark mass model

    Indian Academy of Sciences (India)

    Since a fair proportion of such dense proto stars are likely to be ... the temperature- and density-dependent quark mass (TDDQM) model which we had employed in ... instead of Tc ~170 MeV which is a favoured value for the ud matter [26].

  15. A Coupled Chemical and Mass Transport Model for Concrete Durability

    DEFF Research Database (Denmark)

    Jensen, Mads Mønster; Johannesson, Björn; Geiker, Mette Rica

    2012-01-01

    In this paper a general continuum theory is used to evaluate the service life of cement-based materials, in terms of mass transport processes and chemical degradation of the solid matrix. The model established is a reactive mass transport model, based on an extended version of the Poisson-Nernst-Planck system of equations.
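
    For orientation, the core ingredient of a Poisson-Nernst-Planck transport model is the ionic flux combining diffusion and electromigration. The following sketch evaluates that flux on a 1D grid; the chloride-ingress profile and all constants are illustrative assumptions, not the authors' coupled model.

    ```python
    import numpy as np

    F, R, T = 96485.0, 8.314, 298.15  # Faraday constant, gas constant, temperature (K)

    def nernst_planck_flux(c, phi, D, z, dx):
        """1D Nernst-Planck flux J = -D * (dc/dx + (z*F/(R*T)) * c * dphi/dx).

        c: concentration profile (mol/m^3), phi: electric potential (V),
        D: diffusivity (m^2/s), z: ionic charge number, dx: grid spacing (m).
        """
        dcdx = np.gradient(c, dx)
        dphidx = np.gradient(phi, dx)
        return -D * (dcdx + (z * F / (R * T)) * c * dphidx)

    # Toy chloride-ingress profile over a 10 cm concrete cover
    x = np.linspace(0.0, 0.1, 101)
    c = 100.0 * np.exp(-x / 0.02)        # decaying concentration, mol/m^3
    phi = -0.01 * x / 0.1                # weak internal potential gradient, V
    print(nernst_planck_flux(c, phi, D=1e-11, z=-1, dx=x[1] - x[0])[:3])
    ```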

  16. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field.
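
    The Bayesian model selection step described above amounts to turning model evidences into posterior model probabilities. A minimal sketch, assuming the log evidences have already been computed for each observer model (the numbers below are hypothetical):

    ```python
    import numpy as np

    def posterior_model_probs(log_evidences, priors=None):
        """Posterior model probabilities P(M_k | data) from log evidences.

        P(M_k | data) is proportional to p(data | M_k) * P(M_k); the
        log-sum-exp trick keeps the computation numerically stable.
        """
        log_ev = np.asarray(log_evidences, dtype=float)
        if priors is None:
            priors = np.full(log_ev.size, 1.0 / log_ev.size)  # uniform prior
        log_post = log_ev + np.log(priors)
        log_post -= log_post.max()
        post = np.exp(log_post)
        return post / post.sum()

    # Three candidate observer models with hypothetical log evidences
    print(posterior_model_probs([-1204.3, -1198.7, -1201.5]))
    ```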

  17. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of the profusion of entities (such as notes) and of the tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational model of music analysis; by virtue of its generality, extensiveness and operationality, the computational model is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological purposes.

  18. Design of pulsed perforated-plate columns for industrial scale mass transfer applications - present experience and the need for a model based approach

    International Nuclear Information System (INIS)

    Roy, Amitava

    2010-01-01

    transfer requirements. While following the HETS-NTS concept of design, the HETS data are obtained from pilot-plant experiments and the NTS is obtained from graphical/analytical methods using equilibrium data and operating conditions. The height of the mass transfer section is computed by multiplying HETS by NTS and then applying a suitable design margin. The total height of the column is then obtained by adding phase-separation sections at the top and bottom of the mass transfer section. The diameter of the column is computed from the design capacity of the column and flooding correlations or data. The flooding correlations are complex functions of system properties, plate geometry, pulse energy input and mass transfer. Due to limitations in the applicability as well as the predictive power of the correlations, they are often supplemented by experimental data. An integrated design approach is followed to ensure that the process parameters, that is, phase flow rates, pulse velocity, phase separation, dispersed-phase hold-up and solvent loading, are continuously monitored remotely during column operation. The requirements of fabrication, O and M, and safety are also addressed at the time of design. The column performance is assessed in terms of mass transfer efficiency and separation factor. Columns are designed to operate in the dispersion regime for stable and best performance. The HETS-NTS approach is based on experimental data and thus has limitations when compared with a model-based approach, whereas a model-based design approach using drop population balance (DPB) and CFD provides an in-depth understanding of hydrodynamics and mass transfer, which is essential for trying various design alternatives and new concepts leading to process intensification and design innovation. (author)
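
    A minimal numerical sketch of the HETS-NTS sizing arithmetic described above: mass-transfer height as HETS times NTS with a design margin, plus phase-separation sections, and diameter from throughput and an allowable superficial velocity. Every number and the simple velocity-based flooding limit are placeholders, not design values.

    ```python
    import math

    def column_dimensions(hets_m, nts, design_margin=1.2, flow_m3_per_h=5.0,
                          allowable_velocity_mps=0.01, separation_section_m=1.0):
        """Rough pulsed-column sizing from HETS-NTS data (illustrative only).

        Mass-transfer height = HETS * NTS * design margin; the total height
        adds a phase-separation section at the top and at the bottom. The
        diameter follows from the volumetric throughput and an allowable
        (flooding-limited) superficial velocity.
        """
        transfer_height = hets_m * nts * design_margin
        total_height = transfer_height + 2.0 * separation_section_m
        area = (flow_m3_per_h / 3600.0) / allowable_velocity_mps  # m^2
        diameter = math.sqrt(4.0 * area / math.pi)
        return total_height, diameter

    height, diameter = column_dimensions(hets_m=0.5, nts=8)
    print(f"height = {height:.1f} m, diameter = {diameter:.2f} m")
    ```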

  19. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the changes in the Groebner basis, the changes in Hilbert dimension, and the changes in Hilbert polynomials. It is hoped that the results obtained in this paper will be of importance for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
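
    In modern computer algebra systems the Groebner-basis computation itself is a one-liner. The sketch below applies it to the steady state of a generic SIR-type model, an illustrative stand-in for the Schistosomiasis and Dengue systems analyzed in the paper, and uses sympy rather than Maple:

    ```python
    from sympy import symbols, groebner

    # Steady state of a generic SIR-type model (illustrative stand-in):
    #   dS/dt = mu - beta*S*I - mu*S = 0
    #   dI/dt = beta*S*I - (gamma + mu)*I = 0
    S, I, beta, gamma, mu = symbols('S I beta gamma mu', positive=True)
    eqs = [mu - beta * S * I - mu * S,
           beta * S * I - (gamma + mu) * I]

    # Lexicographic Groebner basis in the state variables; beta, gamma, mu
    # remain symbolic parameters, so the effect of control measures shows
    # up as changes in the structure of the basis.
    G = groebner(eqs, S, I, order='lex')
    print(G)
    ```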

  20. Color-flavor locked strange quark matter in a mass density-dependent model

    International Nuclear Information System (INIS)

    Chen Yuede; Wen Xinjian

    2007-01-01

    Properties of color-flavor locked (CFL) strange quark matter have been studied in a mass-density-dependent model, and compared with the results in the conventional bag model. In both models, the CFL phase is more stable than the normal nuclear matter for reasonable parameters. However, the lower density behavior of the sound velocity in this model is completely opposite to that in the bag model, which makes the maximum mass of CFL quark stars in the mass-density-dependent model larger than that in the bag model. (authors)

  1. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development

    Science.gov (United States)

    Xu, Fei; Zhang, Yaning; Jin, Guangri; Li, Bingxi; Kim, Yong-Song; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    A three-phase model capable of predicting the heat transfer and moisture migration for soil freezing process was developed based on the Shen-Chen model and the mechanisms of heat and mass transfer in unsaturated soil freezing. The pre-melted film was taken into consideration, and the relationship between film thickness and soil temperature was used to calculate the liquid water fraction in both frozen zone and freezing fringe. The force that causes the moisture migration was calculated by the sum of several interactive forces and the suction in the pre-melted film was regarded as an interactive force between ice and water. Two kinds of resistance were regarded as a kind of body force related to the water films between the ice grains and soil grains, and a block force instead of gravity was introduced to keep balance with gravity before soil freezing. Lattice Boltzmann method was used in the simulation, and the input variables for the simulation included the size of computational domain, obstacle fraction, liquid water fraction, air fraction and soil porosity. The model is capable of predicting the water content distribution along soil depth and variations in water content and temperature during soil freezing process.

  2. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and finally errors of commission in nuclear power plants. Quantitative or prescriptive models to predict an operator's situation assessment in a given situation, i.e. the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we proposed a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects operators' situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior, and this paper shows the details.
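
    The inferential core of a Bayesian belief network is the posterior update over hidden states given observed evidence. A toy illustration with hypothetical plant states and alarm likelihoods (not the authors' network structure or numbers):

    ```python
    # Toy Bayesian update over hidden plant states given one observed alarm
    # pattern. States, priors and likelihoods are hypothetical placeholders.
    priors = {'normal': 0.90, 'LOCA': 0.05, 'SGTR': 0.05}      # P(state)
    likelihood = {'normal': 0.02, 'LOCA': 0.80, 'SGTR': 0.40}  # P(evidence | state)

    unnormalized = {s: priors[s] * likelihood[s] for s in priors}
    total = sum(unnormalized.values())
    posterior = {s: round(v / total, 3) for s, v in unnormalized.items()}
    print(posterior)  # updated belief over plant states
    ```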

  3. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  4. Automated mass spectrum generation for new physics

    CERN Document Server

    Alloul, Adam; De Causmaecker, Karen; Fuks, Benjamin; Rausch de Traubenberg, Michel

    2013-01-01

    We describe an extension of the FeynRules package dedicated to the automatic generation of the mass spectrum associated with any Lagrangian-based quantum field theory. After introducing a simplified way to implement particle mixings, we present a new class of FeynRules functions allowing both for the analytical computation of all the model mass matrices and for the generation of a C++ package, dubbed ASperGe. This program can then be further employed for a numerical evaluation of the rotation matrices necessary to diagonalize the field basis. We illustrate these features in the context of the Two-Higgs-Doublet Model, the Minimal Left-Right Symmetric Standard Model and the Minimal Supersymmetric Standard Model.
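
    Numerically, the final step such a tool performs, diagonalizing a mass matrix to obtain physical masses and the rotation (mixing) matrix, looks like the following sketch. This is a generic eigen-decomposition with placeholder entries, not the ASperGe API or any model's actual mass matrix.

    ```python
    import numpy as np

    def diagonalize_mass_matrix(M2):
        """Masses and rotation matrix of a real symmetric squared-mass
        matrix, so that R.T @ M2 @ R = diag(m1**2, m2**2, ...)."""
        eigenvalues, R = np.linalg.eigh(np.asarray(M2, dtype=float))
        masses = np.sqrt(np.clip(eigenvalues, 0.0, None))  # guard round-off
        return masses, R

    # Toy 2x2 scalar squared-mass matrix in GeV^2 (placeholder entries)
    M2 = [[9.0e4, 1.0e4],
          [1.0e4, 4.0e4]]
    masses, R = diagonalize_mass_matrix(M2)
    print(masses)  # physical masses in GeV
    print(R)       # rotation/mixing matrix
    ```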

  5. Models of mass segregation at the Galactic Centre

    International Nuclear Information System (INIS)

    Freitag, Marc; Amaro-Seoane, Pau; Kalogera, Vassiliki

    2006-01-01

    We study the process of mass segregation through 2-body relaxation in galactic nuclei with a central massive black hole (MBH). This study has bearing on a variety of astrophysical questions, from the distribution of X-ray binaries at the Galactic centre, to tidal disruptions of main-sequence and giant stars, to inspirals of compact objects into the MBH, an important category of events for the future space-borne gravitational wave interferometer LISA. In relatively small galactic nuclei, typical hosts of MBHs with masses in the range 10^4 - 10^7 M_⊙, the relaxation induces the formation of a steep density cusp around the MBH and strong mass segregation. Using a spherical stellar dynamical Monte-Carlo code, we simulate the long-term relaxational evolution of galactic nucleus models with a spectrum of stellar masses. Our focus is the concentration of stellar black holes to the immediate vicinity of the MBH. Special attention is given to models developed to match the conditions in the Milky Way nucleus

  6. Computational Aerodynamic Simulations of a 1215 ft/sec Tip Speed Transonic Fan System Model for Acoustic Methods Assessment and Development

    Science.gov (United States)

    Tweedt, Daniel L.

    2014-01-01

    Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.

  7. The complete guide to blender graphics computer modeling and animation

    CERN Document Server

    Blain, John M

    2014-01-01

    Smoothly Leads Users into the Subject of Computer Graphics through the Blender GUI. Blender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments

  8. Masses of particles in the SO(18) grand unified model

    International Nuclear Information System (INIS)

    Asatryan, G.M.

    1984-01-01

    The grand unified model based on the orthogonal group SO(18) is treated. The model involves four familiar and four mirror families of fermions. The generation of masses of familiar and mirror particles is studied. The mass of the right-handed W_R boson, which interacts via right-handed currents, is estimated

  9. Reference absolute and indexed values for left and right ventricular volume, function and mass from cardiac computed tomography

    International Nuclear Information System (INIS)

    Stojanovska, Jadranka; Prasitdumrong, Hutsaya; Patel, Smita; Sundaram, Baskaran; Gross, Barry H.; Yilmaz, Zeynep N.; Kazerooni, Ella A.

    2014-01-01

    Left ventricular (LV) and right ventricular (RV) volumetric and functional parameters are important biomarkers for morbidity and mortality in patients with heart failure. To retrospectively determine reference mean values of LV and RV volume, function and mass normalised by age, gender and body surface area (BSA) from retrospectively electrocardiographically gated 64-slice cardiac computed tomography (CCT) by using automated analysis software in healthy adults. The study was approved by the institutional review board with a waiver of informed consent. Seventy-four healthy subjects (49% female, mean age 49.6±11) free of hypertension and hypercholesterolaemia with a normal CCT formed the study population. Analyses of LV and RV volume (end-diastolic, end-systolic and stroke volumes), function (ejection fraction), LV mass and inter-rater reproducibility were performed with commercially available analysis software capable of automated contour detection. General linear model analysis was performed to assess statistical significance by age group after adjustment for gender and BSA. Bland–Altman analysis assessed the inter-rater agreement. The reference range for LV and RV volume, function, and LV mass was normalised to age, gender and BSA. Statistically significant differences were noted between genders in both LV mass and RV volume (P-value<0.0001). Age, in concert with gender, was associated with significant differences in RV end-diastolic volume and LV ejection fraction (P-values 0.027 and 0.03). Bland–Altman analysis showed acceptable limits of agreement (±1.5% for ejection fraction) without systematic error. LV and RV volume, function and mass normalised to age, gender and BSA can be reported from CCT datasets, providing additional information important for patient management.
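
    Indexing a ventricular measurement to BSA is a simple division once BSA is estimated. A sketch using the Du Bois formula (the paper does not state which BSA formula was used, so this choice and the example numbers are assumptions):

    ```python
    def bsa_du_bois(weight_kg, height_cm):
        """Body surface area (m^2) by the Du Bois formula."""
        return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

    def index_to_bsa(value, weight_kg, height_cm):
        """Index a ventricular volume (mL) or mass (g) to BSA (per m^2)."""
        return value / bsa_du_bois(weight_kg, height_cm)

    # e.g. an LV end-diastolic volume of 140 mL in a 70 kg, 170 cm subject
    print(round(index_to_bsa(140.0, 70.0, 170.0), 1), "mL/m^2")
    ```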

  10. Neutrino mass and physics beyond the Standard Model; Masse des Neutrinos et Physique au-dela du Modele Standard

    Energy Technology Data Exchange (ETDEWEB)

    Hosteins, P

    2007-09-15

    The purpose of this thesis is to study, in the neutrino sector, the flavour structures at high energy. The work is divided into two main parts. The first part is dedicated to the well known mechanism to produce small neutrino masses: the seesaw mechanism, which implies the existence of massive particles whose decays violate lepton number. Therefore this mechanism can also be used to generate a net baryon number in the early universe and explain the cosmological observation of the asymmetry between matter and antimatter. However, it is often non-trivial to fulfill the constraints coming at the same time from neutrino oscillations and cosmological experiments, at least in frameworks where the couplings can be somehow constrained, like some Grand Unification models. Therefore we devoted the first part to the study of a certain class of seesaw mechanism which can be found in the context of SO(10) theories for example. We introduce a method to extract the mass matrix of the heavy right-handed neutrinos and explore the phenomenological consequences of this quantity, mainly concerning the production of a sufficient baryon asymmetry. When trying to identify the underlying symmetry governing the mixings between the different generations, we see that there is a puzzling difference between the quark and the lepton sectors. However, the quark and lepton parameters have to be compared at the scale of the flavour symmetry breaking, therefore we have to make them run to the appropriate scale. Thus, it is worthwhile investigating models where quantum corrections allow an approximate unification of quark and lepton mixings. This is why the other part of the thesis investigates the running of the effective neutrino mass operator in models with an extra compact dimension, where quantum corrections to the neutrino masses and mixings can be potentially large due to the multiplicity of states.
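
    The seesaw mechanism referred to above relates the light neutrino mass matrix to the Dirac and heavy right-handed mass matrices. A minimal type-I sketch (the generic formula, not the thesis' SO(10)-specific construction):

    ```python
    import numpy as np

    def seesaw_light_masses(mD, MR):
        """Type-I seesaw: m_nu = -mD @ MR^{-1} @ mD.T (all real here).

        GeV-scale Dirac masses with ~1e14 GeV right-handed masses give
        sub-eV light neutrinos.
        """
        mD, MR = np.atleast_2d(mD), np.atleast_2d(MR)
        m_nu = -mD @ np.linalg.inv(MR) @ mD.T
        masses, U = np.linalg.eigh(m_nu)  # U plays the role of the mixing matrix
        return np.abs(masses), U

    # One-generation check: mD = 100 GeV, MR = 1e14 GeV  ->  m_nu ~ 0.1 eV
    m, U = seesaw_light_masses([[100.0]], [[1e14]])
    print(m[0] * 1e9, "eV")  # convert GeV -> eV
    ```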

  11. Numerical modelling of heat and mass transfer in adsorption solar reactor of ammonia on active carbon

    Science.gov (United States)

    Aroudam, El. H.

    In this paper, we present a model of the performance of a reactor of a solar cooling machine based on an activated carbon-ammonia bed. For a solar radiation input, measured in the Energetic Laboratory of the Faculty of Sciences in Tetouan (northern Morocco), the proposed model computes the temperature distribution, the pressure and the ammonia concentration within the activated carbon bed. The Dubinin-Radushkevich formula is used to compute the ammonia concentration distribution and the daily cycled mass necessary to produce a cooling effect for an ideal machine. The reactor is heated to a maximum temperature during the day and cooled at night. A numerical simulation is carried out employing the recorded solar radiation data measured locally and the daily ambient temperature for typical clear days. Initially the reactor is at ambient temperature and evaporating pressure, Pev = Pst(Tev = 0 °C), and maintained at uniform concentration. It is heated successively until the threshold temperature corresponding to the condensing pressure, Pcond = Pst(Tam) (saturation pressure at ambient temperature in the condenser), and then until a maximum temperature at constant pressure Pcond. The cooling of the reactor is characterised by a fall of temperature to minimal values at night, corresponding to the end of a daily cycle. We use the mass balance equations as well as the energy equation to describe heat and mass transfer inside the three-phase medium. A numerical solution of the resulting nonlinear equation system, based on the implicit finite difference method, allows us to obtain all parameters characteristic of the thermodynamic cycle, principally the daily evolution of temperature and ammonia concentration at various positions inside the reactor. The optimum tube diameter of the reactor depends on meteorological parameters for 1 m2 of collector surface.
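
    The Dubinin-Radushkevich step can be sketched directly: uptake as a function of temperature and pressure, with the daily cycled mass as the difference between the adsorption and desorption states. The isotherm constants and operating points below are placeholders, not the measured carbon-ammonia pair data:

    ```python
    import math

    def dubinin_radushkevich(T, P, Psat, x0=0.30, D=5e-6, n=2):
        """Adsorbed mass ratio x (kg NH3 per kg carbon) from the D-R isotherm:
            x = x0 * exp(-D * (T * ln(Psat / P)) ** n)
        x0, D and n are pair-specific constants (placeholder values here).
        T in K; P and Psat in the same pressure units.
        """
        return x0 * math.exp(-D * (T * math.log(Psat / P)) ** n)

    # Daily cycled mass = uptake at the cool adsorption state minus uptake
    # at the hot desorption state (operating points are placeholders).
    x_ads = dubinin_radushkevich(T=293.0, P=4.3e5, Psat=8.6e5)
    x_des = dubinin_radushkevich(T=373.0, P=11.7e5, Psat=90.0e5)
    print("cycled mass per kg of carbon:", round(x_ads - x_des, 3))
    ```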

  12. On the mass and thermodynamics of the Higgs boson

    Science.gov (United States)

    Fokas, A. S.; Vayenas, C. G.; Grigoriou, D. P.

    2018-02-01

    In two recent works we have shown that the masses of the W± and Zo bosons can be computed from first principles by modeling these bosons as bound relativistic gravitationally confined rotational states consisting of e±-νe pairs in the case of W± bosons and of a e+-νe-e- triplet in the case of the Zo boson. Here, we present similar calculations for the Higgs boson, which we model as a bound rotational state consisting of a positron, an electron, a neutrino and an antineutrino. The model contains no adjustable parameters, and the computed boson mass of 125.7 GeV/c2 is in very good agreement with the experimental value of 125.1 ± 1 GeV/c2. The thermodynamics and potential connection of this particle with the Higgs field are also briefly addressed.

  13. Computer-aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches

    International Nuclear Information System (INIS)

    Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.

    2008-01-01

    The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBT). Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, prescreening of mass candidate and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained for the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches, described above, at the corresponding 3D location are combined and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory in the Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the
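
    The LDA scoring stage common to all three approaches can be sketched as follows: the features of each mass candidate are combined into a scalar likelihood score, and sweeping the decision threshold on that score traces the FROC curve. The features and labels here are synthetic stand-ins, not the study's data:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Synthetic stand-in features for mass candidates (rows), e.g. contrast,
    # spiculation and volume (columns); label 1 = true mass, 0 = false positive.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (200, 3)),   # false-positive candidates
                   rng.normal(1.5, 1.0, (50, 3))])   # true-mass candidates
    y = np.r_[np.zeros(200), np.ones(50)]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    likelihood = lda.decision_function(X)  # scalar "mass likelihood" score
    detections = likelihood > 1.0          # sweeping this threshold traces the FROC curve
    print(int(detections.sum()), "candidates above threshold")
    ```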

  14. COMPUTATIONAL MODELING OF AIRFLOW IN NONREGULAR SHAPED CHANNELS

    Directory of Open Access Journals (Sweden)

    A. A. Voronin

    2013-05-01

    The basic approaches to computational modeling of airflow in the human nasal cavity are analyzed. Different models of turbulent flow which may be used in order to calculate air velocity and pressure are discussed. Experimental measurement results of airflow temperature are illustrated. Geometrical model of human nasal cavity reconstructed from computer-aided tomography scans and numerical simulation results of airflow inside this model are also given. Spatial distributions of velocity and temperature for inhaled and exhaled air are shown.

  15. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  16. Editorial: Modelling and computational challenges in granular materials

    NARCIS (Netherlands)

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss

  17. Generating Computational Models for Serious Gaming

    NARCIS (Netherlands)

    Westera, Wim

    2018-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  18. Modelling of a micro Coriolis mass flow sensor for sensitivity improvement

    NARCIS (Netherlands)

    Groenesteijn, Jarno; van de Ridder, Bert; Lötters, Joost Conrad; Wiegerink, Remco J.

    2014-01-01

    We have developed a multi-axis flexible body model with which we can investigate the behavior of (micro) Coriolis mass flow sensors with arbitrary channel geometry. The model has been verified by measurements on five different designs of micro Coriolis mass flow sensors. The model predicts the Eigen

  19. Airfoil Computations using the γ - Reθ Model

    DEFF Research Database (Denmark)

    Sørensen, Niels N.

    computations. Based on this, an estimate of the error in the computations is determined to be approximately one percent in the attached region. Following the verification of the implemented model, the model is applied to four airfoils, NACA64-018, NACA64-218, NACA64-418 and NACA64-618 and the results

  20. Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling

    Science.gov (United States)

    Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.

    1999-01-01

    Experiments were performed under terrestrial gravity (1 g) and during parabolic flights (10^-2 g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and electron microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transport of mass, energy, and momentum associated with the solidification phenomena for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first one, the liquid containing freely moving equiaxed grains was described through the relative viscosity concept. In the second stage, when a fixed dendritic network was formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.

  1. Distinct element modelling of the rock mass response to glaciation at Finnsjoen, central Sweden

    International Nuclear Information System (INIS)

    Rosengren, L.; Stephansson, O.

    1990-12-01

    Six rock mechanics models of a cross section of the Finnsjoen test site have been simulated by means of distinct element analysis and the computer code UDEC. The rock mass response to glaciation, deglaciation, isostatic movements and water pressure from an ice lake has been simulated. Four of the models use a boundary condition with boundary elements at the bottom and sides of the model. This gives a state of stress inside the model which agrees well with the analytical solution, where the horizontal and vertical stresses are almost equal. Roller boundaries were applied to two models. This boundary condition causes zero lateral displacement at the model boundaries, and the horizontal stresses are always less than the vertical stress. Isostatic movements were simulated in one model. Two different geometries of fracture Zone 2 were simulated. Results from modelling the two different geometries show minor changes in stresses, displacements and failure of fracture zones. Under normal pore pressure conditions in the rock mass, the weight of the ice load increases the vertical stresses; the resulting stresses in the models differ depending on the boundary condition. An ice thickness of 3 km and 1 km and an ice wedge of 1 km thickness covering half the top surface of the model have been simulated. For each loading sequence of the six models a complete set of data about normal stress, stress profiles along selected sections, displacements and failure of fracture zones is presented. Based on the results of this study a protection zone of about 100 m width from the outer boundary of stress discontinuity to the repository location is suggested. This value is based on the result that the stress disturbance diminishes at this distance from the outer boundary of the discontinuity. (25 refs.) (authors)

  2. Computational Fluid Dynamic Modeling of Zinc Slag Fuming Process in Top-Submerged Lance Smelting Furnace

    Science.gov (United States)

    Huda, Nazmul; Naser, Jamal; Brooks, Geoffrey; Reuter, Markus A.; Matusewicz, Robert W.

    2012-02-01

    Slag fuming is a reductive treatment process for molten zinciferous slags for extracting zinc in the form of metal vapor by injecting or adding a reductant source such as pulverized coal or lump coal and natural gas. A computational fluid dynamic (CFD) model was developed to study the zinc slag fuming process from imperial smelting furnace (ISF) slag in a top-submerged lance furnace and to investigate the details of fluid flow, reaction kinetics, and heat transfer in the furnace. The model integrates combustion phenomena and chemical reactions with the heat, mass, and momentum interfacial interaction between the phases present in the system. A commercial CFD package, AVL Fire 2009.2 (AVL, Graz, Austria), coupled with a number of user-defined subroutines in the FORTRAN programming language was used to develop the model. The model is based on a three-dimensional (3-D) Eulerian multiphase flow approach, and it predicts the velocity and temperature field of the molten slag bath, the generated turbulence, and the vortex and plume shape at the lance tip. The model also predicts the mass fractions of slag and gaseous components inside the furnace. The model predicted that the percentage of ZnO in the slag bath decreases linearly with time, broadly consistent with the experimental data. The zinc fuming rate from the slag bath predicted by the model was validated through a macrostep validation process against the experimental study of Waladan et al. The model results predicted that the rate of ZnO reduction is controlled by the mass transfer of ZnO from the bulk slag to the slag-gas interface and the rate of the gas-carbon reaction for the simulation time studied. Although the model is based on zinc slag fuming, the basic approach could be expanded or applied to the CFD analysis of analogous systems.

  3. A System Computational Model of Implicit Emotional Learning.

    Science.gov (United States)

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction errors computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation.
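
    The prediction-error ingredient of such models is the classical delta-rule update familiar from Rescorla-Wagner conditioning theory. The sketch below shows only that ingredient (the paper's full model additionally performs statistical inference); the learning-rate value is a placeholder:

    ```python
    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Associative strength V driven by the prediction error (target - V).

        `trials` is a sequence of booleans: True for reinforced trials
        (target = lam), False for extinction trials (target = 0).
        """
        V, history = 0.0, []
        for reinforced in trials:
            target = lam if reinforced else 0.0
            V += alpha * (target - V)  # delta rule: learning driven by surprise
            history.append(round(V, 3))
        return history

    # 10 acquisition trials followed by 10 extinction trials
    print(rescorla_wagner([True] * 10 + [False] * 10))
    ```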

  4. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  5. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, perform functions as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure for several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  6. Cosmic logic: a computational model

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps

  7. Computation with Inverse States in a Finite Field FPα: The Muon Neutrino Mass, the Unified Strong-Electroweak Coupling Constant, and the Higgs Mass

    International Nuclear Information System (INIS)

    Dai, Yang; Borisov, Alexey B.; Boyer, Keith; Rhodes, Charles K.

    2000-01-01

    The construction of inverse states in a finite field F_{P^α} enables the organization of the mass scale with fundamental octets in an eight-dimensional index space that identifies particle states with residue class designations. Conformance with both CPT invariance and the concept of supersymmetry follows as a direct consequence of this formulation. Based on two parameters (P_α and g_α) that are anchored on a concordance of physical data, this treatment leads to (1) a prospective mass for the muon neutrino of approximately 27.68 meV, (2) a value of the unified strong-electroweak coupling constant α* = (34.26)^{-1} that is physically defined by the ratio of the electron neutrino and muon neutrino masses, and (3) a see-saw congruence connecting the Higgs, the electron neutrino, and the muon neutrino masses. Specific evaluation of the masses of the corresponding supersymmetric Higgs pair reveals that both particles are superheavy (> 10^18 GeV). No renormalization of the Higgs masses is introduced, since the calculational procedure yielding their magnitudes is intrinsically divergence-free. Further, the Higgs fulfills its conjectured role through the see-saw relation as the particle defining the origin of all particle masses, since the electron and muon neutrino systems, together with their supersymmetric partners, are the generators of the mass scale and establish the corresponding index space. Finally, since the computation of the Higgs masses is entirely determined by the modulus of the field P_α, which is fully defined by the large-scale parameters of the universe through the value of the universal gravitational constant G and the requirement for perfect flatness (Ω = 1.0), the see-saw congruence fuses the concepts of mass and space and creates a new unified archetype
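
    The elementary operation underlying the construction, inversion in a finite field, is easy to illustrate. The sketch below computes a multiplicative inverse modulo a prime via Fermat's little theorem; the paper's field F_{P^α} and index-space machinery are of course far richer, and the modulus here is a stand-in:

    ```python
    def inverse_in_Fp(a, p):
        """Multiplicative inverse of a modulo a prime p, via Fermat's
        little theorem: a**(p-2) = a**(-1) (mod p)."""
        if a % p == 0:
            raise ValueError("0 has no multiplicative inverse in F_p")
        return pow(a, p - 2, p)

    p = 2**61 - 1          # a Mersenne prime as a stand-in modulus
    a = 123456789
    inv = inverse_in_Fp(a, p)
    print((a * inv) % p)   # -> 1, confirming inv is the inverse of a
    ```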

  8. Computational modelling of the impact of AIDS on business.

    Science.gov (United States)

    Matthews, Alan P

    2007-07-01

    An overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling of the epidemic as a whole, and of its impact on a company. This paper gives an overview of epidemiological modelling, with an introduction to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality on a company, based on the anonymous HIV testing of company employees, and projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and then the ASSA model projection for each category of employees is adjusted so that it matches the measured prevalence in the year of testing. FURTHER WORK: Further techniques that could be developed are microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models for the business environment, such as models of entire sectors, and mapping of HIV prevalence in time and space, based on workplace and community data.
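
    One simple way to implement the calibration step described above, scaling a projected prevalence series so it matches the prevalence measured in the testing year, is sketched below. The APM's actual smoothing and per-category adjustment are more elaborate; all numbers are hypothetical:

    ```python
    def calibrate_projection(projected, measured_prevalence, test_year):
        """Scale a projected prevalence series so that it reproduces the
        prevalence measured in the testing year."""
        scale = measured_prevalence / projected[test_year]
        return {year: round(p * scale, 3) for year, p in projected.items()}

    # Hypothetical ASSA-style projection for one employee category
    projected = {2005: 0.14, 2006: 0.15, 2007: 0.16, 2008: 0.17}
    print(calibrate_projection(projected, measured_prevalence=0.12, test_year=2006))
    ```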

  9. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  10. Speeding up low-mass planetary microlensing simulations and modeling: The caustic region of influence

    International Nuclear Information System (INIS)

    Penny, Matthew T.

    2014-01-01

    Extensive simulations of planetary microlensing are necessary both before and after a survey is conducted: before to design and optimize the survey and after to understand its detection efficiency. The major bottleneck in such computations is the computation of light curves. However, for low-mass planets, most of these computations are wasteful, as most light curves do not contain detectable planetary signatures. In this paper, I develop a parameterization of the binary microlens that is conducive to avoiding light curve computations. I empirically find analytic expressions describing the limits of the parameter space that contain the vast majority of low-mass planet detections. Through a large-scale simulation, I measure the (in)completeness of the parameterization and the speed-up it is possible to achieve. For Earth-mass planets in a wide range of orbits, it is possible to speed up simulations by a factor of ∼30-125 (depending on the survey's annual duty-cycle) at the cost of missing ∼1% of detections (which is actually a smaller loss than for the arbitrary parameter limits typically applied in microlensing simulations). The benefits of the parameterization probably outweigh the costs for planets below 100 M⊕. For planets at the sensitivity limit of AFTA-WFIRST, simulation speed-ups of a factor ∼1000 or more are possible.

  11. Computational modeling and engineering in pediatric and congenital heart disease.

    Science.gov (United States)

    Marsden, Alison L; Feinstein, Jeffrey A

    2015-10-01

    Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact in modeling in single-ventricle patients, and provide an overview of emerging areas. Multiscale modeling combining patient-specific hemodynamics with reduced order (i.e., mathematically and computationally simplified) circulatory models has become the de-facto standard for modeling local hemodynamics and 'global' circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid structure interaction and uncertainty quantification), which lend realism both computationally and clinically to results, highlight novel computationally derived surgical methods for single-ventricle patients, and discuss areas in which modeling has begun to exert its influence including Kawasaki disease, fetal circulation, tetralogy of Fallot (and pulmonary tree), and circulatory support. Computational modeling is emerging as a crucial tool for clinical decision-making and evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases.

  12. Masses and Regge trajectories of triply heavy Ω{sub ccc} and Ω{sub bbb} baryons

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Zalak; Rai, Ajay Kumar [Sardar Vallabhbhai National Institute of Technology, Department of Applied Physics, Surat, Gujarat (India)

    2017-10-15

    The excited state masses of triply charm and triply bottom Ω baryons are exhibited in the present study. The masses are computed for 1S-5S, 1P-5P, 1D-4D and 1F-2F states in the Hypercentral Constituent Quark Model (hCQM) with the hyper Coulomb plus linear potential. The triply charm/bottom baryon masses are experimentally unknown so that the Regge trajectories are plotted using computed masses to assign the quantum numbers of these unknown states. (orig.)
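
    A Regge trajectory assigns quantum numbers by fitting total angular momentum linearly against squared mass, J = α0 + α′ M². A minimal fitting sketch with placeholder masses (not the paper's computed spectra):

    ```python
    import numpy as np

    def regge_fit(J, masses_gev):
        """Fit a linear Regge trajectory J = alpha0 + alpha_prime * M**2
        and return (alpha0, alpha_prime)."""
        M2 = np.asarray(masses_gev, dtype=float) ** 2
        alpha_prime, alpha0 = np.polyfit(M2, J, 1)  # slope, intercept
        return alpha0, alpha_prime

    # Placeholder masses (GeV) for a tower of states with J = 3/2, 5/2, ...
    J = [1.5, 2.5, 3.5, 4.5]
    masses = [4.80, 5.06, 5.30, 5.52]
    alpha0, alpha_prime = regge_fit(J, masses)
    print(f"alpha0 = {alpha0:.2f}, alpha' = {alpha_prime:.2f} GeV^-2")
    ```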

  13. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computer power, and the resulting improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is finer than a few kilometers: we shall have to find other models. The expertise gained in numerical analysis at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale

  14. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    The medical domain has become one of the most important areas of research owing to the richness of the huge amounts of medical information about the symptoms of diseases and how to distinguish between them to reach a correct diagnosis. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts make treatment decisions. This paper introduces four hybrid Rough-Granular Computing knowledge discovery models based on Rough Set Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of the various knowledge discovery models, which use different knowledge discovery techniques for data pre-processing, reduction, and data mining, supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to make enhancements in the frame of KDD processes for supervised learning using the Granular Computing methodology.

  15. Computational Modeling Using OpenSim to Simulate a Squat Exercise Motion

    Science.gov (United States)

    Gallo, C. A.; Thompson, W. K.; Lewandowski, B. E.; Humphreys, B. T.; Funk, J. H.; Funk, N. H.; Weaver, A. S.; Perusek, G. P.; Sheehan, C. C.; Mulugeta, L.

    2015-01-01

    Long duration space travel to destinations such as Mars or an asteroid will expose astronauts to extended periods of reduced gravity. Astronauts will use an exercise regimen for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Since the area available in the spacecraft for an exercise device is limited and gravity is not present to aid loading, compact resistance exercise device prototypes are being developed. Because it is difficult to rigorously test these proposed devices in space flight, computational modeling provides estimates of the muscle forces, joint torques and joint loads during exercise, giving insight into a device's efficacy in protecting the musculoskeletal health of astronauts.

  16. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
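
    The DUA step is first-order ("sandwich") propagation of parameter uncertainties through the model derivatives. GRESS and ADGEN obtain those derivatives by computer calculus (automatic differentiation); the sketch below substitutes finite differences, and the toy model function is invented, so this is only a schematic of the idea.

        import numpy as np

        def model(p):
            # Stand-in for a large simulation code; p = parameter vector.
            k, q = p
            return q * np.exp(-k) + k**2

        def gradient(f, p, h=1e-6):
            # Central finite differences stand in for the computer-calculus
            # (automatic differentiation) derivatives that GRESS/ADGEN supply.
            p = np.asarray(p, dtype=float)
            g = np.zeros_like(p)
            for i in range(len(p)):
                dp = np.zeros_like(p)
                dp[i] = h
                g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
            return g

        p0 = np.array([0.5, 2.0])       # nominal parameter values
        sigma = np.array([0.05, 0.1])   # parameter standard deviations

        g = gradient(model, p0)
        var_y = np.sum((g * sigma) ** 2)  # first-order (linear) propagation
        print(f"result = {model(p0):.4f} +/- {np.sqrt(var_y):.4f}")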

  17. On turbulence models for rod bundle flow computations

    International Nuclear Information System (INIS)

    Hazi, Gabor

    2005-01-01

    Commercial computational fluid dynamics codes have more than one turbulence model built in, and it is the user's responsibility to choose a model suitable for the problem being studied. Over the last decade, several computations using computational fluid dynamics have been presented for the simulation of various problems in the nuclear industry. A common feature of a number of those simulations is that they were performed using the standard k-ε turbulence model without justification of the choice of model, and the simulation results were rarely satisfactory. In this paper, we consider the flow in a fuel rod bundle as a case study and discuss why the application of the standard k-ε model fails to give reasonable results in this situation. We also show that a turbulence model based on the Reynolds stress transport equations can provide qualitatively correct results. Our aim is, generally, pedagogical: we would like to call the reader's attention to the fact that turbulence models have to be selected on the basis of theoretical considerations and/or adequate information obtained from measurements.

  18. Assessment of weld thickness loss in offshore pipelines using computed radiography and computational modeling

    International Nuclear Information System (INIS)

    Correa, S.C.A.; Souza, E.M.; Oliveira, D.F.; Silva, A.X.; Lopes, R.T.; Marinho, C.; Camerini, C.S.

    2009-01-01

    In order to guarantee the structural integrity of oil plants it is crucial to monitor the amount of weld thickness loss in offshore pipelines. In spite of its relevance, however, this parameter is very difficult to determine, due to both the large diameter of most pipes and the complexity of the multi-variable system involved. In this study, computational modeling based on the Monte Carlo MCNPX code is combined with computed radiography to estimate weld thickness loss in large-diameter offshore pipelines. Results show that computational modeling is a powerful tool for estimating the intensity variations that weld thickness variations produce in radiographic images, and that it can be combined with computed radiography to assess weld thickness loss in offshore and subsea pipelines.
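
    Underlying such estimates is the exponential attenuation law I = I₀·exp(−μt): a local change in penetrated thickness appears as an intensity ratio in the radiographic image. A hedged sketch of the inversion, with an illustrative attenuation coefficient rather than one taken from the paper:

        import numpy as np

        MU_STEEL = 0.06  # illustrative linear attenuation coefficient (1/mm)
                         # at an assumed tube energy; not a value from the paper

        def thickness_change(I_ref, I_meas, mu=MU_STEEL):
            """Thickness loss from the intensity ratio, I = I0*exp(-mu*t):
            delta_t = ln(I_meas / I_ref) / mu  (positive = material lost)."""
            return np.log(np.asarray(I_meas) / I_ref) / mu

        # Reference intensity over sound weld metal vs. a corroded region
        print(thickness_change(I_ref=1000.0, I_meas=[1000.0, 1120.0, 1260.0]))
        # -> ~0.0, ~1.9, ~3.9 mm of wall loss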

  19. Modeling of heat and mass transfer processes during core melt discharge from a reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Dinh, T.N.; Bui, V.A.; Nourgaliev, R.R. [Royal Institute of Technology, Stockholm (Sweden)] [and others]

    1995-09-01

    The objective of the paper is to study heat and mass transfer processes related to core melt discharge from a reactor vessel in a severe light water reactor accident. The phenomenology of the issue includes (1) melt convection in, and heat transfer from, the melt pool in contact with the vessel lower head wall; (2) fluid dynamics and heat transfer of the melt flow in the growing discharge hole; and (3) multi-dimensional heat conduction in the ablating lower head wall. A program of model development, validation and application is underway (i) to analyse the dominant physical mechanisms determining the characteristics of the lower head ablation process; (ii) to develop and validate efficient analytic/computational methods for estimating heat and mass transfer under phase-change conditions in irregular moving-boundary domains; and (iii) to investigate numerically the melt discharge phenomena in a reactor-scale situation and, in particular, the sensitivity of the melt discharge transient to structural differences and various in-vessel melt progression scenarios. The paper presents recent results of the analysis and model development work supporting the simulant melt-structure interaction experiments.

  20. Indicators of Mass in Spherical Stellar Atmospheres

    Science.gov (United States)

    Lester, John B.; Dinshaw, Rayomond; Neilson, Hilding R.

    2013-04-01

    Mass is the most important stellar parameter, but it is not directly observable for a single star. Spherical model stellar atmospheres are explicitly characterized by their luminosity (L⋆), mass (M⋆), and radius (R⋆), and observations can now determine L⋆ and R⋆ directly. We computed spherical model atmospheres for red giants and for red supergiants, holding L⋆ and R⋆ constant at values characteristic of each type of star while varying M⋆, and we searched the predicted flux spectra and surface-brightness distributions for features that change with mass. For both stellar classes we found similar signatures of the star's mass in both the surface-brightness distribution and the flux spectrum. The spectral features have been used previously to determine log10(g), and now that the luminosity and radius of a non-binary red giant or red supergiant can be observed, spherical model stellar atmospheres can be used to determine a star's mass from currently achievable spectroscopy. The surface-brightness variations with mass are slightly smaller than current stellar imaging can resolve, but they offer the advantage of being less sensitive to the detailed chemical composition of the atmosphere.
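
    Once spectroscopy fixes log10(g) and a direct method (e.g., interferometry) fixes R⋆, the mass follows from g = GM⋆/R⋆². A small sketch in cgs units with illustrative red-giant numbers:

        G_CGS = 6.674e-8          # cm^3 g^-1 s^-2
        R_SUN_CM = 6.957e10       # solar radius in cm
        M_SUN_G = 1.989e33        # solar mass in g

        def stellar_mass(log_g_cgs, radius_rsun):
            """Mass (solar units) from surface gravity and radius: M = g R^2 / G."""
            g = 10.0 ** log_g_cgs              # surface gravity, cm s^-2
            R = radius_rsun * R_SUN_CM         # radius, cm
            return g * R * R / G_CGS / M_SUN_G

        # Illustrative red giant: log g = 1.5 (cgs), R = 30 R_sun
        print(f"M = {stellar_mass(1.5, 30.0):.2f} M_sun")   # -> about 1.0 M_sun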

  1. Computer-aided detection of breast masses: Four-view strategy for screening mammography

    International Nuclear Information System (INIS)

    Wei Jun; Chan Heangping; Zhou Chuan; Wu Yita; Sahiner, Berkman; Hadjiiski, Lubomir M.; Roubidoux, Marilyn A.; Helvie, Mark A.

    2011-01-01

    Purpose: To improve the performance of a computer-aided detection (CAD) system for mass detection by using four-view information in screening mammography. Methods: The authors developed a four-view CAD system that emulates radiologists' reading by using the craniocaudal and mediolateral oblique views of the ipsilateral breast to reduce false positives (FPs) and the corresponding views of the contralateral breast to detect asymmetry. The CAD system consists of four major components: (1) Initial detection of breast masses on individual views, (2) information fusion of the ipsilateral views of the breast (referred to as two-view analysis), (3) information fusion of the corresponding views of the contralateral breast (referred to as bilateral analysis), and (4) fusion of the four-view information with a decision tree. The authors collected two data sets for training and testing of the CAD system: A mass set containing 389 patients with 389 biopsy-proven masses and a normal set containing 200 normal subjects. All cases had four-view mammograms. The true locations of the masses on the mammograms were identified by an experienced MQSA radiologist. The authors randomly divided the mass set into two independent sets for cross validation training and testing. The overall test performance was assessed by averaging the free response receiver operating characteristic (FROC) curves of the two test subsets. The FP rates during the FROC analysis were estimated by using the normal set only. The jackknife free-response ROC (JAFROC) method was used to estimate the statistical significance of the difference between the test FROC curves obtained with the single-view and the four-view CAD systems. Results: Using the single-view CAD system, the breast-based test sensitivities were 58% and 77% at the FP rates of 0.5 and 1.0 per image, respectively. With the four-view CAD system, the breast-based test sensitivities were improved to 76% and 87% at the corresponding FP rates, respectively

  2. Models for predicting the mass of lime fruits by some engineering properties.

    Science.gov (United States)

    Miraei Ashtiani, Seyed-Hassan; Baradaran Motie, Jalal; Emadi, Bagher; Aghkhani, Mohammad-Hosein

    2014-11-01

    Grading fruits based on mass is important in packaging, reduces waste, and increases the marketing value of agricultural produce. The aim of this study was mass modeling of two major cultivars of Iranian limes based on engineering attributes. The models were classified into three groups: (1) single and multiple variable regressions of lime mass on dimensional characteristics; (2) single and multiple variable regressions of lime mass on projected areas; and (3) single regressions of lime mass on actual volume and on volumes calculated by treating the fruit as an ellipsoid or a prolate spheroid. All properties considered in the current study were found to be statistically significant. The models predicting lime mass from minor diameter and from first projected area were the most appropriate in the first and second classifications, respectively. In the third classification, the best model was obtained on the basis of the prolate spheroid volume. It was concluded that a suitable system for grading limes by mass can be based on the prolate spheroid volume.
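
    As an illustration of the third classification, the sketch below computes the prolate spheroid volume V = (π/6)·L·D² from the fruit length L and minor diameter D and regresses mass on it; the measurements are placeholders, not the study's data.

        import numpy as np

        def prolate_spheroid_volume(length_mm, diam_mm):
            # V = (pi/6) * L * D^2 for a prolate spheroid with major axis L
            return np.pi / 6.0 * length_mm * diam_mm**2

        # Placeholder measurements (mm, g), not the study's data
        length = np.array([48.0, 52.0, 55.0, 60.0])
        diam = np.array([42.0, 45.0, 47.0, 50.0])
        mass = np.array([44.1, 54.6, 61.9, 74.8])

        V = prolate_spheroid_volume(length, diam)
        k1, k0 = np.polyfit(V, mass, 1)          # single regression M = k1*V + k0
        print(f"M = {k1:.5f} * V + {k0:.2f}")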

  3. Baryons electromagnetic mass splittings in potential models

    International Nuclear Information System (INIS)

    Genovese, M.; Richard, J.-M.; Silvestre-Brac, B.; Varga, K.

    1998-01-01

    We study electromagnetic mass splittings of charmed baryons. We point out discrepancies among theoretical predictions in non-relativistic potential models; none of these predictions seems supported by experimental data. A new calculation is presented

  4. A Mass Loss Penetration Model to Investigate the Dynamic Response of a Projectile Penetrating Concrete considering Mass Abrasion

    Directory of Open Access Journals (Sweden)

    NianSong Zhang

    2015-01-01

    A study of the dynamic response of a projectile penetrating concrete is conducted. The evolution of projectile mass loss and the effect of mass loss on penetration resistance are investigated using theoretical methods. A projectile penetration model accounting for projectile mass loss is established in three stages, namely, a cratering phase, a mass-loss penetration phase, and a remaining rigid-projectile penetration phase.

  5. Relationship between body mass, lean mass, fat mass, and limb bone cross-sectional geometry: Implications for estimating body mass and physique from the skeleton.

    Science.gov (United States)

    Pomeroy, Emma; Macintosh, Alison; Wells, Jonathan C K; Cole, Tim J; Stock, Jay T

    2018-05-01

    Estimating body mass from skeletal dimensions is widely practiced, but methods for estimating its components (lean and fat mass) are poorly developed. The ability to estimate these characteristics would offer new insights into the evolution of body composition and its variation relative to past and present health. This study investigates the potential of long bone cross-sectional properties as predictors of body, lean, and fat mass. Humerus, femur and tibia midshaft cross-sectional properties were measured by peripheral quantitative computed tomography in a sample of young adult women (n = 105) characterized by a range of activity levels. Body composition was estimated from bioimpedance analysis. Lean mass correlated most strongly with both upper and lower limb bone properties (r values up to 0.74), while fat mass showed weak correlations (r ≤ 0.29). Estimation equations generated from tibial midshaft properties indicated that lean mass could be estimated relatively reliably, with some improvement using logged data and including bone length in the models (minimum standard error of estimate = 8.9%). Body mass prediction was less reliable and fat mass was only poorly predicted (standard errors of estimate ≥ 11.9% and > 33%, respectively). Lean mass can thus be predicted more reliably than body mass from limb bone cross-sectional properties. The results highlight the potential for studying evolutionary trends in lean mass from skeletal remains, and have implications for understanding the relationship between bone morphology and body mass or composition. © 2018 The Authors. American Journal of Physical Anthropology Published by Wiley Periodicals, Inc.
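
    A hedged sketch of the kind of estimation equation described above: a multiple regression of logged lean mass on a logged bone property plus logged bone length, reporting the percent standard error of estimate (%SEE). All data below are synthetic placeholders, not the study's measurements.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 105
        J = rng.lognormal(mean=7.6, sigma=0.25, size=n)   # placeholder bone rigidity proxy
        length = rng.normal(360, 20, size=n)              # placeholder tibia length (mm)
        lean = 2.0 * J**0.35 * (length / 360)**1.2 * rng.lognormal(0, 0.08, n)

        # Multiple regression on logged data: log(lean) ~ log(J) + log(length)
        X = np.column_stack([np.ones(n), np.log(J), np.log(length)])
        beta, *_ = np.linalg.lstsq(X, np.log(lean), rcond=None)
        resid = np.log(lean) - X @ beta
        see_pct = 100 * (np.exp(np.sqrt(np.sum(resid**2) / (n - 3))) - 1)
        print(f"%SEE of the lean-mass estimate: {see_pct:.1f}%")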

  6. Modification of the finite element heat and mass transfer code (FEHMN) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1995-01-01

    The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have developed hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to implement a fully kinetic formulation computationally, and different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The chosen algorithm requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. They also show that the new capabilities of FEHMN can be used to refine and buttress existing Yucca Mountain radionuclide transport studies.

  7. Large mass storage facility

    International Nuclear Information System (INIS)

    Peskin, A.M.

    1978-01-01

    The report of a committee to study the questions surrounding possible acquisition of a large mass-storage device is presented. The current computing environment at BNL and justification for an online large mass storage device are briefly discussed. Possible devices to meet the requirements of large mass storage are surveyed, including future devices. The future computing needs of BNL are prognosticated. 2 figures, 4 tables

  8. Introduction to computation and modeling for differential equations

    CERN Document Server

    Edsberg, Lennart

    2008-01-01

    An introduction to scientific computing for differential equations. Introduction to Computation and Modeling for Differential Equations provides a unified and integrated view of numerical analysis, mathematical modeling in applications, and programming to solve differential equations, which is essential in problem-solving across many disciplines, such as engineering, physics, and economics. This book successfully introduces readers to the subject through a unique "Five-M" approach: Modeling, Mathematics, Methods, MATLAB, and Multiphysics. This approach facilitates a thorough understanding of h

  9. A novel patient-specific model to compute coronary fractional flow reserve.

    Science.gov (United States)

    Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo

    2014-09-01

    The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
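
    Whatever the upstream CFD-LPM machinery, the index itself is simple: FFR is the ratio of the cycle-averaged pressure distal to the stenosis to the cycle-averaged aortic pressure under hyperemia. A sketch with synthetic pressure traces standing in for simulation output (the 0.80 cutoff is the commonly used clinical threshold, not a value from this paper):

        import numpy as np

        def ffr(p_distal, p_aortic):
            """Fractional flow reserve: mean distal pressure / mean aortic
            pressure over one or more cardiac cycles (hyperemic conditions)."""
            return np.mean(p_distal) / np.mean(p_aortic)

        # Synthetic pressure traces (mmHg) standing in for CFD-LPM output
        t = np.linspace(0.0, 1.0, 1000)                    # one cardiac cycle
        p_a = 93 + 20 * np.sin(2 * np.pi * t) ** 2         # aortic pressure
        p_d = 0.78 * p_a + 2 * np.sin(4 * np.pi * t)       # distal to stenosis

        print(f"FFR = {ffr(p_d, p_a):.2f}  (<= 0.80 commonly read as significant)")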

  10. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis) in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  11. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation … are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately … to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC…

  12. VIRTUAL REALITY FOR MANAGEMENT OF SITUATIONAL AWARENESS DURING GLOBAL MASS GATHERINGS

    Directory of Open Access Journals (Sweden)

    A. S. Karsakov

    2017-01-01

    This paper presents a training technology for the staff of mass events, developing skills for acting in large gatherings of people, including crowd-dynamics management and actions in extreme situations caused by panic. The technology is based on a multi-agent model of crowd dynamics with dynamically re-computable navigation fields. We implemented a software system that provides a collaborative and distributed process for training activities in a virtual reality environment. The following characteristics of the developed software system, obtained from experimental studies, were analyzed: the computational intensity of the simulations, the scalability of the rendering system, and the reactivity of the final system when rendering computationally intensive scenes. The proposed models and infrastructure for training through collaborative immersion in virtual reality can improve the situational awareness of event staff prior to the event. The developed technology is a unique tool for improving the quality and safety of one-off and unique events involving large masses of people, including mass gatherings for which no retrospective experience exists. The technology was tested during the Kumbh Mela festival in Ujjain, India.
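
    At the core of such crowd models, each agent's preferred velocity follows the (re-computable) navigation field toward an exit. A toy sketch of one such update step; the field, parameters, and function names are my own illustrative choices, not the paper's implementation:

        import numpy as np

        def navigation_velocity(dist_field, pos, speed=1.4, cell=0.5):
            """Steer an agent down the gradient of a distance-to-goal field.
            dist_field[i, j]: walking distance (m) from cell (i, j) to the exit."""
            gy, gx = np.gradient(dist_field, cell)       # field gradient (y, x)
            i, j = int(pos[1] / cell), int(pos[0] / cell)
            g = np.array([gx[i, j], gy[i, j]])
            n = np.linalg.norm(g)
            return -speed * g / n if n > 0 else np.zeros(2)

        # Toy field: distance to an exit at the origin on a 20 m x 20 m floor
        xs = np.arange(0, 20, 0.5)
        field = np.hypot(*np.meshgrid(xs, xs))
        pos = np.array([10.0, 5.0])
        dt = 0.1
        for _ in range(5):                 # a few Euler steps of one agent
            pos = pos + dt * navigation_velocity(field, pos)
        print(pos)                         # the agent has moved toward the exit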

  13. Mass Transfer Model for a Breached Waste Package

    International Nuclear Information System (INIS)

    Hsu, C.; McClure, J.

    2004-01-01

    The degradation of waste packages, which are used for the disposal of spent nuclear fuel in the repository, can result in configurations that may increase the probability of criticality. A mass transfer model is developed for a breached waste package to account for the entrainment of insoluble particles. In combination with radionuclide decay, soluble advection, and colloidal transport, a complete mass balance of nuclides in the waste package becomes available. The entrainment equations are derived from dimensionless parameters such as the drag coefficient and Reynolds number, based on the assumption that insoluble particles are subject to buoyant, gravitational, and drag forces only. Particle size distributions are used to calculate the entrained concentration, along with a geochemistry model abstraction to calculate the soluble concentration and a colloid model abstraction to calculate the colloid concentration and radionuclide sorption. Results are compared with the base-case geochemistry model, which considers only soluble advective loss.
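
    The stated force balance (gravity, buoyancy, drag) reduces to a terminal-velocity calculation once a drag law C_d(Re) is chosen; entrainment then hinges on how this velocity compares with the local upward flow. The sketch below uses the Schiller-Naumann correlation, which is an assumption of mine, not necessarily the report's choice:

        import math

        def settling_velocity(d, rho_p, rho_f, mu, g=9.81):
            """Terminal velocity of a sphere from the gravity/buoyancy/drag
            balance, iterating Cd(Re) with the Schiller-Naumann correlation
            (an assumption here; valid for Re below roughly 1000)."""
            v = g * d**2 * (rho_p - rho_f) / (18 * mu)   # Stokes initial guess
            for _ in range(50):                          # fixed-point iteration
                re = max(rho_f * v * d / mu, 1e-12)
                cd = 24 / re * (1 + 0.15 * re**0.687)
                v = math.sqrt(4 * g * d * (rho_p - rho_f) / (3 * cd * rho_f))
            return v

        # 100-micron heavy-oxide-like particle (rho ~ 10000 kg/m^3) in water
        print(f"{settling_velocity(1e-4, 1.0e4, 1.0e3, 1.0e-3):.4f} m/s")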

  14. Pseudoscalar meson masses in the quark model

    International Nuclear Information System (INIS)

    Karl, G.

    1976-10-01

    Pseudoscalar meson masses and sum rules are compared in two different limits of a quark model with four quarks. The conventional limit corresponds to a heavy c anti-c state and generalizes ideal mixing in a nonet. The second limit corresponds to a missing SU(4) unitary singlet and appears more relevant to the masses of π, K, eta, eta'. If SU(3) is broken only by the mass difference between the strange and nonstrange quarks, the physical masses imply that the u anti-u, d anti-d and s anti-s pairs account for only 33% of the composition of the eta'(960), while for the eta(548) this fraction is 86%. If some of the remaining matter is in the form of the constituents of J/psi, the relative rate of the decays J/psi → eta'γ vs. J/psi → etaγ is accounted for in satisfactory agreement with experiment. (author)

  15. Computational model for dosimetric purposes in dental procedures

    International Nuclear Information System (INIS)

    Kawamoto, Renato H.; Campos, Tarcisio R.

    2013-01-01

    This study aims to develop a computational model of the oral region for dosimetric purposes, based on the computational tools SISCODES and MCNP-5, to predict deterministic effects and minimize stochastic effects caused by ionizing radiation in radiodiagnosis. Based on a set of digital information provided by computed tomography, a three-dimensional voxel model was created, with its tissues represented. The model was exported to the MCNP code. In association with SISCODES, the Monte Carlo N-Particle Transport Code (MCNP-5) was used to reproduce the statistical process of the interaction of nuclear particles with human tissues. The study will serve as a source of data for dosimetric studies in the oral region, helping to predict deterministic effects and minimize stochastic effects of ionizing radiation.

  16. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground-state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  17. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values, because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
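
    Structurally, the idea is y(x) ≈ Σᵢ wᵢ(x)·fᵢ(x, θ), with the weight functions wᵢ(x) and calibration parameters θ inferred in a Bayesian framework. The sketch below shows only the forward mixture prediction; the stand-in models and the softmax weight parameterization are illustrative, not the paper's.

        import numpy as np

        def model_a(x, theta):                # two stand-in computer models
            return np.sin(theta * x)          # with different "physics"

        def model_b(x, theta):                # theta unused: a cruder model
            return 1.0 - 0.5 * x

        def weights(x, w_params):
            """Input-dependent mixture weights via a softmax of linear score
            functions -- an illustrative parameterization, not the paper's."""
            scores = np.column_stack([a + b * x for a, b in w_params])
            e = np.exp(scores - scores.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        def mixture_prediction(x, theta, w_params):
            w = weights(x, w_params)
            outputs = np.column_stack([model_a(x, theta), model_b(x, theta)])
            return np.sum(w * outputs, axis=1)

        x = np.linspace(0, 2, 5)
        print(mixture_prediction(x, theta=1.3, w_params=[(0.0, 1.0), (0.5, -1.0)]))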

  18. Midlatitude Forcing Mechanisms for Glacier Mass Balance Investigated Using General Circulation Models

    NARCIS (Netherlands)

    Reichert, B.K.; Bengtsson, L.; Oerlemans, J.

    2001-01-01

    A process-oriented modeling approach is applied in order to simulate glacier mass balance for individual glaciers using statistically downscaled general circulation models (GCMs). Glacier-specific seasonal sensitivity characteristics based on a mass balance model of intermediate complexity are used

  19. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is chosen automatically via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
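
    For reference, the Sersic function used in such light models has the form I(r) = Iₑ·exp(−bₙ[(r/Rₑ)^(1/n) − 1]). A minimal sketch with the textbook approximation bₙ ≈ 2n − 1/3 (my choice of approximation, not taken from this paper):

        import numpy as np

        def sersic(r, i_e, r_e, n):
            """Sersic surface brightness; b_n ~ 2n - 1/3 (leading terms of the
            Ciotti & Bertin expansion) so that r_e encloses half the light."""
            b_n = 2.0 * n - 1.0 / 3.0
            return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

        r = np.linspace(0.1, 5.0, 6)          # radii in arcsec
        # A two-component superposition: de Vaucouleurs-like bulge + exponential disk
        light = sersic(r, i_e=1.0, r_e=1.2, n=4.0) + sersic(r, i_e=0.3, r_e=2.5, n=1.0)
        print(light)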

  20. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  1. Mass and overall optimization of radiator design

    Directory of Open Access Journals (Sweden)

    Shilo G. N.

    2011-04-01

    Models of a finned radiator are built using computer-aided engineering systems. Relations between the sizes of the construction elements and the boundaries of the operability domain are obtained for radiators of minimal mass, minimal volume and minimal overall dimensions. An iterative algorithm is used, which applies the non-linear characteristics of the weight functions and the allowable input heat resistances of the radiator. The mass and overall parameters of standard and optimal radiators are determined by different strategies.

  2. Getting computer models to communicate

    International Nuclear Information System (INIS)

    Caremoli, Ch.; Erhard, P.

    1999-01-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution; to couple the existing validated numerical models together so that they work as one. (authors)

  3. Mass generation in perturbed massless integrable models

    International Nuclear Information System (INIS)

    Controzzi, D.; Mussardo, G.

    2005-01-01

    We extend form-factor perturbation theory to non-integrable deformations of massless integrable models, in order to address the problem of mass generation in such systems. With respect to the standard renormalisation group analysis this approach is more suitable for studying the particle content of the perturbed theory. Analogously to the massive case, interesting information can be obtained already at first order, such as the identification of the operators which create a mass gap and those which induce the confinement of the massless particles in the perturbed theory

  4. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    Science.gov (United States)

    Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g, firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024

  5. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Directory of Open Access Journals (Sweden)

    Alberto Mazzoni

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
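
    A sketch of a weighted-sum proxy of this kind, built from the population-summed synaptic currents of a LIF simulation; the weight and delay below are illustrative placeholders, and the actual best-fit coefficients are those reported in the paper.

        import numpy as np

        def lfp_proxy(i_ampa, i_gaba, weight=1.65, delay_steps=6):
            """Weighted combination of population-summed synaptic currents as
            an LFP proxy. `weight` and `delay_steps` (>= 1 sample) are
            illustrative placeholders; the paper fits the real coefficients."""
            # Delay the AMPA contribution by `delay_steps` samples
            delayed = np.concatenate([np.full(delay_steps, i_ampa[0]),
                                      i_ampa[:-delay_steps]])
            return delayed - weight * i_gaba

        # Placeholder population currents at 1 ms resolution (2 s of activity)
        t = np.arange(2000)
        rng = np.random.default_rng(1)
        i_ampa = np.sin(2 * np.pi * t / 250) + 0.1 * rng.standard_normal(t.size)
        i_gaba = -0.6 * np.sin(2 * np.pi * (t - 20) / 250)
        print(lfp_proxy(i_ampa, i_gaba)[:5])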

  6. Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?

    OpenAIRE

    Bogataj Habjan, Kristina; Pucihar, Andreja

    2017-01-01

    This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact on, and importance for, cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in...

  7. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  8. Progresses on the computation of added masses for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1985-01-01

    The problem of coupled vibrations of fluids and structures is analyzed; in the case of irrotational incompressible fluid fields the effect is modelled as an added mass matrix. The Modified Boundary Elements technique is used; a particular case (cylindrical reservoirs with sloshing) and the general case are examined. (orig.)

  9. Distribution of mass in the planetary system and solar nebulae

    Energy Technology Data Exchange (ETDEWEB)

    Weidenschilling, S J [Carnegie Institution of Washington, D.C. (USA). Dept. of Terrestrial Magnetism

    1977-09-01

    A model 'solar nebula' is constructed by adding the solar complement of light elements to each planet, using recent models of planetary compositions. Uncertainties in this approach are estimated. The computed surface density varies approximately as r^(-3/2). Mercury, Mars and the asteroid belt are anomalously low in mass, but processes exist which would preferentially remove matter from these regions. Planetary masses and compositions are generally consistent with a monotonic density distribution in the primordial solar nebula.
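
    A hedged sketch of the reconstruction: spread each solar-composition-augmented planetary mass over an annulus around its orbit and fit the resulting surface densities with a power law. The augmented masses below are placeholders, not the paper's values.

        import numpy as np

        AU = 1.496e11        # m
        M_EARTH = 5.972e24   # kg

        # (orbital radius in AU, solar-composition-augmented mass in Earth
        # masses) -- illustrative placeholder values, not the paper's table
        planets = [(1.0, 240.0), (5.2, 1300.0), (9.6, 1100.0), (19.2, 600.0)]

        r, sigma = [], []
        for a, m_aug in planets:
            r_in, r_out = a / np.sqrt(2), a * np.sqrt(2)  # annulus around orbit
            area = np.pi * ((r_out * AU) ** 2 - (r_in * AU) ** 2)
            r.append(a)
            sigma.append(m_aug * M_EARTH / area)          # kg m^-2

        slope, _ = np.polyfit(np.log(r), np.log(sigma), 1)
        print(f"fitted power-law exponent: {slope:.2f}")  # paper: about -3/2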

  10. Computational Design Modelling : Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine the computational processes within this field with the aim of developing a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches we have developed and studied over recent years. The outcome is a set of new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  11. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules, etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives.

  12. Static-light meson masses from twisted mass lattice QCD

    International Nuclear Information System (INIS)

    Jansen, Karl; Michael, Chris; Shindler, Andrea; Wagner, Marc

    2008-08-01

    We compute the static-light meson spectrum using two-flavor Wilson twisted mass lattice QCD. We have considered five different values for the light quark mass, corresponding to 300 MeV ≲ m_PS ≲ 600 MeV, allowing an extrapolation to the physical u/d quark mass and predictions for B and B_s mesons. (orig.)

  13. Prolegomena to any future computer evaluation of the QCD mass spectrum

    International Nuclear Information System (INIS)

    Parisi, G.

    1984-01-01

    In recent years we have seen many computer-based evaluations of the QCD mass spectrum. At the present moment a reliable control of the systematic errors has not yet been achieved; as the main sources of systematic error are the non-zero value of the lattice spacing and the finite size of the box in which the hadrons are confined, we need to perform extensive computations on lattices of different shapes in order to be able to extrapolate to zero lattice spacing and to an infinite box. While it is necessary to go to larger lattices, we also need efficient algorithms in order to minimize the statistical and systematic errors and to decrease the CPU time (and the memory) used in the computation. In these lectures the reader will find a review of the most common algorithms (with the exclusion of the application to gauge theories of the hopping parameter expansion in the form proposed; that can be found in Montvay's contribution to this school); the weak points of the various algorithms are discussed and, when possible, ways to improve them are suggested. For the reader's convenience the basic formulae are recalled in the second section; in section three there is a discussion of finite-volume effects, while the effects of a finite lattice spacing are discussed in section four; techniques for fighting the statistical errors and critical slowing down are found in sections five and six, respectively. Finally, the conclusions are in section seven.

  14. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    Science.gov (United States)

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. The computer models are Flash and NetLogo environments that make three domains of chemistry simultaneously available: macroscopic, submicroscopic, and symbolic. Students interact with the computer models to answer assessment…

  15. Modeling the influence of coupled mass transfer processes on mass flux downgradient of heterogeneous DNAPL source zones.

    Science.gov (United States)

    Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M

    2018-04-01

    Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of

  16. Computational challenges in modeling gene regulatory events.

    Science.gov (United States)

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  17. Heterogeneous studies in pulping of wood: Modelling mass transfer of alkali

    OpenAIRE

    Simão, João P. F.; Egas, Ana P. V.; Carvalho, M. Graça; Baptista, Cristina M. S. G.; Castro, José Almiro A. M.

    2008-01-01

    In this paper a heterogeneous lumped-parameter model is proposed to describe the mass transfer of effective alkali during the kraft pulping of wood. The model, based on the spatial mean of the effective-alkali concentration profile along the chip thickness, enables estimation of the effective diffusion coefficient that characterizes the internal resistance to mass transfer, as well as of the contribution of the external resistance to mass transfer, which has often been neglected. http://www.sc...
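
    A minimal sketch of the two-resistance idea behind such a lumped model: external (liquor film) and internal (chip) resistances act in series and drive the mean chip concentration toward the liquor concentration. The first-order form and the coefficient values are illustrative assumptions, not the paper's fitted model.

        import numpy as np

        def chip_alkali_profile(c_liquor, k_ext, k_int, t_end=3600.0, dt=1.0):
            """Mean effective-alkali concentration inside a chip, with external
            (liquor film) and internal (diffusion) resistances in series:
            1/K = 1/k_ext + 1/k_int  (a lumped two-resistance sketch)."""
            K = 1.0 / (1.0 / k_ext + 1.0 / k_int)   # overall coefficient, 1/s
            c = 0.0
            ts = np.arange(0.0, t_end, dt)
            out = np.empty_like(ts)
            for n in range(ts.size):
                c += dt * K * (c_liquor - c)        # explicit Euler step
                out[n] = c
            return ts, out

        ts, c = chip_alkali_profile(c_liquor=1.0, k_ext=5e-3, k_int=1e-3)
        print(f"relative saturation after 1 h: {c[-1]:.2f}")   # -> ~0.95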

  18. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is increasingly compounded by the growing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emissions on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate with the main process a suite of shadow processes that execute concurrently, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault-tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction, and is better suited for compute-intensive execution models, where up to 30% greater profit can be achieved due to reduced energy consumption.
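
    The energy argument can be seen in a few lines: if dynamic power scales roughly as speed cubed (a common CMOS assumption, not a figure from the paper), a shadow running at reduced speed costs far less than a second full-speed replica in the failure-free case.

        def energy(speed, work=1.0, p_static=0.1):
            """Energy to finish `work` at a given relative speed, assuming
            dynamic power ~ speed^3 (a common CMOS model, assumed here):
            E = (p_static + speed^3) * (work / speed)."""
            return (p_static + speed**3) * (work / speed)

        main = energy(1.0)
        traditional = 2 * main          # two full-speed replicas
        shadow = main + energy(0.5)     # one shadow at half speed
        print(f"traditional replication: {traditional:.2f}")
        print(f"shadow replication:      {shadow:.2f}  (failure-free case)")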

  19. External validation of a publicly available computer assisted diagnostic tool for mammographic mass lesions with two high prevalence research datasets.

    Science.gov (United States)

    Benndorf, Matthias; Burnside, Elizabeth S; Herda, Christoph; Langer, Mathias; Kotter, Elmar

    2015-08-01

    Lesions detected at mammography are described with a highly standardized terminology: the breast imaging-reporting and data system (BI-RADS) lexicon. Up to now, no validated semantic computer assisted classification algorithm exists to interactively link combinations of morphological descriptors from the lexicon to a probabilistic risk estimate of malignancy. The authors therefore aim at the external validation of the mammographic mass diagnosis (MMassDx) algorithm. A classification algorithm like MMassDx must perform well in a variety of clinical circumstances and in datasets that were not used to generate the algorithm in order to ultimately become accepted in clinical routine. The MMassDx algorithm uses a naïve Bayes network and calculates post-test probabilities of malignancy based on two distinct sets of variables, (a) BI-RADS descriptors and age ("descriptor model") and (b) BI-RADS descriptors, age, and BI-RADS assessment categories ("inclusive model"). The authors evaluate both the MMassDx (descriptor) and MMassDx (inclusive) models using two large publicly available datasets of mammographic mass lesions: the digital database for screening mammography (DDSM) dataset, which contains two subsets from the same examinations-a medio-lateral oblique (MLO) view and cranio-caudal (CC) view dataset-and the mammographic mass (MM) dataset. The DDSM contains 1220 mass lesions and the MM dataset contains 961 mass lesions. The authors evaluate discriminative performance using area under the receiver-operating-characteristic curve (AUC) and compare this to the BI-RADS assessment categories alone (i.e., the clinical performance) using the DeLong method. The authors also evaluate whether assigned probabilistic risk estimates reflect the lesions' true risk of malignancy using calibration curves. The authors demonstrate that the MMassDx algorithms show good discriminatory performance. AUC for the MMassDx (descriptor) model in the DDSM data is 0.876/0.895 (MLO/CC view) and AUC
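
    The naïve Bayes core of such a descriptor model is compact: multiply per-descriptor likelihoods into the prior and normalize. The conditional probabilities below are invented for illustration; MMassDx's tables are learned from case data.

        def posterior_malignant(findings, prior=0.3, cpt=None):
            """Naive Bayes post-test probability of malignancy given BI-RADS
            descriptors. The conditional probabilities below are invented for
            illustration -- MMassDx's tables come from training data."""
            cpt = cpt or {
                ("margin", "spiculated"): (0.40, 0.02),  # P(f|mal), P(f|benign)
                ("shape", "irregular"):   (0.55, 0.15),
                ("density", "high"):      (0.50, 0.25),
            }
            p_mal, p_ben = prior, 1.0 - prior
            for f in findings:
                l_mal, l_ben = cpt[f]
                p_mal *= l_mal
                p_ben *= l_ben
            return p_mal / (p_mal + p_ben)

        print(f"{posterior_malignant([('margin', 'spiculated'),
                                      ('shape', 'irregular')]):.2f}")  # -> 0.97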

  1. Efficiency using computer simulation of Reverse Threshold Model Theory on assessing a “One Laptop Per Child” computer versus desktop computer

    Directory of Open Access Journals (Sweden)

    Supat Faarungsang

    2017-04-01

    The Reverse Threshold Model Theory (RTMT) was introduced based on limiting-factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT compared to CM using computer simulation on the "One Laptop Per Child" computer and a desktop computer. Based on probability values, it was found that RTMT was more efficient than CM among eight treatment combinations, and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed to be applied to most research data.

  2. Model Persamaan Massa Karbon Akar Pohon dan Root-Shoot Ratio Massa Karbon (Equation Models of Tree Root Carbon Mass and Root-Shoot Carbon Mass Ratio)

    Directory of Open Access Journals (Sweden)

    Elias .

    2011-03-01

    The case study was conducted in an Acacia mangium plantation area at BKPH Parung Panjang, KPH Bogor. The objective of the study was to formulate equation models for tree root carbon mass and the root-shoot carbon mass ratio of the plantation. It was found that the carbon content of the different parts of tree biomass (stem, branches, twigs, leaves, and roots) differed, with the highest carbon content in the main stem and the lowest in the leaves. The main stem and leaves of the tree accounted for 70% of tree biomass. The root-shoot ratio of root biomass to aboveground tree biomass was 0.1443, and the root-shoot ratio of root biomass to main-stem biomass was 0.25771; 75% of tree carbon mass was in the main stem and roots. It was also found that the root-shoot ratio of root carbon mass to aboveground tree carbon mass was 0.1442, and the root-shoot ratio of root carbon mass to main-stem carbon mass was 0.2034. All allometric equation models of tree root carbon mass of A. mangium have a high goodness-of-fit, as indicated by their high adjusted R². Keywords: Acacia mangium, allometric, root-shoot ratio, biomass, carbon mass
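
    The reported ratios translate directly into simple estimators; the sketch below applies the abstract's root-to-aboveground (0.1442) and root-to-main-stem (0.2034) carbon-mass ratios to a hypothetical tree.

        # Root-shoot carbon-mass ratios reported in the abstract for A. mangium
        ROOT_TO_ABOVEGROUND_C = 0.1442
        ROOT_TO_MAIN_STEM_C = 0.2034

        def root_carbon_from_aboveground(c_aboveground_kg):
            """Estimate root carbon mass from aboveground carbon mass."""
            return ROOT_TO_ABOVEGROUND_C * c_aboveground_kg

        def root_carbon_from_stem(c_stem_kg):
            """Estimate root carbon mass from main-stem carbon mass."""
            return ROOT_TO_MAIN_STEM_C * c_stem_kg

        # e.g. a hypothetical tree holding 120 kg C above ground
        print(f"{root_carbon_from_aboveground(120.0):.1f} kg C in roots")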

  3. Computational model for transient studies of IRIS pressurizer behavior

    International Nuclear Information System (INIS)

    Rives Sanz, R.; Montesino Otero, M.E.; Gonzalez Mantecon, J.; Rojas Mazaira, L.

    2014-01-01

    The International Reactor Innovative and Secure (IRIS) excels over other Small Modular Reactor (SMR) designs due to its innovative safety characteristics. The integral pressurizer of IRIS allows a larger pressurizer system than in a conventional PWR without any additional cost, and its steam volume provides enough margin to avoid the spray requirement for mitigating in-surge transients. The aim of the present research is to model the dynamics of the IRIS pressurizer using the commercial finite-volume Computational Fluid Dynamics code CFX 14. A symmetric three-dimensional model equivalent to 1/8 of the total geometry was adopted to reduce mesh size and minimize processing time. The model considers the coexistence of three phases: liquid, steam, and vapor bubbles in the liquid volume. Additionally, it takes into account the heat losses between the pressurizer and the primary circuit. The relationships for interfacial mass, energy, and momentum transport are programmed and incorporated into CFX by using expressions in CFX Command Language (CCL) format. Moreover, several additional variables are defined to improve convergence and to allow monitoring of boron dilution sequences and the condensation-evaporation rate in different control volumes. For transient states, a non-equilibrium stratification in the pressurizer is considered. This paper discusses the model developed and the behavior of the system for representative transient sequences such as in/out-surge transients and boron dilution sequences. The results of the analyzed IRIS transients can be applied to the design of pressurizer internal structures and components. (author)

  4. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  5. Pentaquarks in chiral color dielectric model

    Indian Academy of Sciences (India)

    Recent experiments indicate that a narrow baryonic state having strangeness +1 and a mass of about 1540 MeV may exist. Such a state was predicted in the chiral model by Diakonov et al. In this work I compute the mass and width of this state in the chiral color dielectric model. I show that the computed width is about 30 MeV.

  6. BLACK HOLE-NEUTRON STAR MERGERS AND SHORT GAMMA-RAY BURSTS: A RELATIVISTIC TOY MODEL TO ESTIMATE THE MASS OF THE TORUS

    International Nuclear Information System (INIS)

    Pannarale, Francesco; Tonita, Aaryn; Rezzolla, Luciano

    2011-01-01

    The merger of a binary system composed of a black hole (BH) and a neutron star (NS) may leave behind a torus of hot, dense matter orbiting around the BH. While numerical-relativity simulations are necessary to simulate this process accurately, they are also computationally expensive and unable at present to cover the large space of possible parameters, which include the relative mass ratio, the stellar compactness, and the BH spin. To mitigate this and provide a first reasonable coverage of the space of parameters, we have developed a method for estimating the mass of the remnant torus from BH-NS mergers. The toy model makes use of an improved relativistic affine model to describe the tidal deformations of an extended tri-axial ellipsoid orbiting around a Kerr BH and measures the mass of the remnant torus by considering which of the fluid particles composing the star are on bound orbits at the time of the tidal disruption. We tune the toy model by using the results of fully general-relativistic simulations, obtaining relative precisions of a few percent, and use it to investigate the space of parameters extensively. In this way, we find that the torus mass is largest for systems with highly spinning BHs, small stellar compactnesses, and large mass ratios. As an example, tori as massive as M_b,tor ≅ 1.33 M_sun can be produced for a very extended star with compactness C ≅ 0.1 inspiralling around a BH with dimensionless spin parameter a = 0.85 and mass ratio q ≅ 0.3. However, for a more astrophysically reasonable mass ratio q ≅ 0.14 and a canonical value of the stellar compactness C ≅ 0.145, the toy model sets a considerably smaller upper limit of M_b,tor ∼ … M_sun.

  7. Lattice Boltzmann model capable of mesoscopic vorticity computation

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM), which is itself of only second-order accuracy in hydrodynamic velocity. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation-time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show that, with enough degrees of freedom and appropriate modifications, mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with a single relaxation parameter.
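    For context, the sketch below shows the standard mesoscopic recovery that the paper generalizes: on a simpler D2Q9 lattice with BGK relaxation, the strain-rate tensor follows from the second moment of the nonequilibrium distributions. Recovering vorticity (the antisymmetric part of the velocity gradient) requires the paper's extended D3Q27 moment construction, which is not reproduced here.

```python
import numpy as np

# D2Q9 lattice: velocities, weights, and the squared speed of sound.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def strain_rate(f, tau):
    """Strain-rate tensor at one node from the nonequilibrium populations
    (BGK, dt = 1): S = -Pi_neq / (2 rho cs^2 tau)."""
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    cu = c @ u
    feq = rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))
    pi_neq = np.einsum('i,ia,ib->ab', f - feq, c, c)
    return -pi_neq / (2.0 * rho * cs2 * tau)
```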

  8. Integrating Urban Infrastructure and Health System Impact Modeling for Disasters and Mass-Casualty Events

    Science.gov (United States)

    Balbus, J. M.; Kirsch, T.; Mitrani-Reiser, J.

    2017-12-01

    Over recent decades, natural disasters and mass-casualty events in the United States have repeatedly revealed the serious consequences of health care facility vulnerability for the ability to deliver care to affected people. Advances in predictive modeling and vulnerability assessment for health care facility failure, integrated infrastructure, and extreme weather events have now enabled a more rigorous scientific approach to evaluating health care system vulnerability and assessing the impacts of natural and human disasters, as well as the value of specific interventions. Concurrent advances in computing capacity also allow, for the first time, full integration of these multiple individual models, along with the modeling of population behaviors and mass-casualty responses during a disaster. A team of federal and academic investigators led by the National Center for Disaster Medicine and Public Health (NCDMPH) is developing a platform for integrating extreme event forecasts, health risk/impact assessment and population simulations, critical infrastructure (electrical, water, transportation, communication) impact and response models, health care facility-specific vulnerability and failure assessments, and health system/patient flow responses. The integration of these models is intended to develop a much greater understanding of critical tipping points in the vulnerability of health systems during natural and human disasters and to build an evidence base for specific interventions. Development of such a modeling platform will greatly facilitate the assessment of potential concurrent or sequential catastrophic events, such as a terrorism act following a severe heat wave or hurricane. This presentation will highlight the development of this modeling platform as well as applications not just for the US health system, but also for international science-based disaster risk reduction efforts, such as the Sendai Framework and the WHO SMART hospital project.

  9. A spatial estimation model for continuous rock mass characterization from the specific energy of a TBM

    Science.gov (United States)

    Exadaktylos, G.; Stavropoulou, M.; Xiroudakis, G.; de Broissia, M.; Schwarz, H.

    2008-12-01

    Basic principles of the theory of rock cutting with rolling disc cutters are used to appropriately reduce tunnel boring machine (TBM) logged data and compute the specific energy (SE) of rock cutting as a function of geometry of the cutterhead and operational parameters. A computational code written in Fortran 77 is used to perform Kriging predictions in a regular or irregular grid in 1D, 2D or 3D space based on sampled data referring to rock mass classification indices or TBM related parameters. This code is used here for three purposes, namely: (1) to filter raw data in order to establish a good correlation between SE and rock mass rating (RMR) (or tunnelling quality index Q) along the chainage of the tunnel, (2) to make prediction of RMR, Q or SE along the chainage of the tunnel from boreholes at the exploration phase and design stage of the tunnel, and (3) to make predictions of SE and RMR or Q ahead of the tunnel’s face during excavation of the tunnel based on SE estimations during excavation. The above tools are the basic constituents of an algorithm to continuously update the geotechnical model of the rock mass based on logged TBM data. Several cases were considered to illustrate the proposed methodology, namely: (a) data from a system of twin tunnels in Hong Kong, (b) data from three tunnels excavated in Northern Italy, and (c) data from the section Singuerlin-Esglesias of the Metro L9 tunnel in Barcelona.
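    A minimal sketch of the kind of Kriging prediction described in purpose (2): ordinary kriging of RMR along tunnel chainage, assuming an exponential covariance with illustrative sill and range (the authors' Fortran 77 code fits variograms to logged data and also handles 2D and 3D grids).

```python
import numpy as np

def ordinary_kriging_1d(x_obs, z_obs, x_new, sill=1.0, rng=50.0):
    """Ordinary kriging along chainage with covariance C(h) = sill*exp(-|h|/rng).
    Variogram parameters are illustrative and would normally be fitted."""
    cov = lambda h: sill * np.exp(-np.abs(h) / rng)
    n = len(x_obs)
    K = np.ones((n + 1, n + 1))          # kriging system with Lagrange row/col
    K[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(x_new - x_obs)
    weights = np.linalg.solve(K, k)      # last entry is the Lagrange multiplier
    return weights[:n] @ z_obs

chainage = np.array([0.0, 40.0, 90.0, 140.0])   # borehole positions (m)
rmr = np.array([55.0, 62.0, 48.0, 58.0])        # sampled RMR (hypothetical)
print(ordinary_kriging_1d(chainage, rmr, 70.0)) # predicted RMR at 70 m
```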

  10. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

    Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters, and the construction parameters are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated, and not well understood. Some attempts have been made to formulate the relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters, and the construction parameters, with the help of two soft computing techniques, namely genetic programming (GP, using the software “Eureqa”) and an artificial neural network (ANN). The relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The value of R for the relationship between the predicted and real cooling times is 0.8822 for the GP model and 0.9146 for the ANN model. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameters influence the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared to the measured data.
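    A sketch of the ANN half of such a study using scikit-learn's MLPRegressor on synthetic stand-in data; the input variables, response surface, and network size are assumptions for illustration, not the dam records or the authors' trained network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: [placement temp (C), pipe spacing (m),
# cooling-water temp (C), lift thickness (m)] -> initial cooling time (days).
# The response surface is invented; real inputs would come from dam records.
rng = np.random.default_rng(0)
X = rng.uniform([20, 1.0, 8, 1.5], [30, 2.0, 16, 3.0], size=(200, 4))
y = (2.0 + 0.4*X[:, 0] + 6.0*X[:, 1] + 0.3*X[:, 2] + 3.0*X[:, 3]
     + rng.normal(0, 0.5, 200))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]
print(f"R between predicted and 'measured' cooling time: {r:.3f}")
```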

  11. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    Science.gov (United States)

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour

  12. Children, computer exposure and musculoskeletal outcomes: the development of pathway models for school and home computer-related musculoskeletal outcomes.

    Science.gov (United States)

    Harris, Courtenay; Straker, Leon; Pollock, Clare; Smith, Anne

    2015-01-01

    Children's computer use is rapidly growing, together with reports of related musculoskeletal outcomes. Models and theories developed for adults demonstrate that multivariate risk factors are associated with computer use. Children's use of computers differs from adults' computer use at work. This study developed and tested a child-specific model demonstrating multivariate relationships between musculoskeletal outcomes, computer exposure, and child factors. Using pathway modelling, factors such as gender, age, television exposure, computer anxiety, sustained attention (flow), socio-economic status, and somatic complaints (headache and stomach pain) were found to have effects on children's reports of musculoskeletal symptoms. The potential for children's computer exposure to follow a dose-response relationship was also evident. Developing a child-related model can assist in understanding risk factors for children's computer use and support the development of recommendations to encourage children to use this valuable resource in educational, recreational, and communication environments in a safe and productive manner. Computer use is an important part of children's school and home life. Application of this model, which encapsulates the related risk factors, enables practitioners, researchers, teachers, and parents to develop strategies that assist young people to use information technology for school, home, and leisure in a safe and productive manner.

  13. Impact of mass generation for spin-1 mediator simplified models

    International Nuclear Information System (INIS)

    Bell, Nicole F.; Cai, Yi; Leane, Rebecca K.

    2017-01-01

    In the simplified dark matter models commonly studied, the mass generation mechanism for the dark fields is not typically specified. We demonstrate that the dark matter interaction types, and hence the annihilation processes relevant for relic density and indirect detection, are strongly dictated by the mass generation mechanism chosen for the dark sector particles, and the requirement of gauge invariance. We focus on the class of models in which fermionic dark matter couples to a spin-1 vector or axial-vector mediator. However, in order to generate dark sector mass terms, it is necessary in most cases to introduce a dark Higgs field and thus a spin-0 scalar mediator will also be present. In the case that all the dark sector fields gain masses via coupling to a single dark sector Higgs field, it is mandatory that the axial-vector coupling of the spin-1 mediator to the dark matter is non-zero; the vector coupling may also be present depending on the charge assignments. For all other mass generation options, only pure vector couplings between the spin-1 mediator and the dark matter are allowed. If these coupling restrictions are not obeyed, unphysical results may be obtained such as a violation of unitarity at high energies. These two-mediator scenarios lead to important phenomenology that does not arise in single mediator models. We survey two-mediator dark matter models which contain both vector and scalar mediators, and explore their relic density and indirect detection phenomenology.

  14. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties
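    The essence of such predictive modeling is a best-estimate combination of computed and measured responses weighted by their inverse variances, which necessarily yields a smaller predicted uncertainty than either source alone. A scalar illustration with made-up numbers:

```python
# Scalar illustration of a best-estimate update: combine a computed and a
# measured response with inverse-variance weights (numbers are made up).
t_cfd, var_cfd = 640.0, 25.0     # computed sodium temperature (K), variance
t_meas, var_meas = 628.0, 16.0   # measured temperature (K), variance

w = var_meas / (var_cfd + var_meas)              # weight on the computation
t_best = w * t_cfd + (1.0 - w) * t_meas
var_best = var_cfd * var_meas / (var_cfd + var_meas)

print(t_best, var_best)   # ~632.7 K with variance ~9.8 < min(25, 16)
```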

  15. Constraints on constituent quark masses from potential models

    International Nuclear Information System (INIS)

    Silvestre-Brac, B.

    1998-01-01

    Starting from reasonable hypotheses, the magnetic moments of the baryons are revisited in the light of general space wave functions. They make it possible to put very severe bounds on the quark masses as derived from the usual potential models. The experimental situation cannot be explained in the framework of such models. (author)

  16. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  17. An algebraic model for quark mass matrices with heavy top

    International Nuclear Information System (INIS)

    Krolikowski, W.; Warsaw Univ.

    1991-01-01

    In terms of an intergeneration U(3) algebra, a numerical model is constructed for quark mass matrices, predicting a top-quark mass around 170 GeV and a CP-violating phase around 75 deg. The CKM matrix is nonsymmetric in moduli, with |V_ub| being very small. All moduli are consistent with their experimental limits. The model is motivated by the author's previous work on three replicas of the Dirac particle, presumably resulting in three generations of leptons and quarks. The paper may also be viewed as an introduction to a new method of intrinsic dynamical description of lepton and quark mass matrices. (author)

  18. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    Science.gov (United States)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  19. Study on the constitutive model for jointed rock mass.

    Directory of Open Access Journals (Sweden)

    Qiang Xu

    Full Text Available A new elasto-plastic constitutive model for jointed rock mass, which can consider the persistence ratio at different visual angles and the anisotropic increase of plastic strain, is proposed. The proposed yield strength criterion, which is anisotropic, is related not only to the friction angle and cohesion of the jointed rock mass at the visual angle but also to the intersection angle between the visual angle and the directions of the principal stresses. Numerical examples are given to analyze and verify the proposed constitutive model. The results show that the proposed constitutive model has high precision in calculating displacement, stress, and plastic strain and can be applied in engineering analysis.

  20. CO2 Mass transfer model for carbonic anhydrase-enhanced aqueous MDEA solutions

    DEFF Research Database (Denmark)

    Gladis, Arne Berthold; Deslauriers, Maria Gundersen; Neerup, Randi

    2018-01-01

    In this study a CO2 mass transfer model was developed for carbonic anhydrase-enhanced MDEA solutions based on a mechanistic kinetic enzyme model. Four different enzyme models were compared in their ability to predict the liquid side mass transfer coefficient at temperatures in the range of 298...

  1. Lumped Mass Modeling for Local-Mode-Suppressed Element Connectivity

    DEFF Research Database (Denmark)

    Joung, Young Soo; Yoon, Gil Ho; Kim, Yoon Young

    2005-01-01

    … element connectivity parameterization (ECP) is employed. On the way to the ultimate crashworthy structure optimization, we are now developing a local mode-free topology optimization formulation that can be implemented in the ECP method. In fact, the local mode-freeing strategy developed here can also be used directly … experiencing large structural changes, appears to be still poor. In ECP, the nodes of the domain-discretizing elements are connected by zero-length one-dimensional elastic links having varying stiffness. For computational efficiency, every elastic link is now assumed to have two lumped masses at its ends. … Choosing appropriate penalization functions for lumped mass and link stiffness is important for local mode-free results. However, unless the objective and constraint functions are carefully selected, it is difficult to obtain clear black-and-white results. It is shown that the present formulation is also …

  2. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop

  3. Computational Models for Calcium-Mediated Astrocyte Functions.

    Science.gov (United States)

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus
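    As a flavor of the biophysical calcium descriptions these surveyed models build on, below is a minimal Li-Rinzel-type two-variable sketch of IP3R-mediated calcium oscillations; the parameter values are generic textbook-style numbers, not taken from any specific surveyed model.

```python
import numpy as np

# Two-variable Li-Rinzel-type model of IP3R-mediated Ca2+ release
# (closed-cell formulation). Parameters are generic textbook-style numbers.
v1, v2, v3 = 6.0, 0.11, 0.9                  # channel/leak/pump rates
c0, c1, k3 = 2.0, 0.185, 0.1                 # total Ca (uM), volume ratio, pump K
d1, d2, d3, d5, a2 = 0.13, 1.049, 0.9434, 0.08234, 0.2
ip3 = 0.4                                    # clamped IP3 (uM), oscillatory regime

def rhs(ca, h):
    ca_er = (c0 - ca) / c1                   # ER calcium from mass conservation
    m = ip3 / (ip3 + d1)
    n = ca / (ca + d5)
    j_chan = c1 * v1 * (m * n * h)**3 * (ca_er - ca)   # CICR through IP3R
    j_leak = c1 * v2 * (ca_er - ca)
    j_pump = v3 * ca**2 / (k3**2 + ca**2)              # SERCA uptake
    q2 = d2 * (ip3 + d1) / (ip3 + d3)
    dh = a2 * (q2 * (1.0 - h) - ca * h)                # slow IP3R inactivation
    return j_chan + j_leak - j_pump, dh

ca, h, dt = 0.1, 0.6, 1e-3
trace = []
for _ in range(int(120 / dt)):               # 120 s of forward-Euler integration
    dca, dh = rhs(ca, h)
    ca, h = ca + dt * dca, h + dt * dh
    trace.append(ca)
print(f"Ca oscillation range: {min(trace):.3f}-{max(trace):.3f} uM")
```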

  4. The use of conduction model in laser weld profile computation

    Science.gov (United States)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep-penetration laser beam welding of a flat carbon steel workpiece were computed. A semi-analytical conduction model, solved with the Green's function method, was used in the computations. In the model, the moving heat source is attenuated exponentially with depth in accordance with the Beer-Lambert law. Computational results were compared with experimental ones.
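    A sketch in the spirit of the described model: superposing moving point-source (Rosenthal-type) conduction solutions distributed over depth with Beer-Lambert weights. Material and process values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Superpose Rosenthal-type moving point sources distributed over depth with
# Beer-Lambert weights exp(-beta*z). All values are illustrative assumptions.
k, alpha = 45.0, 1.2e-5      # conductivity (W/m/K), diffusivity (m^2/s)
Q, v = 2000.0, 0.01          # absorbed power (W), travel speed (m/s)
beta, depth = 800.0, 5e-3    # attenuation coefficient (1/m), source depth (m)
T0 = 293.0                   # ambient temperature (K)

zs = np.linspace(0.0, depth, 60)             # source elements along depth
wgt = np.exp(-beta * zs)
wgt /= wgt.sum()                             # Beer-Lambert power fractions

def temperature(x, y, z):
    """Quasi-steady T at (x, y, z) in source-fixed coordinates (x = travel)."""
    R = np.sqrt(x**2 + y**2 + (z - zs)**2) + 1e-9
    return T0 + np.sum(wgt * Q / (2*np.pi*k*R) * np.exp(-v*(x + R)/(2*alpha)))

# Estimate the fusion-line half-width at mid-depth by scanning outwards in y.
for y in np.linspace(0.0, 3e-3, 31):
    if temperature(0.0, y, 2e-3) < 1790.0:   # approx. melting point of steel (K)
        print(f"weld half-width at mid-depth ~ {y*1e3:.2f} mm")
        break
```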

  5. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters

    Directory of Open Access Journals (Sweden)

    Pietro Cipresso

    2017-08-01

    Full Text Available Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  6. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' makes it possible to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' makes it possible to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
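    For the classical scalar-input case that the paper extends, a first-order Sobol index can be estimated with the standard pick-freeze (Saltelli) scheme; the toy Ishigami-type function below stands in for an expensive computer code.

```python
import numpy as np

rng = np.random.default_rng(1)

def code(x):  # toy stand-in for an expensive computer code (Ishigami-type)
    return (np.sin(x[:, 0]) + 7.0*np.sin(x[:, 1])**2
            + 0.1*x[:, 2]**4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA = code(A)
var_y = yA.var()

for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]        # 'freeze' input i, resample all the others
    yCi = code(Ci)
    Si = (np.mean(yA * yCi) - yA.mean() * yCi.mean()) / var_y
    print(f"first-order Sobol index S_{i+1} ~ {Si:.3f}")
```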

  7. Octet baryon mass splittings from up-down quark mass differences

    Energy Technology Data Exchange (ETDEWEB)

    Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Najjar, J. [Regensburg Univ. (Germany). Institut fuer Theoretische Physik; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Regensburg Univ. (Germany). Institut fuer Theoretische Physik; Juelich Research Centre, Juelich (Germany); Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Div.; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Zanotti, J.M. [Adelaide Univ., SA (Australia). CSSM, School of Chemistry and Physics; Collaboration: QCDSF-UKQCD Collaboration

    2012-12-15

    Using an SU(3) flavour symmetry breaking expansion in the quark mass, we determine the QCD component of the neutron-proton, Sigma, and Xi mass splittings of the baryon octet due to up-down (and strange) quark mass differences. Provided the average quark mass is kept constant, the expansion coefficients in our procedure can be determined from computationally cheaper simulations with mass-degenerate sea quarks and partially quenched valence quarks.

  8. Computer modeling of road bridge for simulation moving load

    Directory of Open Access Journals (Sweden)

    Miličić Ilija M.

    2016-01-01

    Full Text Available This paper presents the computational modelling of a single-span road truss bridge with the roadway on the upper chord. The calculation models were treated as planar and spatial girders made up of 1D finite elements, using the CAA applications Tower and Bridge Designer 2016 (2nd Edition). The computer simulations compare the effects of a moving load according to the recommendations of two standards, SRPS and AASHTO. The bridge structure was modelled in Bridge Designer 2016 (2nd Edition) identically to the model built in the Tower environment. An important consideration for the selection of a computer application is that Bridge Designer 2016 (2nd Edition) is unable to treat the moving-load model required by the national standard V600.
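    A minimal stand-in for such moving-load checks: the bending-moment envelope of a single axle crossing a simply supported span. The span, load, and discretization below are illustrative; real code comparisons use the standards' full load models.

```python
import numpy as np

# Bending-moment envelope of one axle load crossing a simply supported span.
L, P = 30.0, 100.0                    # span (m), axle load (kN), illustrative
xs = np.linspace(0.0, L, 121)         # sections along the span
envelope = np.zeros_like(xs)

for a in np.linspace(0.0, L, 601):    # successive load positions
    r_left = P * (L - a) / L          # left support reaction
    M = np.where(xs <= a, r_left * xs, r_left * xs - P * (xs - a))
    envelope = np.maximum(envelope, M)

print(f"max moment ~ {envelope.max():.1f} kNm (theory P*L/4 = {P*L/4:.1f})")
```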

  9. Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities

    Science.gov (United States)

    Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian

    2018-01-01

    In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…

  10. Computer-aided modeling for efficient and innovative product-process engineering

    DEFF Research Database (Denmark)

    Heitzig, Martina

    Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging, time-consuming and therefore cost… Case studies in chemical and biochemical engineering have been solved to illustrate the application of the generic modelling methodology, the computer-aided modelling framework and the developed software tool.

  11. Mass distribution of Earth landforms determined by aspects of the geopotential as computed from the global gravity field model EGM 2008

    Czech Academy of Sciences Publication Activity Database

    Kalvoda, J.; Klokočník, Jaroslav; Kostelecký, J.; Bezděk, Aleš

    2013-01-01

    Roč. 48, č. 2 (2013), s. 17-25 ISSN 0300-5402 R&D Projects: GA ČR GA13-36843S Grant - others:GA ČR(EE) GCP209/12/J068; ESA(XE) ESA- PECS project No. C98056 Institutional support: RVO:67985815 Keywords : Earth landforms * gravity field model * mass distribution Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  12. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  13. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    International Nuclear Information System (INIS)

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.

    2009-01-01

    A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  14. Modeling and experimental verification of proof mass effects on vibration energy harvester performance

    International Nuclear Information System (INIS)

    Kim, Miso; Hoegen, Mathias; Dugundji, John; Wardle, Brian L

    2010-01-01

    An electromechanically coupled model for a cantilevered piezoelectric energy harvester with a proof mass is presented. Proof masses are essential in microscale devices to move device resonances towards optimal frequency points for harvesting. Such devices with proof masses have not been rigorously modeled previously; instead, lumped mass or concentrated point masses at arbitrary points on the beam have been used. Thus, this work focuses on the exact vibration analysis of cantilevered energy harvester devices including a tip proof mass. The model is based not only on a detailed modal analysis, but also on a thorough investigation of damping ratios that can significantly affect device performance. A model with multiple degrees of freedom is developed and then reduced to a single-mode model, yielding convenient closed-form normalized predictions of device performance. In order to verify the analytical model, experimental tests are undertaken on a macroscale, symmetric, bimorph, piezoelectric energy harvester with proof masses of different geometries. The model accurately captures all aspects of the measured response, including the location of peak-power operating points at resonance and anti-resonance, and trends such as the dependence of the maximal power harvested on the frequency. It is observed that even a small change in proof mass geometry results in a substantial change of device performance due not only to the frequency shift, but also to the effect on the strain distribution along the device length. Future work will include the optimal design of devices for various applications, and quantification of the importance of nonlinearities (structural and piezoelectric coupling) for device performance

  15. Coupling of climate models and ice sheet models by surface mass balance gradients: application to the Greenland Ice Sheet

    Directory of Open Access Journals (Sweden)

    M. M. Helsen

    2012-03-01

    Full Text Available It is notoriously difficult to couple surface mass balance (SMB) results from climate models to the changing geometry of an ice sheet model. This problem is traditionally avoided by using only accumulation from a climate model and parameterizing the meltwater run-off as a function of temperature, which is often related to surface elevation (Hs). In this study, we propose a new strategy to calculate SMB that allows a direct adjustment of SMB to a change in ice sheet topography and/or a change in climate forcing. This method is based on elevational gradients in the SMB field as computed by a regional climate model. Separate linear relations are derived for ablation and accumulation, using pairs of Hs and SMB within a minimum search radius. The continuously adjusting SMB forcing is consistent with climate model forcing fields, also for initially non-glaciated areas in the peripheral areas of an ice sheet. When applied to an asynchronously coupled ice sheet – climate model setup, this method circumvents traditional temperature lapse rate assumptions. Here we apply it to the Greenland Ice Sheet (GrIS). Experiments using both steady-state forcing and glacial-interglacial forcing result in realistic ice sheet reconstructions.
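    A toy version of the proposed strategy: fit a local linear SMB(Hs) relation from neighbouring climate-model points and re-evaluate SMB at the ice-sheet model's updated surface elevation. For brevity the neighbourhood here is selected by elevation proximity rather than a spatial search radius, and the RCM output is synthetic.

```python
import numpy as np

# Synthetic stand-in for RCM output: elevation (m) and SMB (m w.e./yr).
rng = np.random.default_rng(2)
hs = rng.uniform(0.0, 2500.0, 400)
smb = -2.0 + 2.2e-3 * hs + rng.normal(0.0, 0.15, 400)

def local_smb_relation(h0, radius=600.0):
    """Linear SMB(Hs) fit over neighbours with |Hs - h0| < radius."""
    mask = np.abs(hs - h0) < radius
    slope, intercept = np.polyfit(hs[mask], smb[mask], 1)
    return intercept, slope

intercept, slope = local_smb_relation(800.0)
new_hs = 650.0                        # updated ice-sheet surface elevation (m)
print(f"adjusted SMB ~ {intercept + slope*new_hs:.2f} m w.e./yr")
```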

  16. Computational fluid dynamic applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to simplify and calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.

  17. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    Science.gov (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational fluid dynamics only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  18. Water-Exit Process Modeling and Added-Mass Calculation of the Submarine-Launched Missile

    Directory of Open Access Journals (Sweden)

    Yang Jian

    2017-11-01

    Full Text Available As a submarine-launched missile exits the water, a complex fluid-solid coupling phenomenon occurs, which makes it difficult to establish an accurate water-exit dynamic model. In this paper, according to the characteristics of the water-exit motion and based on the traditional added-mass method, a water-exit dynamic model is established that takes the rate of change of the added mass into account. With the help of the CFX fluid simulation software, a new method of calculating the added mass suited to submarine-launched missiles is proposed, which effectively solves the fluid-solid coupling problem in the modeling process. Using this new method, the variation of the added mass during the water-exit process of the missile is obtained. In the simulation analysis of the water-exit process, comparing the results of the numerical simulation and the theoretical model confirms the effectiveness of the new added-mass calculation method and the accuracy of the water-exit dynamic model that considers the rate of change of the added mass.
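    A sketch of the role played by the added-mass changing rate: with d[(m + m_a)v]/dt = F, the term (dm_a/dt)v enters the momentum balance as an extra force. The depth dependence of m_a and all numbers below are illustrative placeholders; in the paper m_a comes from CFX simulations.

```python
import numpy as np

m, F = 15000.0, 4.0e5        # missile mass (kg), net upward force (N), made up
ma0, z_ref = 6000.0, 10.0    # deep-water added mass (kg), decay scale (m)

def ma(z):                   # added mass fades as the body nears the surface
    return ma0 * (1.0 - np.exp(-max(z, 0.0) / z_ref))

def dma_dt(z, v):            # chain rule: dm_a/dt = (dm_a/dz)*dz/dt, dz/dt = -v
    dma_dz = (ma(z + 1e-4) - ma(z - 1e-4)) / 2e-4
    return -dma_dz * v

z, v, dt = 30.0, 5.0, 1e-3   # initial depth (m), speed (m/s), time step (s)
while z > 0.0:               # (m + m_a) dv/dt = F - (dm_a/dt) v
    dvdt = (F - dma_dt(z, v) * v) / (m + ma(z))
    v += dt * dvdt
    z -= dt * v              # depth decreases as the missile rises
print(f"water-exit speed ~ {v:.1f} m/s")
```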

  19. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations
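    The core selection logic reduces to taking the largest of the limiting fuel masses; the sketch below illustrates this with trivial placeholder limit formulas standing in for RSMASS's approximations.

```python
# RSMASS-style selection: the fuel mass is the largest of the limiting masses.
# The three limit formulas are trivial placeholders standing in for RSMASS's
# approximations, which use the reactor power, energy, and 24 parameters.
def fuel_mass(power_mw, energy_mwd):
    m_neutronics = 50.0               # critical-mass floor (kg), placeholder
    m_thermal = 8.0 * power_mw        # heat-removal limit (kg/MW), placeholder
    m_damage = 0.04 * energy_mwd      # burnup-life limit (kg/MWd), placeholder
    return max(m_neutronics, m_thermal, m_damage)

print(fuel_mass(power_mw=2.0, energy_mwd=5000.0))  # damage-limited: 200 kg
```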

  20. Research of connection between mass audience and new media. Approaches to new model of mass communication measurement

    OpenAIRE

    Sibiriakova Olena Oleksandrivna

    2015-01-01

    In this study the author examines changing approaches to the observation of mass communication. By systematizing the key theoretical models of communication, the author concludes that ideas about measuring mass communication have evolved from linear models to multisided, multiple ones.