WorldWideScience

Sample records for mass computation model

  1. A Generative Computer Model for Preliminary Design of Mass Housing

    Directory of Open Access Journals (Sweden)

    Ahmet Emre DİNÇER

    2014-05-01

    Full Text Available Today, we live in what we call the “Information Age”, an age in which information technologies are constantly being renewed and developed. Out of this has emerged a new approach called “Computational Design” or “Digital Design”. In addition to significantly influencing all fields of engineering, this approach has come to play a similar role in all stages of the design process in the architectural field. Besides providing solutions for analytical problems in design, such as cost estimation and the evaluation of circulation systems and environmental effects, which are similar to engineering problems, this approach is used in the evaluation, representation and presentation of traditionally designed buildings. With developments in software and hardware technology, it has evolved into studies on the design of architectural products and production implementations, with digital tools used in the preliminary design stages. This paper presents a digital model which may be used in the preliminary stage of mass housing design with Cellular Automata, one of the generative design systems based on computational design approaches. This computational model, developed with scripts in 3ds Max software, has been applied to a site plan design for mass housing, to floor plan organizations driven by user preferences, and to facade designs. With the developed computer model, many alternative housing types can be produced rapidly. The interactive design tool of this computational model allows the user to transfer dimensional and functional housing preferences by means of the interface prepared for the model. The results of the study are discussed in the light of innovative architectural approaches.
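
    A minimal sketch of the generative mechanism described above (a two-dimensional cellular automaton whose cells switch between empty and "housing unit" states according to neighbour counts), not the authors' 3ds Max scripts; the grid size, rule thresholds and random seed are assumptions for illustration:

      import numpy as np

      def neighbours(grid):
          """Count occupied neighbours of every cell (Moore neighbourhood, wrapping)."""
          n = np.zeros_like(grid)
          for dy in (-1, 0, 1):
              for dx in (-1, 0, 1):
                  if dy or dx:
                      n += np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
          return n

      def generate_layout(size=20, steps=10, seed=0):
          """Evolve a random initial plan into a clustered housing layout."""
          rng = np.random.default_rng(seed)
          grid = (rng.random((size, size)) < 0.3).astype(int)   # sparse start
          for _ in range(steps):
              n = neighbours(grid)
              born = (grid == 0) & (n == 3)             # infill next to clusters
              keep = (grid == 1) & (n >= 2) & (n <= 5)  # drop isolated/overdense units
              grid = (born | keep).astype(int)
          return grid

      for row in generate_layout():
          print("".join("#" if c else "." for c in row))

    Re-running with a different seed produces a distinct but comparable layout, mirroring the rapid generation of alternative housing types reported above.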

  2. Computational Modelling of the Structural Integrity following Mass-Loss in Polymeric Charred Cellular Solids

    OpenAIRE

    J. P. M. Whitty; J. Francis; J. Howe; B. Henderson

    2014-01-01

    A novel computational technique is presented for embedding mass-loss due to burning into the ANSYS finite element modelling code. The approaches employ a range of computational modelling methods in order to provide a more complete theoretical treatment of thermoelasticity, which has been absent from the literature for over six decades. Techniques are employed to evaluate the structural integrity (namely, elastic moduli, Poisson’s ratios, and compressive brittle strength) of honeycomb systems known to approximate t...

  3. A computational model to generate simulated three-dimensional breast masses

    Energy Technology Data Exchange (ETDEWEB)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N., E-mail: wernick@iit.edu [Medical Imaging Research Center, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois 60616 (United States); Schmidt, Robert A. [Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Nishikawa, Robert M. [Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213 (United States)

    2015-02-15

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and
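
    A simplified stand-in for the central-mass step (the fractal spicule stage is omitted): the radius of a sphere is modulated by a random low-order spherical-harmonic series, one common way to realize a Gaussian-random-sphere-like shape. The degree cutoff and amplitude decay below are assumptions, not the authors' fitted parameters:

      import numpy as np
      from scipy.special import sph_harm

      def random_blob(n_theta=60, n_phi=120, l_max=6, sigma=0.15, seed=1):
          """Radius field r(theta, phi) of a randomly perturbed sphere."""
          rng = np.random.default_rng(seed)
          theta = np.linspace(0.0, np.pi, n_theta)      # polar angle
          phi = np.linspace(0.0, 2.0 * np.pi, n_phi)    # azimuth
          PHI, THETA = np.meshgrid(phi, theta)
          s = np.zeros_like(THETA)
          for l in range(1, l_max + 1):
              for m in range(-l, l + 1):
                  a = rng.normal(0.0, sigma / l**2)     # power falls with degree
                  # scipy convention: sph_harm(m, l, azimuthal, polar)
                  s += a * np.real(sph_harm(m, l, PHI, THETA))
          return np.exp(s)   # log-radius perturbation keeps the radius positive

      r = random_blob()
      print("relative radius min/max:", r.min().round(3), r.max().round(3))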

  4. A computational model to generate simulated three-dimensional breast masses

    International Nuclear Information System (INIS)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N.; Schmidt, Robert A.; Nishikawa, Robert M.

    2015-01-01

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and

  5. Structural characterisation of medically relevant protein assemblies by integrating mass spectrometry with computational modelling.

    Science.gov (United States)

    Politis, Argyris; Schmidt, Carla

    2018-03-20

    Structural mass spectrometry, with its various techniques, is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly, it can deal with heterogeneous mixtures and assemblies, which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enables structural models at molecular levels of resolution. These models hold significant potential for helping us characterize the function of protein assemblies related to human health and disease. We summarise the techniques of structural mass spectrometry most often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from the literature that have helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction to the various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least, we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    Science.gov (United States)

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur: two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  7. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    Directory of Open Access Journals (Sweden)

    Karl T Bates

    Full Text Available Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur: two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize

  8. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  9. Computer programs for the numerical modelling of water flow in rock masses

    International Nuclear Information System (INIS)

    Croney, P.; Richards, L.R.

    1985-08-01

    Water flow in rock joints provides a very important potential route for the migration of radionuclides from radioactive waste within a repository back to the biosphere. Two computer programs, DAPHNE and FPM, have been developed to model two-dimensional fluid flow in jointed rock masses. They have been developed to run on microcomputer systems suitable for field locations. The fluid flows in a number of jointed rock systems have been examined and certain controlling functions identified. A methodology has been developed for assessing the anisotropic permeability of jointed rock. A number of examples of unconfined flow into surface and underground openings have been analysed, and groundwater lowering, pore water pressures and flow quantities predicted. (author)
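
    The record does not state the flow law used by DAPHNE and FPM, but joint-flow codes of this kind conventionally build on the parallel-plate ("cubic law") idealization, in which the flow per unit joint width scales with the cube of the aperture; a sketch of that standard relation:

      # Laminar flow between smooth parallel plates (cubic law):
      #   q = (a**3 / (12 * mu)) * (dp / L)   [m^2/s per unit joint width]
      def cubic_law_flow(aperture, dp, length, mu=1.0e-3):
          """Flow rate per unit width through a single rock joint.

          aperture : joint opening in m
          dp       : pressure drop along the joint in Pa
          length   : joint length in m
          mu       : dynamic viscosity of water in Pa*s
          """
          return aperture**3 * dp / (12.0 * mu * length)

      # Doubling the aperture multiplies the flow by 8 -- the reason a few
      # wide joints dominate the permeability of a jointed rock mass.
      print(cubic_law_flow(1e-4, 1e4, 10.0))   # 0.1 mm aperture
      print(cubic_law_flow(2e-4, 1e4, 10.0))   # twice the aperture, 8x the flow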

  10. Heat and mass transfer during the cryopreservation of a bioartificial liver device: a computational model.

    Science.gov (United States)

    Balasubramanian, Saravana K; Coger, Robin N

    2005-01-01

    Bioartificial liver devices (BALs) have proven to be an effective bridge to transplantation for cases of acute liver failure. Enabling the long-term storage of these devices using a method such as cryopreservation will ensure their easy off-the-shelf availability. To date, cryopreservation of liver cells has been attempted for both single cells and sandwich cultures. This study presents the potential of using computational modeling to help develop a cryopreservation protocol for storing the three-dimensional BAL HepatAssist. The focus is upon determining the thermal and concentration profiles as the BAL is cooled from 37 °C to −100 °C, which is completed in two steps: a cryoprotectant loading step and a phase change step. The results indicate that, for the loading step, mass transfer controls the duration of the protocol, whereas for the phase change step, when mass transfer is assumed negligible, the latent heat released during freezing is the controlling factor. The cryoprotocol that is ultimately proposed considers time, cooling rate, and the temperature gradients that the cellular space is exposed to during cooling. To our knowledge, this study is the first reported effort toward designing an effective protocol for the cryopreservation of a three-dimensional BAL device.
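
    A much-reduced sketch of the phase-change step: one-dimensional transient conduction solved with an explicit finite-difference scheme, with the latent heat of freezing folded into an apparent heat capacity over an assumed mushy interval. The geometry and material constants are placeholders, not HepatAssist values:

      import numpy as np

      L_ICE = 334e3                      # latent heat of fusion, J/kg (water, assumed)
      CP, RHO, K = 3600.0, 1000.0, 0.5   # J/(kg K), kg/m^3, W/(m K) -- placeholders
      T_HI, T_LO = -0.5, -5.0            # assumed mushy (freezing) interval, deg C

      def apparent_cp(T):
          """Fold latent heat release into an effective heat capacity."""
          cp = np.full_like(T, CP)
          mushy = (T > T_LO) & (T < T_HI)
          cp[mushy] += L_ICE / (T_HI - T_LO)
          return cp

      def cool(n=50, thickness=0.01, t_end=1800.0, t_wall=-100.0):
          """Cool a slab from 37 C with both faces held at t_wall."""
          dx = thickness / (n - 1)
          T = np.full(n, 37.0)
          dt = 0.2 * RHO * CP * dx**2 / K        # well under the stability limit
          for _ in range(int(t_end / dt)):
              T[0] = T[-1] = t_wall
              alpha = K / (RHO * apparent_cp(T))
              T[1:-1] += dt * alpha[1:-1] * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
              # progress through the mushy zone is slowed by the latent-heat term
          return T

      print(cool().round(1))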

  11. Computing Mass Properties From AutoCAD

    Science.gov (United States)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
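
    The bookkeeping ACTOMP automates is ordinary composite-body arithmetic; a sketch assuming each element is reduced to a mass, a centroid and its own inertia about that centroid (the real program also works from the drawing geometry):

      # Composite mass properties from simple elements: total mass, centre of
      # mass, and moment of inertia about a z-axis through the composite COM
      # (parallel-axis theorem). Elements are (mass, x, y, z, Iz_own) tuples.
      def mass_properties(elements):
          m_tot = sum(e[0] for e in elements)
          cx = sum(e[0] * e[1] for e in elements) / m_tot
          cy = sum(e[0] * e[2] for e in elements) / m_tot
          cz = sum(e[0] * e[3] for e in elements) / m_tot
          iz = sum(e[4] + e[0] * ((e[1] - cx) ** 2 + (e[2] - cy) ** 2)
                   for e in elements)
          return m_tot, (cx, cy, cz), iz

      # Two beams of a toy truss, each with its own inertia about its centroid.
      elems = [(12.0, 0.0, 0.0, 0.0, 1.6), (8.0, 2.0, 1.0, 0.0, 0.9)]
      m, com, iz = mass_properties(elems)
      print(m, com, iz)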

  12. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    Science.gov (United States)

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  13. On the modelling of turbulent heat and mass transfer for the computation of buoyancy affected flows

    International Nuclear Information System (INIS)

    Viollet, P.-L.

    1981-02-01

    The k-epsilon eddy viscosity turbulence model is applied to simple test cases of buoyant flows. Vertical as well as horizontal stable flows are reasonably well represented by the computation, whereas in unstable flows the mixing is underpredicted. The general agreement is good enough to allow application to thermal-fluid engineering problems.

  14. Modelling of Mass Transfer Phenomena in Chemical and Biochemical Reactor Systems using Computational Fluid Dynamics

    DEFF Research Database (Denmark)

    Larsson, Hilde Kristina

    Computational fluid dynamics (CFD) is used to model the velocity and pressure distributions in a fluid. CFD also enables the modelling of several fluids simultaneously, e.g. gas bubbles in a liquid, as well as the presence of turbulence and dissolved chemicals in a fluid, and many other phenomena. This makes CFD an appreciated tool for studying flow structures, mixing, and other mass transfer phenomena in chemical and biochemical reactor systems. In this project, four selected case studies are investigated in order to explore the capabilities of CFD. The selected cases are a 1 ml stirred microbioreactor, an 8 ml magnetically stirred reactor, a Rushton impeller … and an ion-exchange reaction are also modelled and compared to experimental data. The thesis includes a comprehensive overview of the fundamentals behind a CFD software package, as well as a more detailed review of the fluid dynamic phenomena investigated in this project. The momentum and continuity equations …

  15. Computation of the velocity field and mass balance in the finite-element modeling of groundwater flow

    International Nuclear Information System (INIS)

    Yeh, G.T.

    1980-01-01

    Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law, with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8% to 2.2% by one numerical scheme and from 29.7% to −3.6% by another for a transient problem.
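
    A one-dimensional illustration of the proposed remedy, assuming linear (P1) elements: instead of keeping the element-by-element derivative of the pressure, the discontinuous Darcy flux is L2-projected onto the same nodal basis by solving a mass-matrix system, which yields a continuous nodal velocity:

      import numpy as np

      def projected_darcy_velocity(x, p, K=1.0):
          """Continuous nodal Darcy velocity from nodal pressures (1D, P1 FEM).

          Solves M v = f with M the P1 mass matrix and
          f_i = integral of N_i * (-K dp/dx), dp/dx constant per element.
          """
          n = len(x)
          M = np.zeros((n, n))
          f = np.zeros(n)
          for e in range(n - 1):
              h = x[e + 1] - x[e]
              v_elem = -K * (p[e + 1] - p[e]) / h     # discontinuous element flux
              # P1 element mass matrix and load for a constant integrand
              M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
              f[e:e + 2] += v_elem * h / 2.0
          return np.linalg.solve(M, f)                # continuous nodal velocity

      x = np.linspace(0.0, 1.0, 11)
      p = x**2                                        # non-linear pressure field
      print(projected_darcy_velocity(x, p).round(3))  # smooth approximation of -2x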

  16. PORFLO - a continuum model for fluid flow, heat transfer, and mass transport in porous media. Model theory, numerical methods, and computational tests

    International Nuclear Information System (INIS)

    Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.

    1985-09-01

    Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs

  17. The impact of mass gatherings and holiday traveling on the course of an influenza pandemic: a computational model.

    Science.gov (United States)

    Shi, Pengyi; Keskinocak, Pinar; Swann, Julie L; Lee, Bruce Y

    2010-12-21

    During the 2009 H1N1 influenza pandemic, concerns arose about the potential negative effects of mass public gatherings and travel on the course of the pandemic. Better understanding the potential effects of temporal changes in social mixing patterns could help public officials determine if and when to cancel large public gatherings or enforce regional travel restrictions, advisories, or surveillance during an epidemic. We develop a computer simulation model using detailed data from the state of Georgia to explore how various changes in social mixing and contact patterns, representing mass gatherings and holiday traveling, may affect the course of an influenza pandemic. Various scenarios with different combinations of the length of the mass gatherings or traveling period (range: 0.5 to 5 days), the proportion of the population attending the mass gathering events or on travel (range: 1% to 50%), and the initial reproduction numbers R0 (1.3, 1.5, 1.8) are explored. Mass gatherings that occur within 10 days before the epidemic peak can result in as high as a 10% relative increase in the peak prevalence and the total attack rate, and may have even worse impacts on local communities and travelers' families. Holiday traveling can lead to a second epidemic peak under certain scenarios. Conversely, mass traveling or gatherings may have little effect when occurring much earlier or later than the epidemic peak, e.g., more than 40 days earlier or 20 days later than the peak when the initial R0 = 1.5. Our results suggest that monitoring, postponing, or cancelling large public gatherings may be warranted close to the epidemic peak but not earlier or later during the epidemic. Influenza activity should also be closely monitored for a potential second peak if holiday traveling occurs when prevalence is high.
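
    The qualitative mechanism can be reproduced with a far simpler compartmental toy than the authors' detailed Georgia simulation: an SIR model whose transmission rate is temporarily raised during a gathering window. All parameter values are assumptions for illustration:

      def sir_peak(gather_start=None, gather_days=3.0, boost=1.5,
                   r0=1.5, t_inf=4.0, n=1e6, days=300, dt=0.1):
          """Peak prevalence of an SIR epidemic with an optional gathering window."""
          beta0, gamma = r0 / t_inf, 1.0 / t_inf
          s, i = n - 1.0, 1.0
          peak, t_peak = 0.0, 0.0
          for step in range(int(days / dt)):
              t = step * dt
              beta = beta0
              if gather_start is not None and gather_start <= t < gather_start + gather_days:
                  beta *= boost                      # raised mixing during the event
              new_inf = beta * s * i / n * dt
              s -= new_inf
              i += new_inf - gamma * i * dt
              if i > peak:
                  peak, t_peak = i, t
          return peak / n, t_peak

      base, t_base = sir_peak()
      near, _ = sir_peak(gather_start=t_base - 10)   # event shortly before the peak
      early, _ = sir_peak(gather_start=5.0)          # event long before the peak
      print(f"no event: {base:.3%}  near-peak event: {near:.3%}  early event: {early:.3%}")

    An event placed shortly before the unperturbed peak raises the peak prevalence noticeably, while the same event placed early in the epidemic mainly shifts its timing, in line with the findings summarized above.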

  18. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling to support its bioenergy research. Plant cell walls are the source of biofuels and biomaterials, and NREL's modeling investigates their properties. Quantum mechanical models are used to study chemical and electronic properties and processes in order to reduce conversion barriers.

  19. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    Science.gov (United States)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a larger error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.
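
    A toy two-dimensional analogue of the dynamic search step (the paper works with full polyhedral 3D DFN models): joint traces are sampled stochastically, and for each candidate test-plane orientation the connectivity rate is taken as the jointed fraction of the test line, after which the maximum, minimum and average are read off per elevation. The coplanarity tolerances and joint statistics are invented for illustration:

      import numpy as np

      def connectivity_rate(joints, y0, beta_deg, width=100.0,
                            angle_tol=10.0, band=2.0):
          """Jointed fraction of a test line at elevation y0 with dip beta.

          joints: array of rows (xc, yc, dip_deg, length).
          A joint counts if its dip is within angle_tol of the test plane and
          its centre lies within a band around the line (crude coplanarity test).
          """
          beta = np.radians(beta_deg)
          d = np.abs(np.cos(beta) * (joints[:, 1] - y0)
                     - np.sin(beta) * (joints[:, 0] - width / 2.0))
          ok = (np.abs(joints[:, 2] - beta_deg) < angle_tol) & (d < band)
          line_len = width / np.cos(beta)
          return min(1.0, joints[ok, 3].sum() / line_len)

      rng = np.random.default_rng(7)                        # Monte Carlo DFN sample
      joints = np.column_stack([rng.uniform(0, 100, 400),   # xc
                                rng.uniform(0, 100, 400),   # yc
                                rng.normal(30, 8, 400),     # dip set around 30 deg
                                rng.exponential(3.0, 400)]) # trace length

      for y0 in (25.0, 50.0, 75.0):                         # three elevations
          rates = [connectivity_rate(joints, y0, b) for b in range(0, 61, 5)]
          print(y0, round(min(rates), 3), round(max(rates), 3),
                round(float(np.mean(rates)), 3))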

  20. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    International Nuclear Information System (INIS)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564–0.650). This value was statistically significantly different

  1. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564–0.650). This value was statistically significantly different

  2. Computed tomography of mediastinal masses

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Seong Tae; Lee, Jae Mun; Bahk, Yong Whee; Kim, Choon Yul [Catholic Medical College, Seoul (Korea, Republic of)

    1984-09-15

    The ability of CT scanning of the mediastinum to distinguish specific tissue densities and to display anatomy in a transverse plane often provides unique diagnostic information unobtainable with conventional radiographic methods. We retrospectively analyzed the CT findings of 20 cases of proven mediastinal masses at the Department of Radiology, St. Mary's Hospital, Catholic Medical College, from February 1982 to June 1984. CT scans were performed with a Siemens Somatom 2 scanner. The technical factors were tube voltage 125 kVp, exposure time 5 seconds, 230 mAs, 256 x 256 matrices, and pixel size 1.3 mm. Slices 8 mm thick were obtained at 1 cm intervals from the apex of the lung to the diaphragm. When necessary, additional scans at 5 mm intervals or magnified scans were obtained. After a pre-contrast scan, contrast scans were routinely taken with rapid drip-infusion of contrast media (60% Conray, 150 cc). The results obtained were as follows. 1. Among the 20 cases, 11 were tumors, 4 were infectious masses, and 5 were aneurysms of the great vessels, tortuous brachiocephalic arteries and pericardial fat pads. In each case CT showed the accurate location, extent, and nature of the masses. 2. Solid tumors were thymic hyperplasias, thymoma, thymus carcinoid, neurilemmoma and germ cell tumors (seminoma, embryonal cell carcinoma). Internal architecture was homogeneous in thymoma, thymus carcinoid, neurilemmoma and seminoma but inhomogeneous in thymic hyperplasias and embryonal cell carcinoma. CT numbers ranged from 16 to 49 HU and were variably enhanced. 3. Cystic tumors consisted of teratomas, cystic hygroma, and neurilemmoma. Teratomas contained calcium and fat, forming inhomogeneous masses with strongly enhancing walls. Cystic hygroma was a nonenhancing mass with a CT number of 20 HU. 4. All of the germ cell tumors (2 teratomas and one each of seminoma and embryonal cell carcinoma) and one of the 2 thymic hyperplasias had calcium deposits. 5. Tuberculous lymphadenopathies presented as a mass in the retrocaval pretracheal space and hilar region

  3. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  4. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents.

    Science.gov (United States)

    Zhang, Jing; Lo, Joseph Y; Kuzmiak, Cherie M; Ghate, Sujata V; Yoon, Sora C; Mazurowski, Maciej A

    2014-09-01

    Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564–0.650). This value was statistically significantly different from 0.5.
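
    In outline, each trainee model is a supervised classifier mapping the 43 image features of a mass to the probability that this particular trainee misses it. A sketch with synthetic data standing in for the reader-study features; logistic regression is our stand-in choice, as the record does not name the classifier family:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic stand-in: 100 masses x 43 image features for one trainee,
      # with "missed" loosely tied to a few subtlety-like features.
      X = rng.normal(size=(100, 43))
      missed = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.5, 100) > 0.8).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, missed, test_size=0.3,
                                                random_state=0, stratify=missed)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # per-trainee model
      p_miss = model.predict_proba(X_te)[:, 1]    # likelihood of missing each mass
      print("AUC:", round(roc_auc_score(y_te, p_miss), 3))

      # Cases ranked most likely to be missed would be served to this trainee.
      hardest = np.argsort(-p_miss)[:5]
      print("most challenging test cases:", hardest)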

  5. Introduction to computational mass transfer with applications to chemical engineering

    CERN Document Server

    Yu, Kuo-Tsung

    2017-01-01

    This book offers an easy-to-understand introduction to the computational mass transfer (CMT) method. Building on the contents of the first edition, this new edition is characterized by the following additional material: it describes the successful application of the method to the simulation of the mass transfer process in a fluidized bed, as well as recent investigations and computing methods for predicting the multi-component mass transfer process. It also examines general issues concerning computational methods for simulating the mass transfer of the rising-bubble process. This new edition has been reorganized by moving the preparatory materials for Computational Fluid Dynamics (CFD) and Computational Heat Transfer into appendices, adding new chapters, and including three new appendices on, respectively, a generalized representation of the two-equation model for the CMT, the derivation of the equilibrium distribution function in the lattice-Boltzmann method, and the derivation of the Navier-S...

  6. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  7. Minimalistic Neutrino Mass Model

    CERN Document Server

    De Gouvêa, A; Gouvea, Andre de

    2001-01-01

    We consider the simplest model which solves the solar and atmospheric neutrino puzzles, in the sense that it contains the smallest amount of beyond the Standard Model ingredients. The solar neutrino data is accounted for by Planck-mass effects while the atmospheric neutrino anomaly is due to the existence of a single right-handed neutrino at an intermediate mass scale between 10^9 GeV and 10^14 GeV. Even though the neutrino mixing angles are not exactly predicted, they can be naturally large, which agrees well with the current experimental situation. Furthermore, the amount of lepton asymmetry produced in the early universe by the decay of the right-handed neutrino is very predictive and may be enough to explain the current baryon-to-photon ratio if the right-handed neutrinos are produced out of thermal equilibrium. One definitive test for the model is the search for anomalous seasonal effects at Borexino.

  8. Introduction to computational mass transfer with applications to chemical engineering

    CERN Document Server

    Yu, Kuo-Tsong

    2014-01-01

    This book presents a new computational methodology called Computational Mass Transfer (CMT). It offers an approach to rigorously simulating the mass, heat and momentum transfer under turbulent flow conditions with the help of two newly published models, namely the c′²–εc′ model and the Reynolds mass flux model, especially with regard to predictions of concentration, temperature and velocity distributions in chemical and related processes. The book will also allow readers to understand the interfacial phenomena accompanying the mass transfer process and methods for modeling the interfacial effect, such as the influences of Marangoni convection and Rayleigh convection. The CMT methodology is demonstrated by means of its applications to typical separation and chemical reaction processes and equipment, including distillation, absorption, adsorption and chemical reactors. Professor Kuo-Tsong Yu is a Member of the Chinese Academy of Sciences. Dr. Xigang Yuan is a Professor at the School of Chemical Engine...

  9. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  10. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    Science.gov (United States)

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
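
    A drastically lumped (zero-dimensional) caricature of the bean energy and moisture balances; the paper solves the full 3D problem in Fluent with user-defined functions, so the constants and the first-order Arrhenius drying law here are assumptions:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Lumped bean: dT/dt = hA (T_air - T) / (m c); dM/dt = -k(T) M
      H_A = 0.8e-3                     # h*A, W/K (assumed)
      M_C = 0.15 * 1.8                 # m*c, J/K (0.15 g bean, c ~ 1.8 J/(g K), assumed)
      K0, EA, R = 5.0e3, 4.0e4, 8.314  # Arrhenius drying constants (assumed)

      def rhs(t, y, t_air=230.0):
          T, M = y
          k = K0 * np.exp(-EA / (R * (T + 273.15)))   # faster drying when hot
          return [H_A * (t_air - T) / M_C, -k * M]

      sol = solve_ivp(rhs, (0.0, 600.0), [25.0, 0.11],  # 11% initial moisture
                      t_eval=np.linspace(0.0, 600.0, 7))
      for t, T, M in zip(sol.t, *sol.y):
          print(f"t={t:4.0f} s  T={T:6.1f} C  moisture={M:.3f}")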

  11. Computational mass spectrometry for small molecules

    Science.gov (United States)

    2013-01-01

    The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound searching a reference spectral library, to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review shortly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline. PMID:23453222
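
    The isotope-pattern step mentioned above lends itself to a compact sketch: the pattern of a molecule is the repeated convolution of its elements' isotope distributions. The abundances below are standard natural values; peak positions use nominal (integer) mass offsets for brevity:

      from collections import defaultdict

      # Nominal-mass isotope distributions (mass offset -> natural abundance).
      ISOTOPES = {
          "C": {0: 0.9893, 1: 0.0107},                 # 12C / 13C
          "H": {0: 0.999885, 1: 0.000115},             # 1H / 2H
          "O": {0: 0.99757, 1: 0.00038, 2: 0.00205},   # 16O / 17O / 18O
      }

      def convolve(a, b):
          out = defaultdict(float)
          for ma, pa in a.items():
              for mb, pb in b.items():
                  out[ma + mb] += pa * pb
          return dict(out)

      def isotope_pattern(formula):
          """formula: dict element -> count, e.g. glucose C6H12O6."""
          spec = {0: 1.0}
          for elem, count in formula.items():
              for _ in range(count):
                  spec = convolve(spec, ISOTOPES[elem])
          return spec

      for offset, p in sorted(isotope_pattern({"C": 6, "H": 12, "O": 6}).items()):
          if p > 1e-4:
              print(f"M+{offset}: {p:.4f}")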

  12. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  13. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The overhead crane system consists of hoisting, trolley and crane mechanisms, as well as a two-axis payload system. With the help of the differential equations of motion of the specified mechanisms, derived through the Lagrange equation of the second kind, it is possible to build an overhead crane computer model. The computer model was obtained using Matlab software. Transients of the coordinate, linear speed and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload sway were obtained with respect to the vertical axis. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is a useful means of studying positioning control and anti-sway control systems.
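
    A planar reduction of such a model, assuming a point payload on a massless rope and ignoring the hoist and the second horizontal axis: the trolley-plus-pendulum equations obtained from the Lagrange formalism, integrated to show the sway transient:

      import numpy as np
      from scipy.integrate import solve_ivp

      M_T, M_P, L, G = 200.0, 50.0, 5.0, 9.81   # trolley/payload mass, rope, g (assumed)

      def rhs(t, y, force=300.0, t_off=4.0):
          """y = [x, x_dot, theta, theta_dot]; theta from the downward vertical."""
          x, xd, th, thd = y
          F = force if t < t_off else 0.0        # constant drive, then coast
          s, c = np.sin(th), np.cos(th)
          # From the Lagrange equations of the trolley-pendulum system:
          xdd = (F + M_P * L * thd**2 * s + M_P * G * s * c) / (M_T + M_P * s**2)
          thdd = -(xdd * c + G * s) / L
          return [xd, xdd, thd, thdd]

      sol = solve_ivp(rhs, (0.0, 12.0), [0.0, 0.0, 0.0, 0.0],
                      t_eval=np.linspace(0.0, 12.0, 7))
      for t, x, th in zip(sol.t, sol.y[0], np.degrees(sol.y[2])):
          print(f"t={t:4.1f} s  x={x:6.2f} m  sway={th:6.2f} deg")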

  14. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  15. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  16. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  17. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

    Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
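
    The temporal half of the approach can be sketched with the standard forward algorithm: two hidden Markov models (one per trust level) score an observed sequence of discrete nonverbal-cue tokens, and classification picks the higher likelihood. The cue vocabulary and all probabilities below are invented; the paper learns them from interaction data:

      import numpy as np

      def log_likelihood(obs, pi, A, B):
          """Scaled forward algorithm: log p(obs | HMM) for discrete observations."""
          alpha = pi * B[:, obs[0]]
          ll = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
              ll += np.log(alpha.sum())
              alpha /= alpha.sum()
          return ll

      # Cue vocabulary (invented): 0=lean-in, 1=face-touch, 2=arms-crossed, 3=smile
      pi = np.array([0.6, 0.4])
      A_hi = np.array([[0.8, 0.2], [0.3, 0.7]])          # "high trust" dynamics
      B_hi = np.array([[0.5, 0.1, 0.1, 0.3], [0.2, 0.2, 0.2, 0.4]])
      A_lo = np.array([[0.6, 0.4], [0.4, 0.6]])          # "low trust" dynamics
      B_lo = np.array([[0.1, 0.4, 0.4, 0.1], [0.2, 0.3, 0.4, 0.1]])

      seq = [1, 2, 2, 1, 3, 2]                           # observed cue sequence
      hi = log_likelihood(seq, pi, A_hi, B_hi)
      lo = log_likelihood(seq, pi, A_lo, B_lo)
      print("high-trust model:", round(hi, 2), " low-trust model:", round(lo, 2))
      print("classified as:", "high trust" if hi > lo else "low trust")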

  18. Hydrogen/deuterium exchange mass spectrometry and computational modeling reveal a discontinuous epitope of an antibody/TL1A Interaction.

    Science.gov (United States)

    Huang, Richard Y-C; Krystek, Stanley R; Felix, Nathan; Graziano, Robert F; Srinivasan, Mohan; Pashine, Achal; Chen, Guodong

    2018-01-01

    TL1A, a tumor necrosis factor-like cytokine, is a ligand for the death domain receptor DR3. TL1A, upon binding to DR3, can stimulate lymphocytes and trigger secretion of proinflammatory cytokines. Therefore, blockade of TL1A/DR3 interaction may be a potential therapeutic strategy for autoimmune and inflammatory diseases. Recently, the anti-TL1A monoclonal antibody 1 (mAb1) with a strong potency in blocking the TL1A/DR3 interaction was identified. Here, we report on the use of hydrogen/deuterium exchange mass spectrometry (HDX-MS) to obtain molecular-level details of mAb1's binding epitope on TL1A. HDX coupled with electron-transfer dissociation MS provided residue-level epitope information. The HDX dataset, in combination with solvent accessible surface area (SASA) analysis and computational modeling, revealed a discontinuous epitope within the predicted interaction interface of TL1A and DR3. The epitope regions span a distance within the approximate size of the variable domains of mAb1's heavy and light chains, indicating it uses a unique mechanism of action to block the TL1A/DR3 interaction.

  19. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos Modelling with Computers: Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy and T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 29–39.

  20. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. Such systems show non-linear features, and simulating their behaviour is also difficult. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setting up of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which is defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates well the main behaviours and critical parameters of the network. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the modelled network.
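
    The paper's bottleneck definition translates directly into a few lines: available end-to-end throughput is the minimum over links of capacity minus competing traffic. The topology and traffic levels below are made up for illustration:

      import random

      def bottleneck(path, competing):
          """Available throughput = min over links of (capacity - competing load)."""
          return min(cap - competing[link] for link, cap in path.items())

      path = {"access": 100.0, "core": 1000.0, "peering": 200.0}   # Mbit/s capacities
      random.seed(3)
      for t in range(5):
          load = {link: random.uniform(0.3, 0.9) * cap             # competing traffic
                  for link, cap in path.items()}
          avail = bottleneck(path, load)
          # During a traffic peak the tightest link may leave little headroom.
          print(f"t={t}: available end-to-end throughput = {avail:7.1f} Mbit/s")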

  1. Neutrino Mass and Flavour Models

    International Nuclear Information System (INIS)

    King, Stephen F.

    2010-01-01

    We survey some of the recent promising developments in the search for the theory behind neutrino mass and tri-bimaximal mixing, and indeed all fermion masses and mixing. We focus in particular on models with discrete family symmetry and unification, and show how such models can also solve the SUSY flavour and CP problems. We also discuss the theoretical implications of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.

  2. Standard Model mass spectrum in inflationary universe

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2017-04-11

    We work out the Standard Model (SM) mass spectrum during inflation with quantum corrections, and explore its observable consequences in the squeezed limit of non-Gaussianity. Both non-Higgs and Higgs inflation models are studied in detail. We also illustrate how some inflationary loop diagrams can be computed neatly by Wick-rotating the inflation background to Euclidean signature and by dimensional regularization.

  3. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, at national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  4. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  5. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  6. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Louwrens Butler

    2018-05-01

    Full Text Available Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system, whose purpose was to mass-produce a customisable product range at a competitive cost, was developed. The aim of this study was to determine whether the proposed algorithm could outperform traditional optimisation methods. The algorithm produced a lower-cost plan than a simulated annealing algorithm, and had a lower impact on the workforce.
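
    For reference, the simulated annealing baseline mentioned above follows the generic pattern sketched here; the job-to-machine plan and its cost function are invented stand-ins for the paper's manufacturing model:

      import math, random

      random.seed(1)
      N_JOBS, N_MACHINES = 20, 4
      cost_table = [[random.uniform(1, 10) for _ in range(N_MACHINES)] for _ in range(N_JOBS)]

      def cost(plan):
          # total processing cost plus a penalty for load imbalance (invented objective)
          loads = [0.0] * N_MACHINES
          total = 0.0
          for job, m in enumerate(plan):
              total += cost_table[job][m]
              loads[m] += cost_table[job][m]
          return total + (max(loads) - min(loads))

      plan = [random.randrange(N_MACHINES) for _ in range(N_JOBS)]
      T, alpha = 10.0, 0.995                      # initial temperature, cooling factor
      best_cost = cost(plan)
      for _ in range(20000):
          cand = plan[:]
          cand[random.randrange(N_JOBS)] = random.randrange(N_MACHINES)  # random move
          d = cost(cand) - cost(plan)
          if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
              plan = cand
              best_cost = min(best_cost, cost(plan))
          T *= alpha                              # geometric cooling schedule
      print("best plan cost found:", round(best_cost, 2))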

  7. Mass distribution of Earth landforms determined by aspects of the geopotential as computed from the global gravity field model EGM 2008

    Czech Academy of Sciences Publication Activity Database

    Kalvoda, J.; Klokočník, Jaroslav; Kostelecký, J.; Bezděk, Aleš

    2013-01-01

    Vol. 48, No. 2 (2013), pp. 17-25. ISSN 0300-5402. R&D Projects: GA ČR GA13-36843S. Grants - others: GA ČR(EE) GCP209/12/J068; ESA(XE) ESA-PECS project No. C98056. Institutional support: RVO:67985815. Keywords: Earth landforms * gravity field model * mass distribution. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  8. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  9. Schwinger Model Mass Anomalous Dimension

    CERN Document Server

    Keegan, Liam

    2016-06-20

    The mass anomalous dimension for several gauge theories with an infrared fixed point has recently been determined using the mode number of the Dirac operator. In order to better understand the sources of systematic error in this method, we apply it to a simpler model, the massive Schwinger model with two flavours of fermions, where analytical results are available for comparison with the lattice data.

  10. Computed tomography of pediatric abdominal masses

    Energy Technology Data Exchange (ETDEWEB)

    Kook, Shin Ho; Ko, Eun Joo; Chung, Eun Chul; Suh, Jung Soo; Rhee, Chung Sik [College of Medicine, Ewha Womans University, Seoul (Korea, Republic of)

    1988-02-15

    Ultrasonography is a very useful diagnostic modality for the evaluation of pediatric abdominal masses, since it is faster and cheaper than CT and carries no radiation hazard. CT, however, has advantages in assessing the precise anatomic location and extent of the pathologic process, and has particular value in defining the size of the mass, its relation to surrounding organs, and the detection of lymphadenopathy. We analyzed the CT features of 35 cases of pathologically proven pediatric abdominal masses over a recent 2-year period at Ewha Womans University Hospital. The results were as follows: 1. The most common originating site was the kidney (20 cases, 57.1%), followed by gastrointestinal (5 cases, 14.3%), nonrenal retroperitoneal (4 cases, 11.4%), hepatobiliary (3 cases, 8.6%), and genital (3 cases, 8.6%) sites in order of frequency. 2. The most common mass was hydronephrosis (11 cases, 31.4%); Wilms' tumor (7 cases, 20.0%), neuroblastoma, choledochal cyst, and periappendiceal abscess (3 cases, 8.6% each), and ovarian cyst (2 cases, 5.7%) were next in order of frequency. 3. The male-to-female ratio was 4:5; choledochal cysts and ovarian cysts were found only in females. The most prevalent age group was 1-3 years old (12 cases, 34.3%). 4. With CT, the diagnosis of hydronephrosis was easy in all cases, and its severity, renal function, and obstruction site could be evaluated with high accuracy. 5. Wilms' tumor and neuroblastoma were relatively well differentiated by their characteristic CT features, such as location, shape, margin, midline crossing, calyceal appearance, and calcification. 6. Ovarian and mesenteric cysts had similar CT appearances. 7. In other pediatric abdominal masses, CT provided excellent information about anatomic detail, the precise extent of the tumor, and differential diagnostic findings. Thus, CT is a useful imaging modality for the demonstration and diagnosis of abdominal mass lesions in pediatric patients.

  11. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  12. Mass generation in composite models

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1985-10-01

    I discuss aspects of composite models of quarks and leptons connected with the dynamics of how these fermions acquire mass. Several issues related to the protection mechanisms necessary to keep quarks and leptons light are illustrated by means of concrete examples and a critical overview of suggestions for family replications is given. Some old and new ideas of how one may actually be able to generate small quark and lepton masses are examined, along with some of the difficulties they encounter in practice. (orig.)

  13. Computer Aided Detection of Breast Masses in Digital Tomosynthesis

    National Research Council Canada - National Science Library

    Singh, Swatee; Lo, Joseph

    2008-01-01

    The purpose of this study was to investigate feasibility of computer-aided detection of masses and calcification clusters in breast tomosynthesis images and obtain reliable estimates of sensitivity...

  14. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs

  15. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics, and scale-coupling methods.

  16. Isotopic analysis of plutonium by computer controlled mass spectrometry

    International Nuclear Information System (INIS)

    1974-01-01

    Isotopic analysis of plutonium chemically purified by ion exchange is achieved using a thermal ionization mass spectrometer. Data acquisition from, and control of, the instrument are handled automatically by a dedicated system computer in real time, with subsequent automatic data reduction and reporting. Separation of isotopes is achieved by varying the ion-accelerating high voltage under accurate computer control.

  17. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  18. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  19. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  20. Vehicle - Bridge interaction, comparison of two computing models

    Science.gov (United States)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A planar multi-body computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with one degree of freedom. The mid-span dynamic deflections of the bridge are calculated for both computing models, and the results are mutually compared and quantitatively evaluated.
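
    The first of the two bridge models can be sketched numerically: a constant force P crossing a simply supported Bernoulli-Euler beam at speed v excites the first bending mode, whose amplitude equals the mid-span deflection. All parameter values below are invented for illustration:

      import numpy as np

      L, EI, mu = 30.0, 2.0e10, 1.5e4   # span [m], flexural rigidity [N m^2], mass per length [kg/m]
      P, v = 1.0e5, 20.0                # moving force [N], speed [m/s]
      w1 = (np.pi / L) ** 2 * np.sqrt(EI / mu)   # first natural circular frequency [rad/s]

      def rhs(t, y):
          # first-mode equation: q'' + w1^2 q = (2P/(mu L)) sin(pi v t / L) while the force is on the span
          q, qd = y
          load = (2 * P / (mu * L)) * np.sin(np.pi * v * t / L) if t <= L / v else 0.0
          return np.array([qd, load - w1 ** 2 * q])

      # classical 4th-order Runge-Kutta integration over the crossing time
      t, dt, y, peak = 0.0, 1e-3, np.array([0.0, 0.0]), 0.0
      while t < L / v:
          k1 = rhs(t, y); k2 = rhs(t + dt / 2, y + dt / 2 * k1)
          k3 = rhs(t + dt / 2, y + dt / 2 * k2); k4 = rhs(t + dt, y + dt * k3)
          y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
          t += dt
          peak = max(peak, abs(y[0]))   # mid-span deflection equals the modal amplitude q(t)
      print(f"peak mid-span deflection = {peak * 1000:.2f} mm")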

  1. Critical assessment of nuclear mass models

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1992-01-01

    Some of the physical assumptions underlying various nuclear mass models are discussed. The ability of different mass models to predict new masses that were not taken into account when the models were formulated and their parameters determined is analyzed. The models are also compared with respect to their ability to describe nuclear-structure properties in general. The analysis suggests future directions for mass-model development

  2. Rapid freeze-drying cycle optimization using computer programs developed based on heat and mass transfer models and facilitated by tunable diode laser absorption spectroscopy (TDLAS).

    Science.gov (United States)

    Kuu, Wei Y; Nail, Steven L

    2009-09-01

    Computer programs in FORTRAN were developed to rapidly determine the optimal shelf temperature, T(f), and chamber pressure, P(c), to achieve the shortest primary drying time. The constraint for the optimization is to ensure that the product temperature profile, T(b), is below the target temperature, T(target). Five percent mannitol was chosen as the model formulation. After obtaining the optimal sets of T(f) and P(c), each cycle was assigned a cycle rank number according to the length of its drying time. Further optimization was achieved by dividing the drying time into a series of ramping steps for T(f), in a cascading manner (termed the cascading T(f) cycle), to shorten the cycle time further. To demonstrate the validity of the optimized T(f) and P(c), four cycles with different predicted lengths of drying time, along with the cascading T(f) cycle, were chosen for experimental cycle runs. Tunable diode laser absorption spectroscopy (TDLAS) was used to continuously measure the sublimation rate. As predicted, maximum product temperatures were controlled slightly below the target temperature of -25 degrees C, and the cascading T(f)-ramping cycle was the most efficient cycle design.
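
    The optimization logic can be sketched as a constrained grid search over T(f) and P(c); the two surrogate formulas below are invented placeholders for the paper's heat and mass transfer model, not the model itself:

      import itertools

      T_TARGET = -25.0  # degC

      def predict(T_f, P_c):
          # invented surrogates: hotter shelves and higher pressure warm the product and speed drying
          T_b = -40.0 + 0.45 * (T_f + 40.0) + 0.04 * P_c          # product temperature [degC]
          drying_time = 60.0 - 0.35 * (T_f + 40.0) - 0.08 * P_c   # primary drying time [h]
          return T_b, drying_time

      candidates = []
      for T_f, P_c in itertools.product(range(-30, 21, 5), range(50, 201, 25)):
          T_b, hours = predict(T_f, P_c)
          if T_b < T_TARGET:   # constraint: product temperature stays below target
              candidates.append((hours, T_f, P_c, T_b))

      # rank the feasible cycles by predicted drying time, shortest first
      for rank, (hours, T_f, P_c, T_b) in enumerate(sorted(candidates)[:3], start=1):
          print(f"rank {rank}: T_f={T_f} degC, P_c={P_c} mTorr, T_b={T_b:.1f} degC, {hours:.1f} h")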

  3. Many Masses on One Stroke: Economic Computation of Quark Propagators

    Science.gov (United States)

    Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus

    The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ − D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive with BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.
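
    The structural point can be illustrated with a toy shifted-Krylov computation: because M/κ = 1/κ − D, the Krylov space built from D and a source b is the same for every κ, so one Arnoldi basis serves a whole trajectory of masses. The sketch below uses projected solves on a random stand-in for D; it illustrates the shift invariance, not the QMR algorithm itself:

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 200, 60
      D = rng.standard_normal((n, n)) / np.sqrt(n)   # random stand-in for the hopping matrix
      b = rng.standard_normal(n)                     # source vector

      # Arnoldi: orthonormal basis V of the Krylov space K_k(D, b), built once for all masses
      V = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
      V[:, 0] = b / np.linalg.norm(b)
      for j in range(k):
          w = D @ V[:, j]
          for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
              H[i, j] = V[:, i] @ w
              w -= H[i, j] * V[:, i]
          H[j + 1, j] = np.linalg.norm(w)
          V[:, j + 1] = w / H[j + 1, j]

      beta = np.linalg.norm(b)
      for kappa in (0.10, 0.12, 0.14):    # trajectory of hopping parameters
          # Galerkin solve of ((1/kappa) I - D) x = b in the shared Krylov space
          Hk = (1.0 / kappa) * np.eye(k) - H[:k, :]
          y = np.linalg.solve(Hk, beta * np.eye(k)[:, 0])
          x = V[:, :k] @ y
          res = np.linalg.norm(x / kappa - D @ x - b)
          print(f"kappa={kappa:.2f}  residual={res:.2e}")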

  4. Computing K and D meson masses with Nf=2+1+1 twisted mass lattice QCD

    International Nuclear Information System (INIS)

    Baron, Remi; Blossier, Benoit; Boucaud, Philippe

    2010-05-01

    We discuss, from a technical point of view, the computation of the masses of the K and D mesons within the framework of Nf=2+1+1 twisted mass lattice QCD. These quantities are essential, already at the level of generating gauge configurations, being obvious candidates with which to tune the strange and charm quark masses to their physical values. In particular, we address the problems related to the twisted mass flavor and parity symmetry breaking, which arise when considering a non-degenerate (c,s) doublet. We propose and verify the consistency of three methods to extract the K and D meson masses in this framework. (orig.)
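
    A standard baseline for such mass extractions (not necessarily one of the three methods proposed in the paper) is the effective mass of a two-point correlator, m_eff(t) = ln[C(t)/C(t+1)], which plateaus at the ground-state mass. A sketch on a synthetic correlator:

      import numpy as np

      m0, m1 = 0.25, 0.60                                  # ground/excited masses in lattice units (invented)
      t = np.arange(0, 24)
      C = 1.0 * np.exp(-m0 * t) + 0.4 * np.exp(-m1 * t)    # synthetic two-point correlator

      m_eff = np.log(C[:-1] / C[1:])                       # effective mass at each timeslice
      for ti in (2, 6, 10, 14, 18):
          print(f"t={ti:2d}  m_eff={m_eff[ti]:.4f}")       # approaches m0 = 0.25 at large t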

  5. Generalized added masses computation for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1983-01-01

    The aim of this paper is to describe a method to simulate the dynamic effect of a fluid between two structures by means of an added mass and an added stiffness. The method is based on a potential theory which assumes the fluid is inviscid and incompressible (the case of compressibility is discussed); a solution of the corresponding field equation is given as a superposition of elementary solutions (i.e. solutions applicable to elementary boundary conditions). Consequently, the pressure and displacements of the fluid on the boundary are given as functions of the series coefficients; the 'work lost' (i.e. the work done by the pressures on the difference between actual and estimated displacements) is minimized, and in this way the expansion coefficients are related to the displacements on the boundaries. Virtual work procedures are then used to compute added masses. The particular case of a free surface (with gravity effects) is discussed, and it is shown how the effect can be modelled by means of an added stiffness term. Some examples relating to vibrations in reservoirs are given and discussed. (orig.)

  6. RFQ modeling computer program

    International Nuclear Information System (INIS)

    Potter, J.M.

    1985-01-01

    The mathematical background for a multiport-network-solving program is described. A method for accurately numerically modeling an arbitrary, continuous, multiport transmission line is discussed. A modification to the transmission-line equations to accommodate multiple rf drives is presented. An improved model for the radio-frequency quadrupole (RFQ) accelerator that corrects previous errors is given. This model permits treating the RFQ as a true eight-port network for simplicity in interpreting the field distribution and ensures that all modes propagate at the same velocity in the high-frequency limit. The flexibility of the multiport model is illustrated by simple modifications to otherwise two-dimensional systems that permit modeling them as linear chains of multiport networks

  7. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    GENERAL ARTICLE. Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are .... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  8. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP has developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  9. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of most environmental or industrial flow problems leads to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented here now serves as a tool complementary to experimental facilities for studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr

  10. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos is one of the major scientific discoveries of our times. In fact many scientists ... But there are other natural phenomena that are not predictable though ... characteristics of chaos. ... The position and velocity are all that are needed to determine the motion of a .... a system of equations that modelled the earth's weather ...

  11. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation, and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  12. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J(Ic). A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak.

  13. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  14. The Teaching of Mass Communication Through the use of Computer ...

    African Journals Online (AJOL)

    Mass communication as a programme in education is an important subject in the training of students. Here, we determined the effects of improving the teaching of the subject in a tertiary institution like Cross River University of Technology through the use of computer assisted picture presentation. The study was ...

  15. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  16. Trust models in ubiquitous computing.

    Science.gov (United States)

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  17. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  18. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation – Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  19. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.
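
    As a deliberately naive sketch of the kind of scoring metric the engines surveyed here refine, a candidate peptide can be scored against a spectrum by predicting its singly charged b/y fragment masses and counting matches within a tolerance (monoisotopic residue masses; the peak list is invented, and real engines add intensity weighting and probabilistic scoring):

      RESIDUE = {  # monoisotopic residue masses [Da], abbreviated table
          'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
          'V': 99.06841, 'L': 113.08406, 'N': 114.04293, 'D': 115.02694,
          'E': 129.04259, 'K': 128.09496, 'R': 156.10111, 'F': 147.06841,
      }
      PROTON, H2O = 1.00728, 18.01056

      def fragment_mz(peptide):
          """Singly charged b- and y-ion m/z values of a peptide."""
          masses = [RESIDUE[aa] for aa in peptide]
          b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
          y = [sum(masses[i:]) + H2O + PROTON for i in range(1, len(masses))]
          return b + y

      def shared_peak_count(peptide, peaks, tol=0.5):
          # count predicted fragments that match an observed peak within tol
          return sum(any(abs(mz - p) <= tol for p in peaks) for mz in fragment_mz(peptide))

      observed = [147.11, 175.12, 262.15, 333.19, 397.23, 504.27]  # invented peak list
      for candidate in ("PEPK", "GASP", "VENK"):
          print(candidate, shared_peak_count(candidate, observed))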

  20. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  1. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  2. MASS CUSTOMIZATION and PRODUCT MODELS

    DEFF Research Database (Denmark)

    Svensson, Carsten; Malis, Martin

    2003-01-01

    to the product. Through the application of a mass customization strategy, companies have a unique opportunity to create increased customer satisfaction. In a customized production, knowledge and information have to be easily accessible since every product is a unique combination of information. If the dream...... of a customized alternative instead of a uniform mass-produced product shall become a reality, then the cross-organizational efficiency must be kept at a competitive level. This is the real challenge for mass customization. A radical restructuring of both the internal and the external knowledge management systems...

  3. Improved methods for computing masses from numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (qq̄ and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.

  4. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  5. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this paper proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  6. Anatomy of Higgs mass in supersymmetric inverse seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Eung Jin, E-mail: ejchun@kias.re.kr [Korea Institute for Advanced Study, Seoul 130-722 (Korea, Republic of); Mummidi, V. Suryanarayana, E-mail: soori9@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India); Vempati, Sudhir K., E-mail: vempati@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-09-07

    We compute the one loop corrections to the CP-even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of O(30) GeV if the left-handed sneutrino soft mass is comparable to or larger than the right-handed neutrino mass. In the case where right-handed neutrino masses are significantly larger than the supersymmetry breaking scale, the corrections can at most account for an upward shift of 3 GeV. For very heavy multi-TeV sneutrinos, the corrections replicate the stop corrections at one loop. We further show that general gauge mediation with the inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV scale stops.

  7. Diagnosis of masses presenting within the ventricles on computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kendall, B.; Reider-Grosswasser, I.; Valentine, A.

    1983-04-01

    The radiological and clinical features of 90 histologically verified intraventricular masses were reviewed. Computed tomography (CT) and plain X-rays were available in all cases and angiograms in over half. The localisation, the effects on the adjacent brain substance, and the presence and degree of hydrocephalus were evident on CT. Two-thirds of colloid cysts presented as pathognomonic anterior third ventricular hyperdense masses and the other third were isodense; an alternative diagnosis should be considered for low density masses in this situation. Plexus papillomas and carcinomas mainly involved the trigone and body of a lateral ventricle of young children and caused asymmetrical hydrocephalus; the third ventricle was occasionally affected, also in children, and the fourth ventricle more frequently, usually in adults. Two-thirds were hyperdense and one-third of mixed or lower density. The meningiomas were dense trigonal tumours of adults generally arising in the choroid plexus, but two tentorial meningiomas passed through the choroidal fissure and caused a predominantly intraventricular mass. Gliomas frequently thickened the septum and generally involved the frontal segments of the lateral ventricles. They may be supplied by perforating as well as by the choroidal arteries, which supply most other vascularised masses within the ventricles. Only 10% of our cases did not fall into one of the former categories; these included low density non-enhancing dermoid or epidermoid tumours and higher density enhancing metastatic or angiomatous masses.

  8. Getting computer models to communicate

    International Nuclear Information System (INIS)

    Caremoli, Ch.; Erhard, P.

    1999-01-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  9. Computational Modeling in Liver Surgery

    Directory of Open Access Journals (Sweden)

    Bruno Christ

    2017-11-01

    Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.

  10. Masses in the Weinberg-Salam model

    International Nuclear Information System (INIS)

    Flores, F.A.

    1984-01-01

    This thesis is a detailed discussion of the currently existing limits on the masses of Higgs scalars and fermions in the Weinberg-Salam model. The spontaneous breaking of the gauge symmetry of the model generates arbitrary masses for Higgs scalars and fermions, which, for the known fermions, have to be set to their experimentally known values. In this thesis, the author discusses in detail both the theoretical and experimental constraints on these otherwise arbitrary masses.

  11. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  12. Cosmic logic: a computational model

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal ones that halt in finite time and immortal ones that run forever. In the context of eternal inflation, this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  13. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
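
    A minimal sketch of such a second-order model: with constraints on first- and second-order input/output moments, the maximum noise entropy response is a logistic function of a quadratic form in the stimulus. Here it is fit to synthetic binary responses (the stimulus dimensions and ground-truth weights are invented, standing in for the retinal and thalamic data):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(0)
      n_samples, n_dim = 20000, 2
      s = rng.standard_normal((n_samples, n_dim))        # stimuli in two relevant dimensions

      # invented ground truth: P(spike|s) = logistic(a + w.s + s.J.s)
      z = -1.0 + 1.5 * s[:, 0] - 2.0 * s[:, 0] * s[:, 1] + 0.8 * s[:, 1] ** 2
      spikes = rng.random(n_samples) < 1.0 / (1.0 + np.exp(-z))

      # fit the second-order logistic model on first- and second-order features
      quad = PolynomialFeatures(degree=2, include_bias=False)
      X = quad.fit_transform(s)
      model = LogisticRegression(C=1e6, max_iter=1000).fit(X, spikes)
      names = quad.get_feature_names_out(["s0", "s1"])
      print(dict(zip(names, model.coef_[0].round(2))))   # recovers the generating weights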

  14. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as per many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers/risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  15. Double beta decay and neutrino mass models

    Energy Technology Data Exchange (ETDEWEB)

    Helo, J.C. [Universidad Técnica Federico Santa María, Centro-Científico-Tecnológico de Valparaíso, Casilla 110-V, Valparaíso (Chile); Hirsch, M. [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Ota, T. [Department of Physics, Saitama University, Shimo-Okubo 255, 338-8570 Saitama-Sakura (Japan); Santos, F.A. Pereira dos [Departamento de Física, Pontifícia Universidade Católica do Rio de Janeiro,Rua Marquês de São Vicente 225, 22451-900 Gávea, Rio de Janeiro (Brazil)

    2015-05-19

    Neutrinoless double beta decay allows one to constrain lepton number violating extensions of the standard model. If neutrinos are Majorana particles, the mass mechanism will always contribute to the decay rate; however, it is not a priori guaranteed to be the dominant contribution in all models. Here, we discuss from the theory point of view whether the mass mechanism dominates or not. We classify all possible (scalar-mediated) short-range contributions to the decay rate according to the loop level at which the corresponding models will generate Majorana neutrino masses, and discuss the expected relative size of the different contributions to the decay rate in each class. Our discussion is general for models based on the SM group but does not cover models with an extended gauge sector. We also work out in some detail the phenomenology of one concrete 2-loop model in which both the mass mechanism and the short-range diagram might lead to competitive contributions.

  16. A review of Higgs mass calculations in supersymmetric models

    DEFF Research Database (Denmark)

    Draper, P.; Rzehak, H.

    2016-01-01

    The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those...... related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...

  17. Computer tomographic and angiographic studies of histologically confirmed intrahepatic masses

    International Nuclear Information System (INIS)

    Janson, R.; Lackner, K.; Paquet, K.J.; Thelen, M.; Thurn, P.

    1980-01-01

    The computer tomographic and angiographic findings in 53 patients with intrahepatic masses were compared. The histological findings showed that 17 were due to echinococcus, 12 to hepatic carcinoma, and ten were metastases; five patients had focal nodular hyperplasia, three had alveolar echinococcus, three had haemangioma of the liver, and a further three had liver abscesses. Computer tomography proved superior for peripherally situated lesions and for those in the left lobe of the liver. Arteriography was better at demonstrating lesions below 2 cm in size, particularly vascular tumours. As a pre-operative measure, angiography is to be preferred since it is able to demonstrate anatomic anomalies and variations in the blood supply, as well as invasion of the portal vein or of the inferior vena cava. (orig.) [de

  19. Mass models for disk and halo components in spiral galaxies

    International Nuclear Information System (INIS)

    Athanassoula, E.; Bosma, A.

    1987-01-01

    The mass distribution in spiral galaxies is investigated by means of numerical simulations, summarizing the results reported by Athanassoula et al. (1986). Details of the modeling technique employed are given, including bulge-disk decomposition; computation of bulge and disk rotation curves (assuming constant mass/light ratios for each); and determination (for spherical symmetry) of the total halo mass out to the optical radius, the concentration indices, the halo-density power law, the core radius, the central density, and the velocity dispersion. Also discussed are the procedures for incorporating galactic gas and checking the spiral structure extent. It is found that structural constraints limit disk mass/light ratios to a range of 0.3 dex, and that the most likely models are maximum-disk models with m = 1 disturbances inhibited. 19 references

  20. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  1. Relativistic mean-field mass models

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Arteaga, D.; Goriely, S.; Chamel, N. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium)

    2016-10-15

    We present a new effort to develop viable mass models within the relativistic mean-field approach with density-dependent meson couplings, separable pairing and microscopic estimations for the translational and rotational correction energies. Two interactions, DD-MEB1 and DD-MEB2, are fitted to essentially all experimental masses, and also to charge radii and infinite nuclear matter properties as determined by microscopic models using realistic interactions. While DD-MEB1 includes the σ, ω and ρ meson fields, DD-MEB2 also considers the δ meson. Both mass models describe the 2353 experimental masses with a root mean square deviation of about 1.1 MeV and the 882 measured charge radii with a root mean square deviation of 0.029 fm. In addition, we show that the Pb isotopic shifts and moments of inertia are rather well reproduced, and the equations of state in both pure neutron matter and symmetric nuclear matter are in relatively good agreement with existing realistic calculations. Both models predict a maximum neutron-star mass of more than 2.6 solar masses, and thus are able to accommodate the heaviest neutron stars observed so far. However, the new Lagrangians, like all previously determined RMF models, present the drawback of being characterized by a low effective mass, which leads to strong shell effects due to the strong coupling between the spin-orbit splitting and the effective mass. Complete mass tables have been generated and a comparison with other mass models is presented. (orig.)

  2. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  3. Reconsideration of mass-distribution models

    Directory of Open Access Journals (Sweden)

    Ninković S.

    2014-01-01

    The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case; only one of them is the subject of the present study. The study focuses on the relation between the density ratio (the central density to that at the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which takes place when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension makes it possible to have the infinite density ratio correspond to a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model the correspondence between the density ratio and total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the possibility of having two, or more, groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Project of the Ministry of Science of the Republic of Serbia, No. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]

  4. Baryons electromagnetic mass splittings in potential models

    International Nuclear Information System (INIS)

    Genovese, M.; Richard, J.-M.; Silvestre-Brac, B.; Varga, K.

    1998-01-01

    We study electromagnetic mass splittings of charmed baryons. We point out discrepancies among theoretical predictions in non-relativistic potential models; none of these predictions seems supported by experimental data. A new calculation is presented

  5. Nucleon structure by Lattice QCD computations with twisted mass fermions

    International Nuclear Information System (INIS)

    Harraud, P.A.

    2010-11-01

    Understanding the structure of the nucleon from quantum chromodynamics (QCD) is one of the greatest challenges of hadronic physics. Only lattice QCD makes it possible to determine the values of the observables numerically from ab initio principles. This thesis studies the nucleon form factors and the first moments of parton distribution functions using a discretized action with twisted mass fermions. Its main advantage is that discretization effects are suppressed at first order in the lattice spacing. In addition, the set of simulations allows good control of the systematic errors. After reviewing the computation techniques, the results obtained for a wide range of parameters are presented, with lattice spacings varying from 0.056 fm to 0.089 fm, spatial sizes from 2.1 up to 2.7 fm and several pion masses in the range of 260-470 MeV. The vector renormalization constant was determined in the nucleon sector with improved precision. Concerning the electric charge radius, we found a finite-volume effect that provides a key towards explaining its chiral dependence as the physical point is approached. The results for the magnetic moment, the axial charge, the magnetic and axial charge radii, and the momentum and spin fractions carried by the quarks show no dependence on the lattice spacing or volume. In our range of pion masses, their values deviate from the experimental ones, and their chiral behaviour does not exhibit the curvature predicted by chiral perturbation theory, which could explain the apparent discrepancy. (author)
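
    As an illustration of the kind of quantity such computations extract (a sketch with synthetic numbers, not code from the thesis), the snippet below builds a toy Euclidean two-point correlator and forms the effective mass m_eff(t) = ln[C(t)/C(t+1)], which plateaus at the ground-state mass once excited-state contamination dies away:

```python
# Illustrative only: synthetic correlator C(t) ~ A*exp(-m*t) plus an
# excited-state term; the effective mass plateaus at the assumed m = 0.45.
import numpy as np

m_true, A, T = 0.45, 1.3, 32   # assumed mass (lattice units), amplitude, time extent
t = np.arange(T)
C = A * np.exp(-m_true * t) * (1.0 + 0.2 * np.exp(-0.8 * t))  # faster-decaying pollution

m_eff = np.log(C[:-1] / C[1:])
print("effective mass at large t:", np.round(m_eff[-5:], 4))  # -> approaches 0.45
```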

  6. ICADx: interpretable computer aided diagnosis of breast masses

    Science.gov (United States)

    Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man

    2018-02-01

    In this study, a novel computer-aided diagnosis (CADx) framework is devised to investigate interpretability in classifying breast masses. Recently, deep learning has been successfully applied to medical image analysis, including CADx. Existing deep-learning-based CADx approaches, however, have a limitation in explaining the diagnostic decision. In real clinical practice, decisions should be made with a reasonable explanation, so current deep learning approaches to CADx are of limited use for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The framework is devised as a generative adversarial network consisting of an interpretable diagnosis network and a synthetic lesion generative network, which learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning, so that both networks improve. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the proposed ICADx framework could provide interpretability of a mass as well as mass classification. This was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the ICADx framework could be a promising approach to developing CADx systems.
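
    The adversarial pairing described above can be sketched in a few lines. Below is a minimal, hypothetical PyTorch skeleton (toy layer sizes, random stand-in data; not the authors' ICADx architecture) in which a diagnosis network outputs a malignancy logit plus BI-RADS-like descriptors, and a generator synthesizes descriptor-conditioned lesion patches:

```python
# Hypothetical sketch of an interpretable-diagnosis / lesion-generator pair.
import torch
import torch.nn as nn

PATCH, DESC = 64, 6  # assumed patch size and number of BI-RADS-like descriptors

class Diagnoser(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(PATCH * PATCH, 128), nn.ReLU())
        self.malignancy = nn.Linear(128, 1)      # benign/malignant logit
        self.descriptors = nn.Linear(128, DESC)  # interpretable descriptor outputs
    def forward(self, x):
        h = self.body(x)
        return self.malignancy(h), torch.sigmoid(self.descriptors(h))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DESC, 128), nn.ReLU(),
                                 nn.Linear(128, PATCH * PATCH), nn.Tanh())
    def forward(self, d):
        return self.net(d).view(-1, 1, PATCH, PATCH)

D, G = Diagnoser(), Generator()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 1, PATCH, PATCH)        # stand-in for mammogram patches
labels = torch.randint(0, 2, (8, 1)).float()  # stand-in malignancy labels

for step in range(3):
    # Diagnoser step: classify real patches, reject generated ones.
    logit_real, desc_real = D(real)
    fake = G(desc_real.detach())
    logit_fake, _ = D(fake.detach())
    loss_d = bce(logit_real, labels) + bce(logit_fake, torch.zeros_like(logit_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the diagnoser with descriptor-conditioned lesions.
    logit_fake, _ = D(G(desc_real.detach()))
    loss_g = bce(logit_fake, torch.ones_like(logit_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```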

  7. Finite element model for heat conduction in jointed rock masses

    International Nuclear Information System (INIS)

    Gartling, D.K.; Thomas, R.K.

    1981-01-01

    A computational procedure for simulating heat conduction in a fractured rock mass is proposed and illustrated in the present paper. The method makes use of a simple local model for conduction in the vicinity of a single open fracture. The distributions of fractures and fracture properties within the finite element model are based on a statistical representation of geologic field data. Fracture behavior is included in the finite element computation by locating local, discrete fractures at the element integration points

  8. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  9. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) we find that model color-magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass; (2) effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100-300 K; (3) for the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2-3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20-30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  10. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
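
    The single-source-of-truth idea is easy to illustrate with a toy mass equipment list: every derived quantity (current best estimate, maximum expected value, margin) is computed from the one table instead of being maintained by hand in separate documents. All numbers below are hypothetical:

```python
# Toy MEL as the single source of truth; totals and margin are always derived.
MEL = {  # component: (current best estimate [kg], contingency fraction)
    "structure":  (120.0, 0.30),
    "propulsion": ( 85.0, 0.25),
    "avionics":   ( 40.0, 0.15),
    "payload":    ( 55.0, 0.20),
}
ALLOCATION_KG = 400.0  # hypothetical allocated launch mass

cbe = sum(m for m, _ in MEL.values())            # current best estimate
mev = sum(m * (1 + c) for m, c in MEL.values())  # maximum expected value
margin = (ALLOCATION_KG - mev) / ALLOCATION_KG

print(f"CBE = {cbe:.1f} kg, MEV = {mev:.1f} kg, margin = {margin:.1%}")
```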

  11. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India during December 17-19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The coverage encompasses theory and applications, providing the design, analysis and modeling of the aforementioned key areas.

  12. Improved mammographic interpretation of masses using computer-aided diagnosis

    International Nuclear Information System (INIS)

    Leichter, I.; Fields, S.; Novak, B.; Nirel, R.; Bamberger, P.; Lederman, R.; Buchbinder, S.

    2000-01-01

    The aim of this study was to evaluate the effectiveness of computerized image enhancement, to investigate criteria for discriminating benign from malignant mammographic findings by computer-aided diagnosis (CAD), and to test the role of quantitative analysis in improving the accuracy of interpretation of mass lesions. Forty sequential mammographically detected mass lesions referred for biopsy were digitized at high resolution for computerized evaluation. A prototype CAD system which included image enhancement algorithms was used for a better visualization of the lesions. Quantitative features which characterize the spiculation were automatically extracted by the CAD system for a user-defined region of interest (ROI). Reference ranges for malignant and benign cases were acquired from data generated by 214 known retrospective cases. The extracted parameters together with the reference ranges were presented to the radiologist for the analysis of 40 prospective cases. A pattern recognition scheme based on discriminant analysis was trained on the 214 retrospective cases and applied to the prospective cases. Accuracy of interpretation with and without the CAD system, as well as the performance of the pattern recognition scheme, were analyzed using receiver operating characteristic (ROC) curves. A significant difference (p < 0.005) was found between features extracted by the CAD system for benign and malignant cases. Specificity of the CAD-assisted diagnosis improved significantly (p < 0.02) from 14% for the conventional assessment to 50%, and the positive predictive value increased from 0.47 to 0.62 (p < 0.04). The area under the ROC curve (A{sub z}) increased significantly (p < 0.001) from 0.66 for the conventional assessment to 0.81 for the CAD-assisted analysis. The A{sub z} for the results of the pattern recognition scheme was higher (0.95). The results indicate that there is an improved accuracy of diagnosis with the use of the mammographic CAD system above that of the unassisted radiologist. Our findings suggest that objective quantitative features extracted from digitized mammographic findings may help in differentiating between benign and malignant masses, and can assist the radiologist in the interpretation of mass lesions. (orig.)
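
    As a hedged illustration of the evaluation methodology (synthetic scores, not the study's data), the snippet below compares two score sets by the area under the ROC curve, the A{sub z} statistic reported above:

```python
# Illustrative ROC comparison with synthetic classifier scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # 0 = benign, 1 = malignant
unassisted = y * 0.4 + rng.normal(0, 0.5, 200)    # weaker class separation
cad_assisted = y * 0.9 + rng.normal(0, 0.5, 200)  # stronger class separation

print("A_z unassisted  :", round(roc_auc_score(y, unassisted), 2))
print("A_z CAD-assisted:", round(roc_auc_score(y, cad_assisted), 2))
```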

  13. Computer modeling of liquid crystals

    International Nuclear Information System (INIS)

    Al-Barwani, M.S.

    1999-01-01

    In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some of the liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture; studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large-scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest energy states and investigate whether they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)

  14. Relating masses and mixing angles. A model-independent model

    Energy Technology Data Exchange (ETDEWEB)

    Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)

    2016-07-01

    In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is possible to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer look at the symmetry properties leads to potential realizations of this approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.
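
    A frequently quoted leading-order example of such a relation expresses the Cabibbo angle through the down-type quark mass ratio; the snippet below evaluates it with approximate running-mass inputs (the numerical values are assumptions for illustration, not taken from the paper):

```python
# Leading-order relation theta_C ≈ arcsin(sqrt(m_d / m_s)), with approximate
# MS-bar quark masses in MeV as illustrative inputs.
from math import sqrt, sin, asin

m_d, m_s = 4.7, 93.4
theta_C = asin(sqrt(m_d / m_s))
print(f"sin(theta_C) ≈ {sin(theta_C):.3f} (measured ≈ 0.225)")
```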

  15. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  16. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for the efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238U, 230Th, 239Pu, 241Am, and 244Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface
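
    The curve-fitting step can be sketched as follows. The functional forms below are stand-ins for the paper's EPL and EXP curves, fitted to synthetic mass-efficiency data (the LPL model itself would add the linear low-mass regime):

```python
# Illustrative fits of assumed EPL and EXP functional forms to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def epl(m, a, k):            # empirical power law
    return a * m**(-k)

def exp_curve(m, e0, b):     # exponential efficiency curve
    return e0 * np.exp(-b * m)

mass = np.linspace(5, 200, 30)  # synthetic sample masses (mg) in the planchette
eff = 0.4 * np.exp(-0.008 * mass) + np.random.default_rng(1).normal(0, 0.005, 30)

p_epl, _ = curve_fit(epl, mass, eff, p0=[1.0, 0.5])
p_exp, _ = curve_fit(exp_curve, mass, eff, p0=[0.4, 0.01])
print("EPL params:", p_epl)
print("EXP params:", p_exp)
```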

  17. Computer simulation of cascade damage in iron: PKA mass effects

    International Nuclear Information System (INIS)

    Calder, A.; Bacon, D.J.; Barashev, A.; Osetsky, Y.

    2007-01-01

    Results are presented from an extensive series of computer simulations of the damage created by displacement cascades in alpha-iron. The objective has been to determine for the first time the effect of the mass of the primary knock-on atom (PKA) on defect number, defect clustering and cluster morphology. Cascades with PKA energy in the range 5 to 20 keV have been simulated by molecular dynamics for temperatures up to 600 K, using an interatomic potential for iron for which the energy difference between the dumbbell interstitial and the crowdion is close to the value from ab initio calculation (Ackland et al., J. Phys.: Condens. Matter 2004). At least 30 cascades have been simulated for each condition in order to generate reasonable statistics. The influence of PKA species on damage has been investigated in two ways. In one, the PKA was treated as an Fe atom as far as its interaction with other atoms was concerned, but its atomic weight (in amu) was either 12 (C), 56 (Fe) or 209 (Bi). Pairs of Bi PKAs have also been used to mimic heavy molecular ion irradiation. In the other approach, the short-range pair part of the interatomic potential was changed from Fe-Fe to that for Bi-Fe, either with or without a change of PKA mass, in order to study the influence of high-energy collisions on the cascade outcome. It is found that PKA mass is more influential than the interatomic potential between the PKA and Fe atoms. At low cascade energy (5-10 keV), increasing PKA mass leads to a decrease in the number of interstitials and vacancies. At high energy (20 keV), the main effect of increasing mass is to increase the probability of creation of interstitial and vacancy clusters in the form of 1/2<111> and <100> dislocation loops. The simulation results are consistent with experimental TEM observations of damage in irradiated iron. (authors)

  19. The exact mass-gaps of the principal chiral models

    CERN Document Server

    Hollowood, Timothy J

    1994-01-01

    An exact expression for the mass-gap, the ratio of the physical particle mass to the $\Lambda$-parameter, is found for the principal chiral sigma models associated with all the classical Lie algebras. The calculation is based on a comparison of the free energy in the presence of a source coupling to a conserved charge of the theory, computed in two ways: via the thermodynamic Bethe ansatz from the exact scattering matrix, and directly in perturbation theory. The calculation provides a non-trivial test of the form of the exact scattering matrix.

  20. Energy, mass, model-based displays, and memory recall

    International Nuclear Information System (INIS)

    Beltracchi, L.

    1989-01-01

    The operation of a pressurized water reactor in the context of the conservation laws for energy and mass is discussed. These conservation laws are the basis of the Rankine heat engine cycle. Computer graphic implementation of the heat engine cycle, in terms of temperature-entropy coordinates for water, serves as a model-based display of the plant process. A human user of this display, trained in first principles of the process, may exercise a monitoring strategy based on the conservation laws
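
    For a concrete sense of the conservation-law bookkeeping such a display encodes, the sketch below runs the energy balance of an idealized Rankine cycle with assumed enthalpy values (illustrative numbers only, not from the paper):

```python
# Idealized Rankine-cycle energy balance with assumed enthalpies (kJ/kg).
h_feed, h_steam, h_exhaust, h_condensate = 170.0, 2780.0, 2000.0, 168.0

q_in = h_steam - h_feed           # heat added per kg of working fluid
w_turbine = h_steam - h_exhaust   # turbine work
q_out = h_exhaust - h_condensate  # heat rejected in the condenser
w_pump = h_feed - h_condensate    # feedwater pump work

eta = (w_turbine - w_pump) / q_in
print(f"cycle efficiency ≈ {eta:.1%}")  # energy conservation: q_in = w_net + q_out
```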

  1. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  2. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa), we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable

  3. Model for the generation of leptonic mass

    International Nuclear Information System (INIS)

    Fryberger, D.

    1979-01-01

    A self-consistent model for the generation of leptonic mass is developed. In this model it is assumed that bare masses are zero, all of the (charged) leptonic masses being generated by the QED self-interaction. A perturbation expansion for the QED self-mass is formulated, and contact is made between this expansion and the work of Landau and his collaborators. In order to achieve a finite result using this expansion, it is assumed that there is a cutoff at the Landau singularity and that the functional form of the (self-mass) integrand is the same beyond that singularity as it is below. Physical interpretations of these assumptions are discussed. Self-consistency equations are obtained which show that the Landau singularity is in the neighborhood of the Planck mass. This result implies that, as originally suggested by Landau, gravitation may play a role in an ultraviolet cutoff for QED. These equations also yield estimates for the (effective) number of additional pointlike particles that electromagnetically couple to the photon. This latter quantity is consistent with present data from e+e− storage rings

  4. Pseudoscalar meson masses in the quark model

    International Nuclear Information System (INIS)

    Karl, G.

    1976-10-01

    Pseudoscalar meson masses and sum rules are compared in two different limits of a quark model with 4 quarks. The conventional limit corresponds to a heavy c anti-c state and generalizes ideal mixing in a nonet. The second limit corresponds to a missing SU(4) unitary singlet and appears more relevant to the masses of π, K, eta, eta'. If SU(3) is broken only by the mass difference between the strange and nonstrange quarks, the physical masses imply that the u anti-u, d anti-d and s anti-s pairs account for only 33% of the composition of the eta'(960), while for the eta(548) this fraction is 86%. If some of the remaining matter is in the form of the constituents of J/psi, the relative proportion of the radiative decays J/psi → eta'γ vs. J/psi → etaγ is accounted for in satisfactory agreement with experiment. (author)

  5. Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.

    1980-01-01

    This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs
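
    The flavor of such a finite-rate mass balance can be captured in a few lines. The sketch below (hypothetical rate constants and distribution coefficient, not PUBG itself) integrates a single mixer stage in which interphase transfer proceeds at a finite rate, so the outlet streams can leave short of equilibrium:

```python
# Single mixer stage with finite-rate interphase mass transfer (toy numbers).
from scipy.integrate import solve_ivp

tau, k_transfer, D = 120.0, 0.02, 10.0  # residence time [s], transfer rate [1/s], distribution coeff.
C_in_aq, C_in_org = 1.0, 0.0            # feed concentrations (arbitrary units)

def mixer(t, y):
    C_aq, C_org = y
    transfer = k_transfer * (C_aq - C_org / D)   # net aqueous -> organic transfer rate
    dC_aq = (C_in_aq - C_aq) / tau - transfer
    dC_org = (C_in_org - C_org) / tau + transfer
    return [dC_aq, dC_org]

sol = solve_ivp(mixer, [0, 2000], [0.0, 0.0])
C_aq, C_org = sol.y[:, -1]
# C_org/(D*C_aq) = 1 would be equilibrium; values below 1 quantify the deviation.
print(f"outlet: C_aq={C_aq:.3f}, C_org={C_org:.3f}, approach to equilibrium: {C_org/(D*C_aq):.2f}")
```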

  6. Old star clusters: Bench tests of low mass stellar models

    Directory of Open Access Journals (Sweden)

    Salaris M.

    2013-03-01

    Old star clusters in the Milky Way and external galaxies have been (and still are) traditionally used to constrain the age of the universe and the timescales of galaxy formation. A parallel avenue of old star cluster research considers these objects as bench tests of low-mass stellar models. This short review highlights some recent tests of stellar evolution models that make use of photometric and spectroscopic observations of resolved old star clusters. In some cases these tests have pointed to additional physical processes efficient in low-mass stars that are not routinely included in model computations. Moreover, recent results from the Kepler mission about the old open cluster NGC6791 are adding new tight constraints to the models.

  7. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  8. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  9. Mass renormalization in sine-Gordon model

    International Nuclear Information System (INIS)

    Xu Bowei; Zhang Yumei

    1991-09-01

    With a general Gaussian wave functional, we investigate the mass renormalization in the sine-Gordon model. At the phase transition point, the sine-Gordon system tends to a system of massless free bosons which possesses conformal symmetry. (author). 8 refs, 1 fig

  10. Temperature- and density-dependent quark mass model

    Indian Academy of Sciences (India)

    Since a fair proportion of such dense proto stars are likely to be ... the temperature- and density-dependent quark mass (TDDQM) model which we had employed in ... instead of Tc ~170 MeV which is a favoured value for the ud matter [26].

  11. Computer modeling of the gyrocon

    International Nuclear Information System (INIS)

    Tallerico, P.J.; Rankin, J.E.

    1979-01-01

    A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity wall, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds megawatt output at 450 MHz with up to 86% overall efficiency

  12. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  14. Mass generation in perturbed massless integrable models

    International Nuclear Information System (INIS)

    Controzzi, D.; Mussardo, G.

    2005-01-01

    We extend form-factor perturbation theory to non-integrable deformations of massless integrable models, in order to address the problem of mass generation in such systems. With respect to the standard renormalisation group analysis this approach is more suitable for studying the particle content of the perturbed theory. Analogously to the massive case, interesting information can be obtained already at first order, such as the identification of the operators which create a mass gap and those which induce the confinement of the massless particles in the perturbed theory

  15. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  16. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Based on the facts that a computer can be infected by both infected and exposed computers, and that some computers in susceptible or exposed status can acquire immunity through antivirus software, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of computer viruses on the internet, is determined. Second, this model has a virus-free equilibrium P0, at which the infected part of the computer population disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, the model has a unique viral equilibrium P*, at which the infection persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
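
    A runnable sketch of a model in this family is given below (the structure and parameters are illustrative, not those of the paper): susceptible computers become exposed and then infected, antivirus immunizes susceptible and exposed machines, and the threshold behaviour is governed by R0:

```python
# Toy susceptible-exposed-infected-recovered virus model with immunization.
from scipy.integrate import solve_ivp

beta, sigma, gamma, alpha = 0.4, 0.2, 0.1, 0.02  # infection, activation, cure, immunization rates

def virus(t, y):
    S, E, I, R = y
    dS = -beta * S * I - alpha * S
    dE = beta * S * I - sigma * E - alpha * E
    dI = sigma * E - gamma * I
    dR = alpha * (S + E) + gamma * I
    return [dS, dE, dI, dR]

R0 = beta * sigma / ((sigma + alpha) * gamma)    # basic reproduction number at S ≈ 1
sol = solve_ivp(virus, [0, 400], [0.99, 0.0, 0.01, 0.0], max_step=1.0)
print(f"R0 = {R0:.2f}; final infected fraction ≈ {sol.y[2, -1]:.4f}")
```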

  17. Computing K and D meson masses with N{sub f}=2+1+1 twisted mass lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Baron, Remi [CEA, Centre de Saclay, 91 - Gif-sur-Yvette (France). IRFU/Service de Physique Nucleaire; Blossier, Benoit; Boucaud, Philippe [Paris XI Univ., 91 - Orsay (FR). Lab. de Physique Theorique] (and others)

    2010-05-15

    We discuss the computation of the mass of the K and D mesons within the framework of N{sub f}=2+1+1 twisted mass lattice QCD from a technical point of view. These quantities are essential, already at the level of generating gauge configurations, being obvious candidates to tune the strange and charm quark masses to their physical values. In particular, we address the problems related to the twisted mass flavor and parity symmetry breaking, which arise when considering a non-degenerate (c,s) doublet. We propose and verify the consistency of three methods to extract the K and D meson masses in this framework. (orig.)

  18. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their demands on processing and storage. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  19. Introduction to computer control and future aspects in thermal ionisation mass spectrometry

    International Nuclear Information System (INIS)

    Hagemann, R.

    The author considers the computer control of the measurement program which is already available in modern mass spectrometers. Future areas for computer control are considered e.g. the heating program, ion optics and focusing, and sample changer control. (Auth.)

  2. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  3. Mass functions from the excursion set model

    Science.gov (United States)

    Hiotelis, Nicos; Del Popolo, Antonino

    2017-11-01

    Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫_0^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first-crossing distributions and consequently mass functions for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations which have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model describes accurately the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant threshold of collapse, lower than the one usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower-than-usual constant collapse threshold.
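
    The first-crossing computation lends itself to a compact Monte Carlo sketch. The version below (illustrative only) uses uncorrelated steps, i.e., the sharp-k limit; the paper's kernel K(S,u) would correlate the steps between scales:

```python
# Monte Carlo first-crossing distribution for a constant barrier delta_c.
import numpy as np

rng = np.random.default_rng(42)
n_walks, n_steps, dS, delta_c = 20000, 400, 0.01, 1.686

walks = np.cumsum(rng.normal(0.0, np.sqrt(dS), size=(n_walks, n_steps)), axis=1)

crossed = walks >= delta_c
first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)  # index of first crossing
f = np.bincount(first[first >= 0], minlength=n_steps) / (n_walks * dS)  # first-crossing rate f(S)
print("fraction of walks ever crossing:", (first >= 0).mean())
print("f(S) in the first few bins:", np.round(f[:5], 3))
```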

  4. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term 'pervasive', e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser's original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown's (1997) terms, 'invisible' ... the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one ...

  5. Electric solar wind sail mass budget model

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2013-02-01

    The electric solar wind sail (E-sail) is a new type of propellantless propulsion system for Solar System transportation, which uses the natural solar wind to produce spacecraft propulsion. The E-sail consists of thin centrifugally stretched tethers that are kept charged by an onboard electron gun and, as such, experience Coulomb drag through the high-speed solar wind plasma stream. This paper discusses a mass breakdown and a performance model for an E-sail spacecraft that hosts a mission-specific payload of prescribed mass. In particular, the model is able to estimate the total spacecraft mass and its propulsive acceleration as a function of various design parameters such as the number of tethers and their length. A number of subsystem masses are calculated assuming existing or near-term E-sail technology. In light of the obtained performance estimates, an E-sail represents a promising propulsion system for a variety of transportation needs in the Solar System.
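
    The structure of such a budget can be conveyed with a toy function; every figure below (thrust per unit tether length, tether linear mass, subsystem masses) is an assumption for illustration rather than a value from the paper:

```python
# Toy E-sail mass budget: total mass and characteristic acceleration as a
# function of tether count and length (all constants are assumptions).
THRUST_PER_LENGTH = 500e-9   # N per metre of tether near 1 au (assumed)
TETHER_LINEAR_MASS = 1.1e-5  # kg per metre of tether (assumed)

def esail_budget(n_tethers, tether_length_m, payload_kg, bus_kg=60.0, gun_kg=3.0):
    """Return (total mass [kg], characteristic acceleration [mm/s^2])."""
    tether_kg = n_tethers * tether_length_m * TETHER_LINEAR_MASS
    reels_kg = 0.5 * n_tethers  # assumed per-tether reel mass
    total = payload_kg + bus_kg + gun_kg + tether_kg + reels_kg
    thrust = n_tethers * tether_length_m * THRUST_PER_LENGTH
    return total, 1e3 * thrust / total

for n in (20, 60, 100):
    total, acc = esail_budget(n, 20e3, payload_kg=100.0)
    print(f"{n:3d} tethers x 20 km: {total:6.1f} kg, {acc:.2f} mm/s^2")
```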

  6. Development and validation of a mass casualty conceptual model.

    Science.gov (United States)

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.

  7. Climate Ocean Modeling on Parallel Computers

    Science.gov (United States)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  8. Computational Intelligence. Mortality Models for the Actuary

    NARCIS (Netherlands)

    Willemse, W.J.

    2001-01-01

    This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's
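
    The mortality laws mentioned above are compact enough to state in code. The sketch below uses Makeham's force of mortality mu(x) = A + B·c^x (Gompertz is the A = 0 case) with made-up, ballpark parameters, and integrates it to obtain survival probabilities:

```python
# Gompertz-Makeham survival probabilities from assumed parameters.
from math import exp, log

A, B, c = 0.0003, 3e-5, 1.10  # illustrative Makeham parameters

def survival(x, t):
    """Probability that a life aged x survives t more years:
    S = exp(-integral of mu over [x, x+t]), with mu(s) = A + B*c**s."""
    integral = A * t + B * c**x * (c**t - 1) / log(c)
    return exp(-integral)

for age in (30, 50, 70):
    print(f"10-year survival from age {age}: {survival(age, 10):.3f}")
```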

  9. Running of radiative neutrino masses: the scotogenic model — revisited

    Energy Technology Data Exchange (ETDEWEB)

    Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)

    2015-11-23

    A few years ago, it was shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated by quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular as regards the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to obtain information on the values of the fundamental parameters when inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, thereby relating its low- and high-energy phenomenology, to fully exploit the available information.

  10. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling

  11. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  12. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  13. Application research of computational mass-transfer differential equation in MBR concentration field simulation.

    Science.gov (United States)

    Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan

    2016-01-01

    After intensive research on the distribution of fluid velocity and biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of the mass-transfer differential equation to simulate the distribution of chemical oxygen demand (COD) concentration in the MBR membrane pool. The solution proceeds as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, calculate this model by direct numerical simulation to obtain the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and use the Seidel iteration method to solve the equation model; finally, substitute real plant data into the velocity and concentration field models to compute simulation results, and use the visualization software Tecplot to display them. Analysis of the contour plot of the COD concentration distribution shows that the simulation result conforms to the distribution rule of COD concentration in a real membrane pool, and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper provide a useful reference for the design optimization of real MBR systems.
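
    A drastically simplified version of the third step might look as follows: a steady one-dimensional advection-dispersion-decay balance for COD solved by Seidel (Gauss-Seidel) iteration on a finite-difference grid. Geometry, coefficients and boundary conditions are all illustrative assumptions:

```python
# Gauss-Seidel solution of steady 1-D advection-dispersion-decay for COD.
import numpy as np

n, L = 50, 5.0                # grid points, pool length [m]
dx = L / (n - 1)
u, D, k = 0.01, 1e-3, 5e-4    # velocity [m/s], dispersion [m^2/s], decay rate [1/s]
C = np.zeros(n); C[0] = 300.0 # inlet COD [mg/L]

diff = D / dx**2
adv = u / (2 * dx)
for _ in range(5000):         # Gauss-Seidel: update in place, reusing new values
    for i in range(1, n - 1):
        C[i] = ((diff + adv) * C[i - 1] + (diff - adv) * C[i + 1]) / (2 * diff + k)
    C[-1] = C[-2]             # zero-gradient outlet condition

print("outlet COD ≈", round(C[-1], 1), "mg/L")
```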

  14. Modelling Mass Casualty Decontamination Systems Informed by Field Exercise Data

    Directory of Open Access Journals (Sweden)

    Richard Amlôt

    2012-10-01

    Full Text Available In the event of a large-scale chemical release in the UK, decontamination of ambulant casualties would be undertaken by the Fire and Rescue Service (FRS). The aim of this study was to track the movement of volunteer casualties at two mass decontamination field exercises using passive Radio Frequency Identification tags and detection mats that were placed at pre-defined locations. The exercise data were then used to inform a computer model of the FRS component of the mass decontamination process. Having removed all clothing and having showered, the re-dressing (termed re-robing) of casualties was found to be a bottleneck in the mass decontamination process during both exercises. Computer simulations showed that increasing the capacity of each lane of the re-robe section to accommodate 10 rather than 5 casualties would be optimal in general, but that a capacity of 15 might be required to accommodate vulnerable individuals. If the duration of the shower was decreased from three minutes to one minute then a per lane re-robe capacity of 20 might be necessary to maximise the throughput of casualties. In conclusion, one practical enhancement to the FRS response may be to provide at least one additional re-robe section per mass decontamination unit.
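
    A minimal sketch of this kind of throughput model: casualties pass through a shower and then a re-robe section with a fixed number of places, and the re-robe capacity is varied. All durations, the arrival rate and the unconstrained-shower assumption are hypothetical simplifications, not the exercise-derived values used in the study.

    ```python
    # Stage-based throughput model: shower (assumed unconstrained) followed by
    # a re-robe section with a fixed number of places. All numbers hypothetical.
    import heapq

    def casualties_per_hour(shower_min=3.0, rerobe_min=12.0, rerobe_places=10,
                            n_casualties=200, arrival_gap_min=0.5):
        free_at = [0.0] * rerobe_places      # min-heap: time each place frees up
        heapq.heapify(free_at)
        last_done = 0.0
        for n in range(n_casualties):
            exits_shower = n * arrival_gap_min + shower_min
            place_free = heapq.heappop(free_at)
            start = max(exits_shower, place_free)   # queue if no place is free
            done = start + rerobe_min
            heapq.heappush(free_at, done)
            last_done = max(last_done, done)
        return 60.0 * n_casualties / last_done

    for places in (5, 10, 15, 20):
        print(places, "re-robe places:",
              round(casualties_per_hour(rerobe_places=places)), "casualties/hour")
    ```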

  15. Models of neutrino mass and mixing

    International Nuclear Information System (INIS)

    Ma, Ernest

    2000-01-01

    There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_{αi} ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_{αi} ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R parity nonconservation in supersymmetry

  16. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on the pressure and temperature in the reactor and the composition and flow rate of the influent.
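
    Gas-to-liquid transfer terms of this kind are often written in a two-film form with a Henry's-law equilibrium; the sketch below is a minimal version under that assumption. The k_L a value is a hypothetical placeholder (the abstract does not give the VRA model's coefficients); the Henry constant for O2 in water at 25 C is the standard ~769 L atm/mol.

    ```python
    # Two-film gas-to-liquid mass-transfer term (illustrative, not the VRA code).
    def o2_transfer_rate(p_o2, c_liquid, kla=0.02, henry=769.2):
        """Volumetric O2 transfer rate [mol/(L*s)].

        p_o2     : O2 partial pressure in the gas phase [atm]
        c_liquid : dissolved O2 concentration [mol/L]
        kla      : volumetric mass-transfer coefficient k_L*a [1/s] (assumed)
        henry    : Henry's constant for O2 in water [L*atm/mol] (~769 at 25 C)
        """
        c_star = p_o2 / henry          # equilibrium (saturation) concentration
        return kla * (c_star - c_liquid)

    # transfer into initially O2-free water under air-like partial pressure
    print(o2_transfer_rate(p_o2=0.21, c_liquid=0.0))
    ```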

  17. Modelling of heat and mass transfer processes in neonatology

    Energy Technology Data Exchange (ETDEWEB)

    Ginalski, Maciej K [FLUENT Europe, Sheffield Business Park, Europa Link, Sheffield S9 1XU (United Kingdom); Nowak, Andrzej J [Institute of Thermal Technology, Silesian University of Technology, Konarskiego 22, 44-100 Gliwice (Poland); Wrobel, Luiz C [School of Engineering and Design, Brunel University, Uxbridge UB8 3PH (United Kingdom)], E-mail: maciej.ginalski@ansys.com, E-mail: Andrzej.J.Nowak@polsl.pl, E-mail: luiz.wrobel@brunel.ac.uk

    2008-09-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass transfer mechanisms taking place in medical devices such as incubators and oxygen hoods. This includes novel mathematical developments giving rise to a supplementary model, entitled infant heat balance module, which has been fully integrated with the CFD solver and its graphical interface. The numerical simulations are validated through comparison tests with experimental results from the medical literature. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and the improved design of medical devices.

  18. Modelling of heat and mass transfer processes in neonatology

    International Nuclear Information System (INIS)

    Ginalski, Maciej K; Nowak, Andrzej J; Wrobel, Luiz C

    2008-01-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass transfer mechanisms taking place in medical devices such as incubators and oxygen hoods. This includes novel mathematical developments giving rise to a supplementary model, entitled infant heat balance module, which has been fully integrated with the CFD solver and its graphical interface. The numerical simulations are validated through comparison tests with experimental results from the medical literature. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and the improved design of medical devices

  19. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    where x increases from zero to N, the saturation value. Box 1. Matrix Methods ... such as Laplace transforms and non-linear differential equations with .... atomic bomb project in the US in the early ... his work on game theory and computers.

  20. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  1. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  2. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  3. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to provide information support for the work related to stabilization of the 'Ukrytie' object and its conversion into an ecologically safe system, and for analyzing, forecasting and controlling the processes occurring in it. Elements and structures of the 'Ukrytie' object were designed and input into the model

  4. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  5. Computer automation of an accelerator mass spectrometry system

    International Nuclear Information System (INIS)

    Gressett, J.D.; Maxson, D.L.; Matteson, S.; McDaniel, F.D.; Duggan, J.L.; Mackey, H.J.; North Texas State Univ., Denton, TX; Anthony, J.M.

    1989-01-01

    The determination of trace impurities in electronic materials using accelerator mass spectrometry (AMS) requires efficient automation of the beam transport and mass discrimination hardware. The ability to choose between a variety of charge states, isotopes and injected molecules is necessary to provide survey capabilities similar to that available on conventional mass spectrometers. This paper will discuss automation hardware and software for flexible, high-sensitivity trace analysis of electronic materials, e.g. Si, GaAs and HgCdTe. Details regarding settling times will be presented, along with proof-of-principle experimental data. Potential and present applications will also be discussed. (orig.)

  6. Mass and power modeling of communication satellites

    Science.gov (United States)

    Price, Kent M.; Pidgeon, David; Tsao, Alex

    1991-01-01

    Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence its selection and use. Guidelines are also given for choosing among alternative technologies, accounting for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for application to first-order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified to a 'smoothed' representation of reality. The user is therefore cautioned that cost, schedule, and risk may be significantly impacted where interpolations are sufficiently different from existing hardware as to warrant development of new devices.

  7. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) easy and intuitive creation of mathematical models using standard mathematical notation; 2) simultaneous exploration of images, tables, graphs and object animations; 3) attribution of mathematical properties expressed in the models to animated objects; and finally 4) computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  8. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  9. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on the intensification of thermal and mass exchange performed by the National Research Centre "Kurchatov Institute" over the past years. Recently, many designs of heat and mass transfer intensifier grids have been proposed. NRC "Kurchatov Institute" has accomplished a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of these experimental investigations can be used in the verification of computational models and codes. On the basis of the experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated into the subchannel code SC-INT.

  10. Introduction to models of neutrino masses and mixings

    International Nuclear Information System (INIS)

    Joshipura, Anjan S.

    2004-01-01

    This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are i) different types of neutrino masses, ii) the structure of neutrino masses and mixing needed to understand neutrino oscillation results, iii) mechanisms to generate neutrino masses in gauge theories and iv) a discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)

  11. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. When targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding among potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
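
    A minimal sketch of this retrieval-and-ranking idea (not the BioModels Database implementation): toy model annotations are indexed with TF-IDF and ranked against a query by cosine similarity. The model identifiers and annotation terms are invented for illustration.

    ```python
    # TF-IDF + cosine ranking over toy model annotations.
    import math
    from collections import Counter

    models = {
        "MODEL_A": "glycolysis kinetics ATP yeast oscillation",
        "MODEL_B": "calcium signalling oscillation ER channel",
        "MODEL_C": "glycolysis flux yeast steady state",
    }

    docs = {name: text.split() for name, text in models.items()}
    n_docs = len(docs)
    df = Counter(term for toks in docs.values() for term in set(toks))
    idf = {term: math.log(n_docs / df[term]) + 1.0 for term in df}  # smoothed

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    query = vec("yeast glycolysis oscillation".split())
    for name in sorted(docs, key=lambda m: -cosine(query, vec(docs[m]))):
        print(name, round(cosine(query, vec(docs[name])), 3))
    ```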

  12. Computational challenges in modeling gene regulatory events.

    Science.gov (United States)

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  13. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users\\' intuition about model similarity, and to support complex model searches in databases.

  14. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
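
    A minimal sketch of this aspect-based scheme: per-aspect similarities (here plain Jaccard over entity, parameter and edge sets) are combined with problem-specific weights. The weights, identifiers and the choice of Jaccard are illustrative assumptions, not a measure proposed in the paper.

    ```python
    # Weighted combination of per-aspect model similarities.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def model_similarity(m1, m2, weights):
        """Weighted sum of per-aspect Jaccard similarities (weights sum to 1)."""
        return sum(w * jaccard(m1[aspect], m2[aspect])
                   for aspect, w in weights.items())

    # toy models annotated with real-looking ontology terms (GO:0006096 is
    # glycolysis, CHEBI:15422 is ATP); everything else is invented
    m1 = {"entities": {"GO:0006096", "CHEBI:15422"}, "params": {"k1", "k2"},
          "edges": {("X", "Y"), ("Y", "Z")}}
    m2 = {"entities": {"GO:0006096"}, "params": {"k1", "k3"},
          "edges": {("X", "Y")}}

    weights = {"entities": 0.5, "params": 0.2, "edges": 0.3}  # problem-specific
    print(round(model_similarity(m1, m2, weights), 3))
    ```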

  15. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  16. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  17. Material constitutive model for jointed rock mass behavior

    International Nuclear Information System (INIS)

    Thomas, R.K.

    1980-11-01

    A material constitutive model is presented for jointed rock masses which exhibit preferred planes of weakness. This model is intended for use in finite element computations. The immediate application is the thermomechanical modelling of a nuclear waste repository in hard rock, but the model seems appropriate for a variety of other static and dynamic geotechnical problems as well. Starting with the finite element representations of a two-dimensional elastic body, joint planes are introduced in an explicit manner by direct modification of the material stiffness matrix. A novel feature of this approach is that joint set orientations, lengths and spacings are readily assigned through the sampling of a population distribution statistically determined from field measurement data. The result is that the fracture characteristics of the formations have the same statistical distribution in the model as is observed in the field. As a demonstration of the jointed rock mass model, numerical results are presented for the example problem of stress concentration at an underground opening

  18. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  19. Value of radio density determined by enhanced computed tomography for the differential diagnosis of lung masses

    International Nuclear Information System (INIS)

    Xie, Min

    2011-01-01

    Lung masses are often difficult to differentiate when their clinical symptoms and shapes or densities on computed tomography images are similar. However, with different pathological contents, they may appear differently on plain and enhanced computed tomography. Objectives: To determine the value of enhanced computed tomography for the differential diagnosis of lung masses based on the differences in radio density with and without enhancement. Patients and Methods: Thirty-six patients with lung cancer, 36 with pulmonary tuberculosis and 10 with inflammatory lung pseudotumors diagnosed by computed tomography and confirmed by pathology in our hospital were selected. The mean ± SD radio densities of lung masses in the three groups of patients were calculated based on the results of plain and enhanced computed tomography. Results: There were no significant differences in the radio densities of the masses detected by plain computed tomography among patients with inflammatory lung pseudotumors, tuberculosis and lung cancer (P > 0.05). However, there were significant differences (P < 0.01) between all the groups in terms of radio densities of masses detected by enhanced computed tomography. Conclusions: The radio densities of lung masses detected by enhanced computed tomography could potentially be used to differentiate between lung cancer, pulmonary tuberculosis and inflammatory lung pseudotumors.

  20. Neutrino mass models and CP violation

    International Nuclear Information System (INIS)

    Joshipura, Anjan S.

    2011-01-01

    Theoretical ideas on the origin of (a) neutrino masses, (b) neutrino mass hierarchies and (c) leptonic mixing angles are reviewed. Topics discussed include (1) symmetries of the neutrino mass matrix and their origin, (2) ways to understand the observed patterns of leptonic mixing angles and (3) a unified description of neutrino masses and mixing angles in grand unified theories.

  1. Vectors into the Future of Mass and Interpersonal Communication Research: Big Data, Social Media, and Computational Social Science.

    Science.gov (United States)

    Cappella, Joseph N

    2017-10-01

    Simultaneous developments in big data, social media, and computational social science have set the stage for how we think about and understand interpersonal and mass communication. This article explores some of the ways that these developments generate 4 hypothetical "vectors" - directions - into the next generation of communication research. These vectors include developments in network analysis, modeling interpersonal and social influence, recommendation systems, and the blurring of distinctions between interpersonal and mass audiences through narrowcasting and broadcasting. The methods and research in these arenas are occurring in areas outside the typical boundaries of the communication discipline but engage classic, substantive questions in mass and interpersonal communication.

  2. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,

  3. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations, which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests and a MAGICO test, and various sensitivities of the solution are investigated. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)

  4. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    particle flow, and numerical techniques allowing the modeling of particles suspended in a fluid. The general concept behind each family of techniques is described. Pros and cons for each technique are given along with examples and references to applications to fresh cementitious materials....

  5. Computed tomographic evaluation of pulmonary mass lesion in chest radiography

    International Nuclear Information System (INIS)

    Choe, Kyu Ok

    1984-01-01

    Until recently, the solitary coin lesion of the lung has been a conspicuous problem in radiologic diagnosis. It is now well known that CT offers high resolution, with objective CT numbers, providing additional information on anatomic and pathologic changes. Here, with the aid of CT, the authors retrospectively reviewed patients with round masses of various shapes, illustrating its advantage over conventional X-ray in diagnosis. 1. A total of 53 patients, 34 males and 19 females, aged 19 to 76 years, with nodules or masses of any size ranging from 1 to 13 cm in diameter, were observed. 2. On plain chest X-ray, 50 patients had a single round nodule or mass, one had two masses, which were echinococcal cysts, and the remaining two had lesions invisible on plain film, detected only by CT. 3. With a Philips Tomoscan 310, CT scans were taken in 12 mm thick slices during quiet respiration. Using the ROI cursor, the average CT number of the central area was calculated 1.0 cm inside the outer border of the mass. 4. According to their pathologic features, the lesions were divided into 4 groups: 36 solid, 9 cystic, 4 consolidative and 4 cavitary lesions. 5. Correct diagnosis of 3 cystic lesions, 4 diffuse calcifications and 1 A-V malformation was possible by CT densitometry. 6. Owing to the better resolution and additional cross-sectional orientation of CT, 3 extrapulmonary lesions, 3 segmental consolidations, 2 bronchoceles and 2 solitary metastases were identified, which was helpful in diagnosis. 7. CT was also helpful in determining the intrathoracic extent of bronchogenic carcinoma for the same reasons, although the additional clues given were limited. 8. However, the limitations of CT densitometry led to misdiagnosis in 3 cases of cystic versus solid lesions, and the CT densities of noncalcified granulomas and bronchogenic carcinomas did not show a clear-cut separation.

  6. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, in order to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  7. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    Full Text Available The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is used for graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model by modelling different variants of Minority Game rules for game design.
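
    For reference, a minimal implementation of the basic Minority Game that such variants build on, with the usual toy parameters (an odd number N of agents, memory m, S strategies per agent); the specific values are arbitrary.

    ```python
    # Basic Minority Game: each round, agents pick +1/-1 via their currently
    # best-scoring strategy; the minority side wins and all strategies that
    # would have chosen the minority action gain a virtual point.
    import random

    N, m, S, T = 101, 3, 2, 2000
    P = 2 ** m  # number of possible histories of the last m outcomes
    strategies = [[[random.choice((-1, 1)) for _ in range(P)] for _ in range(S)]
                  for _ in range(N)]
    scores = [[0] * S for _ in range(N)]
    history = random.randrange(P)
    wins = 0

    for t in range(T):
        acts = [strategies[i][max(range(S), key=lambda s: scores[i][s])][history]
                for i in range(N)]
        minority = -1 if sum(acts) > 0 else 1  # less-chosen action wins (N odd)
        wins += sum(1 for a in acts if a == minority)
        for i in range(N):                      # virtual payoff per strategy
            for s in range(S):
                if strategies[i][s][history] == minority:
                    scores[i][s] += 1
        history = ((history << 1) | (1 if minority == 1 else 0)) % P

    print("mean winners per round:", wins / T, "out of", N)
    ```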

  8. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs, and bring your own device (BYOD are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS, Platform as a Service (PaaS, and Infrastructure as a Service (IaaS. Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  9. Computational modeling of epiphany learning.

    Science.gov (United States)

    Chen, Wei James; Krajbich, Ian

    2017-05-02

    Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
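
    A minimal sketch of a sequential-sampling process of this general kind: noisy evidence in favour of the optimal strategy accumulates over rounds, and the agent commits once a threshold is crossed. The drift, noise and threshold values are hypothetical, not parameters fitted in the paper.

    ```python
    # Evidence-accumulation ("epiphany") sketch with hypothetical parameters.
    import random

    def rounds_to_epiphany(drift=0.15, noise=1.0, threshold=5.0, max_rounds=500):
        """Round at which accumulated evidence crosses the commitment
        threshold, or None if the agent never commits (no epiphany)."""
        evidence = 0.0
        for t in range(1, max_rounds + 1):
            evidence += drift + random.gauss(0.0, noise)  # feedback + noise
            if evidence >= threshold:
                return t
        return None

    random.seed(1)
    outcomes = [rounds_to_epiphany() for _ in range(1000)]
    committed = sorted(t for t in outcomes if t is not None)
    print("committed:", len(committed), "/ 1000; median round:",
          committed[len(committed) // 2])
    ```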

  10. Three-dimensional two-phase mass transport model for direct methanol fuel cells

    International Nuclear Information System (INIS)

    Yang, W.W.; Zhao, T.S.; Xu, C.

    2007-01-01

    A three-dimensional (3D) steady-state model for liquid feed direct methanol fuel cells (DMFC) is presented in this paper. This 3D mass transport model is formed by integrating five sub-models, including a modified drift-flux model for the anode flow field, a two-phase mass transport model for the porous anode, a single-phase model for the polymer electrolyte membrane, a two-phase mass transport model for the porous cathode, and a homogeneous mist-flow model for the cathode flow field. The two-phase mass transport models take into account the effect of non-equilibrium evaporation/condensation at the gas-liquid interface. A 3D computer code is then developed based on the integrated model. After being validated against the experimental data reported in the literature, the code was used to numerically investigate transport behaviors at the DMFC anode and their effects on cell performance

  11. Computer automated mass spectrometer for isotope analysis on gas samples

    International Nuclear Information System (INIS)

    Pamula, A.; Kaucsar, M.; Fatu, C.; Ursu, D.; Vonica, D.; Bendea, D.; Muntean, F.

    1998-01-01

    A low-resolution, high-precision instrument was designed and built in the mass spectrometry laboratory of the Institute of Isotopic and Molecular Technology, Cluj-Napoca. The paper presents the vacuum system, the sample inlet system, the ion source, the magnetic analyzer and the ion collector. The instrument is almost completely automated. The analog-to-digital conversion circuits, the local control microcomputer, the automation systems and the performance checking are described. (authors)

  12. Computer Aided Detection of Breast Masses in Digital Tomosynthesis

    Science.gov (United States)

    2008-06-01

    For a query ROI of unknown pathology, all other ROIs generated from that specific subject's reconstructed volumes were excluded from the KB. For scheme B, all the FPs ... Qian, L. Li, and L.P. Clarke, "Image feature extraction for mass detection in digital mammography: Influence of wavelet analysis," Med. Phys. 26

  13. Computed tomography on renal masses in dogs and cats

    International Nuclear Information System (INIS)

    Yamazoe, Kazuaki; Ohashi, Fumihito; Kadosawa, Tsuyoshi; Nishimura, Ryohei; Sasaki, Nobuo; Takeuchi, Akira.

    1994-01-01

    Computed tomography (CT) was performed on renal tumors (Wilms' tumor and renal cell carcinoma) and renal cysts in dogs and cats. CT images in renal tumors were well correlated with macroscopic findings, and contrast CT images were quite useful in differentiating tumoral regions from non-tumoral ones. On renal cysts, intravenous pyelography and ultrasonography were as effective as CT images in morphological diagnosis, but CT was considered to be superior for evaluating three-dimensional (3-D) relationships in complicated lesions. (author)

  14. Computational models of airway branching morphogenesis.

    Science.gov (United States)

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.

  16. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range off software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  17. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  18. Generating Computational Models for Serious Gaming

    NARCIS (Netherlands)

    Westera, Wim

    2018-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  19. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)
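
    In schematic form, the 'added mass' idea above reduces the coupled linear problem to a structural equation of motion with an extra mass matrix (a standard textbook statement, not a formula quoted from this paper):

    ```latex
    % M_s: structural mass matrix; M_a: added (hydrodynamic) mass matrix from
    % fluid inertia. The response spectrum follows from the modes of this system.
    \begin{equation}
      (M_s + M_a)\,\ddot{u} + C\,\dot{u} + K\,u = f(t)
    \end{equation}
    ```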

  20. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

    In the cloud computing environment, the number of cloud virtual machines (VMs) grows ever larger, and VM security and management face giant challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, studying the virtual machine security architecture based on AHP (Analytic Hierarchy Process) virtual machine de...

  1. Ewe: a computer model for ultrasonic inspection

    International Nuclear Information System (INIS)

    Douglas, S.R.; Chaplin, K.R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues

  2. Light reflection models for computer graphics.

    Science.gov (United States)

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.

  3. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
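
    In the spirit of that case study, a minimal theta-rule finite difference solver for the decay ODE u' = -a u, u(0) = I, which recovers Forward Euler (theta = 0), Backward Euler (theta = 1) and Crank-Nicolson (theta = 0.5); the parameter values in the example run are arbitrary.

    ```python
    # Theta-rule for u' = -a*u: u[k+1] = u[k]*(1-(1-theta)*a*dt)/(1+theta*a*dt)
    import numpy as np

    def solver(I, a, T, dt, theta):
        """Solve u' = -a*u, u(0) = I, on (0, T] with the theta-rule."""
        n = int(round(T / dt))
        u = np.zeros(n + 1)
        t = np.linspace(0, n * dt, n + 1)
        u[0] = I
        for k in range(n):
            u[k + 1] = (1 - (1 - theta) * a * dt) / (1 + theta * a * dt) * u[k]
        return u, t

    u, t = solver(I=1.0, a=2.0, T=4.0, dt=0.5, theta=0.5)  # Crank-Nicolson
    print("max error vs exact:", np.abs(u - np.exp(-2.0 * t)).max())
    ```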

  4. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  5. Computed tomography on renal masses in dogs and cats

    Energy Technology Data Exchange (ETDEWEB)

    Yamazoe, Kazuaki (Gifu Univ. (Japan). Faculty of Agriculture); Ohashi, Fumihito; Kadosawa, Tsuyoshi; Nishimura, Ryohei; Sasaki, Nobuo; Takeuchi, Akira

    1994-08-01

    Computed tomography (CT) was performed on renal tumors (Wilms' tumor and renal cell carcinoma) and renal cysts in dogs and cats. CT images in renal tumors were well correlated with macroscopic findings, and contrast CT images were quite useful in differentiating tumoral regions from non-tumoral ones. On renal cysts, intravenous pyelography and ultrasonography were as effective as CT images in morphological diagnosis, but CT was considered to be superior for evaluating three-dimensional (3-D) relationships in complicated lesions. (author).

  6. Quantum Vertex Model for Reversible Classical Computing

    Science.gov (United States)

    Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng

    We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.

  7. Quantitative computed tomography in measurement of vertebral trabecular bone mass

    International Nuclear Information System (INIS)

    Nilsson, M.; Johnell, O.; Jonsson, K.; Redlund-Johnell, I.

    1988-01-01

    Measurement of bone mineral concentration (BMC) can be done by several modalities. Quantitative computed tomography (QCT) can be used for measurements at different sites and with different types of bone (trabecular-cortical). This study presents a modified method reducing the influence of fat. Determination of BMC was made from measurements with single-energy computed tomography (CT) of the mean Hounsfield number in the trabecular part of the L1 vertebra. The method takes into account the age-dependent composition of the trabecular part of the vertebra. As the amount of intravertebral fat increases with age, the effective atomic number for these parts decreases. This results in a non-linear calibration curve for single-energy CT. Comparison of BMC values using the non-linear calibration curve or the traditional linear calibration with those obtained with a pixel-by-pixel based electron density calculation method (theoretically better) showed results clearly in favor of the non-linear method. The material consisted of 327 patients aged 6 to 91 years, of whom 197 were considered normal. The normal data show a sharp decrease in trabecular bone after the age of 50 in women. In men a slower decrease was found. The vertebrae were larger in men than in women. (orig.)
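
    The sketch below illustrates the idea of an age-adjusted, non-linear conversion from mean trabecular Hounsfield number to a BMC estimate. The functional form and every coefficient are hypothetical placeholders; the paper's calibration curve is derived from the measured age-dependent fat composition of the vertebral marrow.

    ```python
    # Illustrative age-adjusted HU-to-BMC conversion (all numbers assumed).
    def bmc_from_hu(mean_hu, age):
        # assumed marrow fat fraction rising with age (illustrative only)
        fat_fraction = min(0.7, 0.15 + 0.006 * max(age - 20, 0))
        # fat (taken here as ~ -80 HU) pulls the measured mean down; undo that
        hu_corrected = mean_hu + 80.0 * fat_fraction
        return 0.7 * hu_corrected  # assumed scale factor to BMC units

    print(bmc_from_hu(mean_hu=120.0, age=65))
    ```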

  8. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational-modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  9. Limit on mass differences in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.

    1977-01-01

    Within the Weinberg model mass differences between members of a multiplet generate further mass differences between the neutral and charged vector bosons. The experimental situation on the Weinberg model leads to an upper limit of about 800 GeV on mass differences within a multiplet. No limit on the

  10. A Literature Review of Computers and Pedagogy for Journalism and Mass Communication Education.

    Science.gov (United States)

    Hoag, Anne M.; Bhattacharya, Sandhya; Helsel, Jeffrey; Hu, Yifeng; Lee, Sangki; Kim, Jinhee; Kim, Sunghae; Michael, Patty Wharton; Park, Chongdae; Sager, Sheila S.; Seo, Sangho; Stark, Craig; Yeo, Benjamin

    2003-01-01

    Notes that a growing body of scholarship on computers and pedagogy encompasses a broad range of topics. Focuses on research judged to have implications within journalism and mass communication education. Discusses literature which considers computer use in course design and teaching, student attributes in a digital learning context, the role of…

  11. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
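
    For reference, the liquid drop baseline tested above is the semi-empirical (Bethe-Weizsacker) binding-energy formula; the shell-corrected model and the Duflo-Zuker formula add structure on top of this form:

    ```latex
    % Binding energy of a nucleus with N neutrons and Z protons, A = N + Z;
    % a_V, a_S, a_C, a_A are fitted coefficients and delta is the pairing term.
    % The mass then follows as M(N,Z) = N m_n + Z m_p - B(N,Z)/c^2.
    \begin{equation}
      B(N,Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}}
               - a_A \frac{(N-Z)^2}{A} + \delta(N,Z)
    \end{equation}
    ```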

  12. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
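
    A minimal 1-D sketch of such a slope-threshold failure rule: where the local bed slope exceeds the internal friction angle, material is shifted downslope until the slope relaxes, conserving mass exactly. The published algorithm operates on 2-D bed elements rotated about an axis; the grid spacing, friction angle and profile below are hypothetical, and the sketch only handles a bank descending to the right.

    ```python
    # Slope-threshold mass-failure relaxation on a 1-D bed profile.
    import math

    def relax_bank(z, dx=1.0, phi_deg=30.0, max_iter=1000):
        """z: list of bed elevations. Returns elevations after mass failure."""
        s_max = math.tan(math.radians(phi_deg))  # critical slope tan(phi)
        z = list(z)
        for _ in range(max_iter):
            moved = False
            for i in range(len(z) - 1):
                slope = (z[i] - z[i + 1]) / dx
                if slope > s_max:                  # unstable: avalanche downslope
                    excess = 0.5 * (slope - s_max) * dx
                    z[i] -= excess                 # equal exchange conserves mass
                    z[i + 1] += excess
                    moved = True
            if not moved:
                break
        return z

    before = [5.0, 4.8, 2.0, 1.9, 1.8]
    after = relax_bank(before)
    print(after, "mass conserved:", round(sum(before), 9) == round(sum(after), 9))
    ```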

  13. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  14. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and try to propose a way to improve the results of fluid models in the low-pressure regime. The Chapman-Enskog method can be employed to find appropriate closure relations for the fluid equations when the particle distribution function is not Maxwellian. We pursue this approach to enhance the fluid model and then use it further in a hybrid plasma model. (paper)

  15. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    "The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  16. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  17. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimensions and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will be of importance for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
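
    The paper's computations were done in Maple; as a hedged illustration of the same machinery, the SymPy sketch below computes a lexicographic Groebner basis for the equilibrium equations of a toy SIR-type model. The model and symbols are invented for illustration and are not taken from the paper.

    ```python
    # Hedged sketch: Groebner-basis elimination on a toy SIR equilibrium system
    # with birth rate Lambda, transmission beta, recovery gamma, mortality mu.
    from sympy import symbols, groebner

    S, I = symbols('S I')
    L, b, g, m = symbols('Lambda beta gamma mu', positive=True)

    eqs = [
        L - b * S * I - m * S,    # dS/dt = 0 at equilibrium
        b * S * I - (g + m) * I,  # dI/dt = 0 at equilibrium
    ]

    # Lex order with S > I eliminates S, leaving a polynomial in I alone.
    G = groebner(eqs, S, I, order='lex')
    for p in G.exprs:
        print(p)
    # The eliminated polynomial in I has a nonzero (endemic) root only when
    # R0 = beta*Lambda / (mu*(gamma + mu)) > 1, mirroring how the paper reads
    # control-measure effects off the algebraic structure of R0.
    ```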

  18. On the origin of mass in the standard model

    International Nuclear Information System (INIS)

    Sundman, S.

    2013-01-01

    A model is proposed in which the presently existing elementary particles are the result of an evolution proceeding from the simplest possible particle state to successively more complex states via a series of symmetry-breaking transitions. The properties of two fossil particles — the tauon and muon — together with the observed photon–baryon number ratio provide information that makes it possible to track the early development of particles. A computer simulation of the evolution reveals details about the purpose and history of all presently known elementary particles. In particular, it is concluded that the heavy Higgs particle that generates the bulk of the mass of the Z and W bosons also comes in a light version, which generates small mass contributions to the charged leptons. The predicted mass of this 'flyweight' Higgs boson is 0.505 MeV/c², 106.086 eV/c² or 12.0007 μeV/c² (corresponding to a photon of frequency 2.9018 GHz) depending on whether it is associated with the tauon, muon or electron. Support for the conclusion comes from the Brookhaven muon g−2 experiment, which indicates the existence of a Higgs particle lighter than the muon. (author)

  19. Improved metastability bounds on the standard model Higgs mass

    CERN Document Server

    Espinosa, J R; Espinosa, J R; Quiros, M

    1995-01-01

    Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model at finite (and zero) temperature can have a deep and unphysical stable minimum \\langle \\phi(T)\\rangle at values of the field much larger than G_F^{-1/2}. We have computed absolute lower bounds on M_H, as a function of M_t, imposing the condition of no decay by thermal fluctuations, or quantum tunnelling, to the stable minimum. Our effective potential at zero temperature includes all next-to-leading logarithmic corrections (making it extremely scale-independent), and we have used pole masses for the Higgs-boson and top-quark. Thermal corrections to the effective potential include plasma effects by one-loop ring resummation of Debye masses. All calculations, including the effective potential and the bubble nucleation rate, are performed numerically and so the results do not rely on any kind of analytical approximation. Easy-to-use fits are provided for the benefit of the reader. Conclusions on the possi...

  20. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.
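
    As a rough illustration of the kind of component the authors describe (not their actual TRNSYS code, which is not reproduced in the record), a loading-door infiltration load can be estimated from a simple air-exchange balance; all names and numbers below are assumed.

    ```python
    # Hedged sketch: back-of-envelope loading-door infiltration load.
    def door_infiltration_load_kW(door_area_m2, open_fraction, dT_K,
                                  exchange_m3_per_s_per_m2=0.2,
                                  rho=1.2, cp=1.006):
        """Sensible infiltration load through a loading door.
        exchange_m3_per_s_per_m2: assumed volumetric exchange per m^2 of open door;
        rho [kg/m^3] and cp [kJ/(kg K)] are properties of air."""
        vdot = exchange_m3_per_s_per_m2 * door_area_m2 * open_fraction  # m^3/s
        return rho * cp * vdot * dT_K  # kW, since cp is in kJ/(kg K)

    # A 3 m x 3 m door open 10% of the time, 25 K warmer outside than inside:
    print(door_infiltration_load_kW(9.0, 0.10, 25.0))  # ~5.4 kW average load
    ```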

  1. Third generation masses from a two Higgs model fixed point

    International Nuclear Information System (INIS)

    Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.

    1990-01-01

    The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)

  2. A quasi-particle model for computational nuclei

    International Nuclear Information System (INIS)

    Boal, D.H.; Glosli, J.N.

    1988-03-01

    A model Hamiltonian is derived which provides a computationally efficient means of representing nuclei. The Hamiltonian includes both Coulomb and isospin-dependent terms, and incorporates antisymmetrization effects through a momentum-dependent potential. Unlike many other classical or semiclassical models, the nuclei of this simulation have a well-defined ground state with a non-vanishing ⟨p²⟩. It is shown that the binding energies per nucleon and r.m.s. radii of these ground states are close to the measured values over a wide mass range.

  3. Modeling and Simulation of Variable Mass, Flexible Structures

    Science.gov (United States)

    Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.

    2009-01-01

    distribution of mass in the fuel tank or Solid Rocket Booster (SRB) case for various propellant levels. Based on the mass consumed by the liquid engine or SRB, the appropriate propellant model is coupled with the dry structure model for the stage. Then, using vehicle configuration data, the integrated vehicle model is assembled and operated on by the constant system shape functions. The system mode shapes and frequencies can then be computed from the resulting generalized mass and stiffness matrices for that mass configuration. The rigid body mass properties of the vehicle are derived from the integrated vehicle model. The coupling terms between the vehicle rigid body motion and elastic deformation are also updated from the constant system shape functions and the integrated vehicle model. This approach was first used to analyze variable mass spinning beams and was then prototyped into a generic dynamics simulation engine. The resulting code was tested against Crew Launch Vehicle (CLV)-class problems worked in the TREETOPS simulation package and by Wilson [2]. The Ares I System Integration Laboratory (SIL) is currently being developed at the Marshall Space Flight Center (MSFC) to test vehicle avionics hardware and software in a hardware-in-the-loop (HWIL) environment and certify that the integrated system is prepared for flight. The Ares I SIL utilizes the Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) tool to simulate the launch vehicle and stimulate avionics hardware. Due to the presence of vehicle control system filters and the thrust oscillation suppression system, which are tuned to the structural characteristics of the vehicle, ARTEMIS must incorporate accurate structural models of the Ares I launch vehicle. The ARTEMIS core dynamics simulation models the highly coupled nature of the vehicle flexible body dynamics, propellant slosh, and vehicle nozzle inertia effects combined with mass and flexible body properties that vary significantly with time.

  4. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  5. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules, etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives.

  6. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
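
    One concrete technique from this literature on probability-based input modeling is Latin hypercube sampling, which stratifies each input's distribution so that even small samples cover it evenly. The sketch below is a generic NumPy/SciPy version under assumed distributions and names, not code from the paper.

    ```python
    # Hedged sketch: Latin hypercube sampling of model inputs.
    import numpy as np
    from scipy import stats

    def latin_hypercube(n_samples, n_inputs, seed=None):
        """Return an (n_samples, n_inputs) array of LHS points in (0, 1)."""
        rng = np.random.default_rng(seed)
        # One random point per equal-probability stratum, per input.
        u = (np.arange(n_samples)[:, None]
             + rng.random((n_samples, n_inputs))) / n_samples
        for j in range(n_inputs):
            u[:, j] = rng.permutation(u[:, j])  # decouple strata across inputs
        return u

    # Push uniform strata through inverse CDFs to impose marginal distributions,
    # e.g. a lognormal permeability and a uniform porosity (names illustrative):
    u = latin_hypercube(100, 2, seed=42)
    permeability = stats.lognorm(s=0.5, scale=1e-12).ppf(u[:, 0])
    porosity = stats.uniform(loc=0.1, scale=0.3).ppf(u[:, 1])
    ```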

  7. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  8. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
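
    Cell Collective models are logical (Boolean-style) networks, and the flavor of "building and breaking" such a model is easy to convey in code. The toy network below is invented for illustration and is not a model taken from the platform.

    ```python
    # Hedged sketch: synchronous update of a tiny invented Boolean network,
    # loosely themed on T cell activation with an IL2/Treg negative feedback.
    rules = {
        "Antigen": lambda s: s["Antigen"],                # external input, held fixed
        "TCR":     lambda s: s["Antigen"],                # receptor follows antigen
        "IL2":     lambda s: s["TCR"] and not s["Treg"],  # activation unless suppressed
        "Treg":    lambda s: s["IL2"],                    # negative feedback
    }

    def step(state):
        """Apply every rule simultaneously to the current state."""
        return {node: bool(rule(state)) for node, rule in rules.items()}

    state = {"Antigen": True, "TCR": False, "IL2": False, "Treg": False}
    for t in range(6):
        print(t, state)
        state = step(state)
    # The IL2/Treg feedback makes the activation state oscillate -- the kind of
    # dynamics students can probe by editing ("breaking") individual rules.
    ```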

  9. Development of a miniaturized mass-flow meter for an axial flow blood pump based on computational analysis.

    Science.gov (United States)

    Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-09-01

    In order to monitor the condition of patients with implantable left ventricular assist systems (LVAS), it is important to measure pump flow rate continuously and noninvasively. However, it is difficult to measure the pump flow rate, especially in an implantable axial flow blood pump, because the power consumption has neither linearity nor uniqueness with regard to the pump flow rate. In this study, a miniaturized mass-flow meter for discharged patients with an implantable axial blood pump was developed on the basis of computational analysis, and was evaluated in in-vitro tests. The mass-flow meter makes use of centrifugal force produced by the mass-flow rate around a curved cannula. An optimized design was investigated by use of computational fluid dynamics (CFD) analysis. On the basis of the computational analysis, a miniaturized mass-flow meter made of titanium alloy was developed. A strain gauge was adopted as a sensor element. The first strain gauge, attached to the curved area, measured both static pressure and centrifugal force. The second strain gauge, attached to the straight area, measured static pressure. By subtracting the output of the second strain gauge from the output of the first strain gauge, the mass-flow rate was determined. In in-vitro tests using a model circulation loop, the mass-flow meter was compared with a conventional flow meter. Measurement error was less than ±0.5 L/min and average time delay was 0.14 s. We confirmed that the miniaturized mass-flow meter could accurately measure the mass-flow rate continuously and noninvasively.
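
    The subtraction principle described in the abstract reduces to a few lines. The sketch below uses assumed names and an assumed square-root calibration (the paper reports comparing the meter in vitro with a conventional flow meter); it illustrates the idea and is not the authors' algorithm.

    ```python
    # Hedged sketch: the gauge on the curved section sees static pressure plus the
    # centrifugal effect of the flow; the gauge on the straight section sees static
    # pressure only, so their difference isolates the flow-dependent signal.
    import math

    C_CAL = 2.4  # assumed calibration constant [L/min per sqrt(kPa)], fitted in vitro

    def mass_flow_L_per_min(gauge_curved_kPa, gauge_straight_kPa):
        centrifugal = gauge_curved_kPa - gauge_straight_kPa  # static pressure cancels
        # The centrifugal load grows roughly with the square of the flow, so invert:
        return C_CAL * math.copysign(math.sqrt(abs(centrifugal)), centrifugal)

    print(mass_flow_L_per_min(13.1, 9.9))  # ~4.3 L/min for a 3.2 kPa difference
    ```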

  10. Modelling baryonic effects on galaxy cluster mass profiles

    Science.gov (United States)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-06-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  11. Modelling Baryonic Effects on Galaxy Cluster Mass Profiles

    Science.gov (United States)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-03-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  12. Computer Modelling of Photochemical Smog Formation

    Science.gov (United States)

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  13. A Computational Model of Fraction Arithmetic

    Science.gov (United States)

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  14. Model Checking - Automated Verification of Computational Systems

    Indian Academy of Sciences (India)

    Madhavan Mukund. General Article. Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681.

  15. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  16. Computer Modeling of Platinum Reforming Reactors | Momoh ...

    African Journals Online (AJOL)

    This paper, instead of using a theoretical approach, has considered a computer model as a means of assessing the reformate composition for three-stage fixed bed reactors in a platforming unit. This is done by identifying many possible hydrocarbon transformation reactions that are peculiar to the process unit, identify the ...

  17. Particle modeling of plasmas computational plasma physics

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1991-01-01

    Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged: computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand what the important processes are. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments; it allows them to determine if they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. There is a wide range of plasma models in use: particle models, fluid models, and hybrid particle-fluid models. These can come in many forms, such as explicit models, implicit models, reduced dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and almost an endless variety of other models. Here the author discusses only particle models. He gives a few examples of the use of such models, taken from work done by the Plasma Modeling Group at UCLA, with which he is most familiar. However, this gives only a small view of the wide range of work being done around the US, or for that matter around the world.

  18. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; and (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  19. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science outstanding results are yielded by advanced simulation methods, based on state of the art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extractions from big data, better understanding and predicting of social behaviour and modelling health and environment changes.

  20. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹

  1. Computational model of cellular metabolic dynamics

    DEFF Research Database (Denmark)

    Li, Yanjun; Solomon, Thomas; Haus, Jacob M

    2010-01-01

    … of the cytosol and mitochondria. The model simulated skeletal muscle metabolic responses to insulin corresponding to human hyperinsulinemic-euglycemic clamp studies. The insulin-mediated rate of glucose disposal was the primary model input. For model validation, simulations were compared with experimental data…: intracellular metabolite concentrations and patterns of glucose disposal. Model variations were simulated to investigate three alternative mechanisms to explain insulin enhancements: Model 1 (M.1), simple mass action; M.2, insulin-mediated activation of key metabolic enzymes (i.e., hexokinase, glycogen synthase…). By application of mechanism M.3, the model predicts metabolite concentration changes and glucose partitioning patterns consistent with experimental data. The reaction rate fluxes quantified by this detailed model of insulin/glucose metabolism provide information that can be used to evaluate the development…

  2. Fuzzy cluster quantitative computations of component mass transfer in rocks or minerals

    International Nuclear Information System (INIS)

    Liu Dezheng

    2000-01-01

    The author advances a new quantitative method for computing component mass transfer, based on the closure property of component mass percentages in rocks or minerals. Using fuzzy dynamic cluster analysis, calculating the restored closure difference and determining its type, assisted by relevant diagnostic parameters, the method gradually screens out the true constant components. The true mass percentages and mass transfer quantities of components in altered rocks or minerals are then calculated by applying the true-constant-component fixed coefficient. This method is called the true constant component fixed method (TCF method).

  3. A New Methodology for Fuel Mass Computation of an operating Aircraft

    Directory of Open Access Journals (Sweden)

    M Souli

    2016-03-01

    Full Text Available The paper presents a new computational methodology for accurate computation of the fuel mass inside an aircraft wing during flight. The computation is carried out using hydrodynamic equations, classically known to the CFD community as the Navier-Stokes equations. For this purpose, a computational software package was developed; it computes the fuel mass inside the tank based on experimental data from pressure gauges inserted in the fuel tank. In current practice, for safety reasons, an optical fiber sensor is used for fluid level detection. This optical system consists of an optically controlled acoustic transceiver that measures the fuel level inside each compartment of the fuel tank; it computes the fuel volume inside the tank and needs the density to obtain the total fuel mass, so with the optical sensor technique a density measurement inside the tank is required. The method developed in the paper requires pressure measurements in each tank compartment; the density is then computed from the pressure measurements under hydrostatic assumptions. The methodology is tested using a fuel tank provided by Airbus for a time-history refueling process.
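
    The hydrostatic step is simple enough to sketch. Assuming two pressure gauges per compartment separated by a known height (a reading of the method; variable names, geometry and numbers are invented):

    ```python
    # Hedged sketch: density from a hydrostatic pressure difference, then mass.
    G = 9.81  # gravitational acceleration, m/s^2

    def fuel_density(p_lower_Pa, p_upper_Pa, dz_m):
        """rho = dP / (g * dz) under hydrostatic assumptions."""
        return (p_lower_Pa - p_upper_Pa) / (G * dz_m)

    def total_fuel_mass(compartments):
        """compartments: list of (p_lower_Pa, p_upper_Pa, dz_m, fuel_volume_m3),
        with the volume supplied by the level-sensing system."""
        return sum(fuel_density(pl, pu, dz) * v for pl, pu, dz, v in compartments)

    # Two compartments, gauges 0.30 m apart vertically:
    print(total_fuel_mass([(103000.0, 100650.0, 0.30, 1.2),
                           (102800.0, 100450.0, 0.30, 0.8)]))  # ~1597 kg
    ```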

  4. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  5. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
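
    GRESS itself is a FORTRAN precompiler and its source is not shown in these records; the dual-number sketch below only illustrates the underlying idea of "computer calculus", namely forward-mode automatic differentiation, in which exact derivatives propagate through a model alongside its values.

    ```python
    # Hedged sketch: forward-mode automatic differentiation with dual numbers.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)  # product rule
        __rmul__ = __mul__

    def model(k):
        """Some response of a model to an input parameter k (illustrative)."""
        return 3.0 * k * k + 2.0 * k + 1.0

    k = Dual(2.0, 1.0)       # seed the derivative dk/dk = 1
    out = model(k)
    print(out.val, out.der)  # value 17.0, sensitivity d(out)/dk = 14.0
    ```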

  6. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  7. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    … in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory… attacker remain somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world and it defines a model that captures the abilities of the attacker. Furthermore a quantitative implementation…

  8. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computer power, and hence the improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers: we shall have to find other models. The expertise gained in numerical analysis at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale.

  9. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research, owing to the huge amounts of medical information about the symptoms of diseases and how to distinguish between them to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts make treatment decisions. This paper introduces four hybrid Rough – Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of the various knowledge discovery models, which use different knowledge discovery techniques for data pre-processing, reduction and data mining, supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose was to enhance the framework of KDD processes for supervised learning using the Granular Computing methodology.

  10. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  11. The 1992 FRDM mass model and unstable nuclei

    International Nuclear Information System (INIS)

    Moeller, P.

    1994-01-01

    We discuss the reliability of a recent global nuclear-structure calculation in regions far from β stability. We focus on the results for nuclear masses, but also mention other results obtained in the nuclear-structure calculation, for example ground-state spins. We discuss what should be some minimal requirements of a nuclear mass model and study how the macroscopic-microscopic method and other nuclear mass models fulfil such basic requirements. We study in particular the reliability of nuclear mass models in regions of nuclei that were not considered in the determination of the model parameters.

  12. Peculiarities of constructing the models of mass religious communication

    Directory of Open Access Journals (Sweden)

    Petrushkevych Maria Stefanivna

    2017-07-01

    Full Text Available Religious communication is a full-fledged, effective part of the mass information field. It uses new media to fulfil its needs, and it also functions in the field of mass culture and the information society. To describe the features of mass religious communication, the author constructs in the article a graphic model of its functioning.

  13. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    Science.gov (United States)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  14. Macroparticle model for longitudinal emittance growth caused by negative mass instability in a proton synchrotron

    CERN Document Server

    MacLachlan, J A

    2004-01-01

    Both theoretical models and beam observations of negative mass instability fall short of a full description of the dynamics and the dynamical effects. Clarification by numerical modeling is now practicable because of the recent proliferation of so-called computing farms. The results of modeling reported in this paper disagree with some predictions based on a long-standing linear perturbation calculation. Validity checks on the macroparticle model are described.

  15. Fermion masses in potential models of chiral symmetry breaking

    International Nuclear Information System (INIS)

    Jaroszewicz, T.

    1983-01-01

    A class of models of spontaneous chiral symmetry breaking is considered, based on the Hamiltonian with an instantaneous potential interaction of fermions. An explicit mass term mΨ̄Ψ is included and the physical meaning of the mass parameter is discussed. It is shown that if the Hamiltonian is normal-ordered (i.e. self-energy omitted), then the mass m introduced in the Hamiltonian is not the current mass appearing in the current algebra relations. (author)

  16. Upper Higgs boson mass bounds from a chirally invariant lattice Higgs-Yukawa Model

    Energy Technology Data Exchange (ETDEWEB)

    Gerhold, P. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; John von Neumann-Institut fuer Computing NIC/DESY, Zeuthen (Germany); Jansen, K. [John von Neumann-Institut fuer Computing NIC/DESY, Zeuthen (Germany)

    2010-02-15

    We establish the cutoff-dependent upper Higgs boson mass bound by means of direct lattice computations in the framework of a chirally invariant lattice Higgs-Yukawa model emulating the same chiral Yukawa coupling structure as in the Higgs-fermion sector of the Standard Model. As expected from the triviality picture of the Higgs sector, we observe the upper mass bound to decrease with rising cutoff parameter Λ. Moreover, the strength of the fermionic contribution to the upper mass bound is explored by comparing to the corresponding analysis in the pure φ⁴ theory. (orig.)

  17. Phantoms and computational models in therapy, diagnosis and protection

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    The development of realistic body phantoms and computational models is strongly dependent on the availability of comprehensive human anatomical data. This information is often missing, incomplete or not easily available. Therefore, emphasis is given in the Report to organ and body masses and geometries. The influence of age, sex and ethnic origins in human anatomy is considered. Suggestions are given on how suitable anatomical data can be either extracted from published information or obtained from measurements on the local population. Existing types of phantoms and computational models used with photons, electrons, protons and neutrons are reviewed in this Report. Specifications of those considered important to the maintenance and development of reliable radiation dosimetry and measurement are given. The information provided includes a description of the phantom or model, together with diagrams or photographs and physical dimensions. The tissues within body sections are identified and the tissue substitutes used or recommended are listed. The uses of the phantom or model in radiation dosimetry and measurement are outlined. The Report deals predominantly with phantom and computational models representing the human anatomy, with a short Section devoted to animal phantoms in radiobiology

  18. MODELS OF NEPTUNE-MASS EXOPLANETS: EMERGENT FLUXES AND ALBEDOS

    International Nuclear Information System (INIS)

    Spiegel, David S.; Burrows, Adam; Ibgui, Laurent; Hubeny, Ivan; Milsom, John A.

    2010-01-01

    There are now many known exoplanets with M sin i within a factor of 2 of Neptune's, including the transiting planets GJ 436b and HAT-P-11b. Planets in this mass range are different from their more massive cousins in several ways that are relevant to their radiative properties and thermal structures. By analogy with Neptune and Uranus, they are likely to have metal abundances that are an order of magnitude or more greater than those of larger, more massive planets. This increases their opacity, decreases Rayleigh scattering, and changes their equation of state. Furthermore, their smaller radii mean that fluxes from these planets are roughly an order of magnitude lower than those of otherwise identical gas giant planets. Here, we compute a range of plausible radiative equilibrium models of GJ 436b and HAT-P-11b. In addition, we explore the dependence of generic Neptune-mass planets on a range of physical properties, including their distance from their host stars, their metallicity, the spectral type of their stars, the redistribution of heat in their atmospheres, and the possible presence of additional optical opacity in their upper atmospheres.

  19. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    Full Text Available With the accumulation of next-generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of the protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of the protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data that are obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts to integrate experimental data into structural modeling is also provided.

  20. Unification of gauge couplings in radiative neutrino mass models

    DEFF Research Database (Denmark)

    Hagedorn, Claudia; Ohlsson, Tommy; Riad, Stella

    2016-01-01

    We investigate the possibility of gauge coupling unification in various radiative neutrino mass models, which generate neutrino masses at one- and/or two-loop level. Renormalization group running of gauge couplings is performed analytically and numerically at one- and two-loop order, respectively. We study three representative classes of radiative neutrino mass models: (I) minimal ultraviolet completions of the dimension-7 ΔL = 2 operators which generate neutrino masses at one- and/or two-loop level without and with dark matter candidates, (II) models with dark matter which lead to neutrino masses at one-loop level and (III) models with particles in the adjoint representation of SU(3). In class (I), gauge couplings unify in a few models and adding dark matter amplifies the chances for unification. In class (II), about a quarter of the models admits gauge coupling unification. In class (III)…

  1. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system. Where traditional research methodologies in the human cardiovascular system are challenging due to their invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  2. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  3. Heavy quark effective theory computation of the mass of the bottom quark

    International Nuclear Information System (INIS)

    Della Morte, M.; Papinutto, M.

    2006-10-01

    We present a fully non-perturbative computation of the mass of the b-quark in the quenched approximation. Our strategy starts from the matching of HQET to QCD in a finite volume and finally relates the quark mass to the spin-averaged mass of the B_s meson in HQET. All steps include the terms of order Λ²/m_b. We discuss the computation and renormalization of correlation functions at order 1/m_b. With the strange quark mass fixed from the kaon mass and the QCD scale set through r_0 = 0.5 fm, we obtain a renormalization group invariant mass M_b = 6.758(86) GeV or m̄_b(m̄_b) = 4.347(48) GeV in the MS scheme. The uncertainty in the computed Λ²/m_b terms contributes little to the total error and Λ³/m_b² terms are negligible. The strategy is promising for full QCD as well as for other B-physics observables. (orig.)

  4. Heavy quark effective theory computation of the mass of the bottom quark

    Energy Technology Data Exchange (ETDEWEB)

    Della Morte, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Garron, N.; Sommer, R. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Papinutto, M. [INFN Sezione di Roma Tre, Rome (Italy)

    2006-10-15

    We present a fully non-perturbative computation of the mass of the b-quark in the quenched approximation. Our strategy starts from the matching of HQET to QCD in a finite volume and finally relates the quark mass to the spin-averaged mass of the B_s meson in HQET. All steps include the terms of order Λ²/m_b. We discuss the computation and renormalization of correlation functions at order 1/m_b. With the strange quark mass fixed from the kaon mass and the QCD scale set through r_0 = 0.5 fm, we obtain a renormalization group invariant mass M_b = 6.758(86) GeV or m̄_b(m̄_b) = 4.347(48) GeV in the MS scheme. The uncertainty in the computed Λ²/m_b terms contributes little to the total error and Λ³/m_b² terms are negligible. The strategy is promising for full QCD as well as for other B-physics observables. (orig.)

  5. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different `resistive` fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  6. Computational fluid dynamics modelling in cardiovascular medicine.

    Science.gov (United States)

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited. For permission

  7. Models of neutrino masses and baryogenesis

    Indian Academy of Sciences (India)

    Majorana masses of the neutrino imply lepton number violation and are intimately related to the lepton asymmetry of the universe, which is in turn related to the baryon asymmetry of the universe in the presence of sphalerons during the electroweak phase transition. Assuming that the baryon asymmetry of the universe is ...

  8. Masses of particles in the SO(18) grand unified model

    International Nuclear Information System (INIS)

    Asatryan, G.M.

    1984-01-01

    The grand unified model based on the orthogonal group SO(18) is treated. The model involves four familiar and four mirror families of fermions. The generation of masses for familiar and mirror particles is studied. The mass of the right-handed W_R boson, which interacts via right-handed currents, is estimated.

  9. A Coupled Chemical and Mass Transport Model for Concrete Durability

    DEFF Research Database (Denmark)

    Jensen, Mads Mønster; Johannesson, Björn; Geiker, Mette Rica

    2012-01-01

    In this paper a general continuum theory is used to evaluate the service life of cement-based materials, in terms of mass transport processes and chemical degradation of the solid matrix. The model established is a reactive mass transport model, based on an extended version of the Poisson-Nernst-Planck ...

  10. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking, and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions…

  11. The deterministic computational modelling of radioactivity

    International Nuclear Information System (INIS)

    Damasceno, Ralf M.; Barros, Ricardo C.

    2009-01-01

    This paper describes a software application that models simple radioactive decay, decay to stable nuclei, and directly coupled decay chains of up to thirteen radioactive decays. An internal data bank holds the decay constants of the various known decays, which makes the program considerably easier to use for people who are not connected to the nuclear area. The paper presents numerical results for typical model problems.
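
    The chain-decay computation such a program performs can be sketched with the linear (Bateman-type) system of decay equations; the three-nuclide chain and the decay constants below are invented placeholders, not values from the program's data bank:

        import numpy as np
        from scipy.linalg import expm

        # Linear decay chain N1 -> N2 -> N3 (stable); dN/dt = A @ N.
        # Decay constants (1/s) are illustrative placeholders.
        lam1, lam2 = 1e-3, 5e-4
        A = np.array([
            [-lam1,   0.0, 0.0],   # N1 decays away
            [ lam1, -lam2, 0.0],   # N2 is fed by N1 and decays itself
            [ 0.0,   lam2, 0.0],   # N3 is stable
        ])

        N0 = np.array([1e6, 0.0, 0.0])   # initial number of atoms of each nuclide
        for t in (0.0, 1e3, 1e4):        # times in seconds
            N = expm(A * t) @ N0         # exact solution of the linear ODE system
            print(f"t = {t:8.0f} s  ->  N = {N.round(1)}")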

  12. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and a SWOT analysis of the cloud. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  13. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
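
    ADGEN's core idea, recording local partial derivatives during the forward calculation and then accumulating total derivatives by a backward (adjoint) sweep, can be sketched in a few lines. The toy reverse-mode differentiator below is a generic illustration of the adjoint technique, not ADGEN's FORTRAN implementation:

        import math

        class Var:
            """Toy reverse-mode AD node: stores a value plus the local partials
            linking it to its parents; grad() runs the adjoint (backward) sweep."""
            def __init__(self, value, parents=()):
                self.value, self.parents, self.adjoint = value, parents, 0.0

            def __mul__(self, other):
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

            def __add__(self, other):
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def sin(x):
            return Var(math.sin(x.value), [(x, math.cos(x.value))])

        def grad(output):
            # Topologically order the graph, then sweep adjoints backwards.
            order, seen = [], set()
            def visit(node):
                if id(node) not in seen:
                    seen.add(id(node))
                    for parent, _ in node.parents:
                        visit(parent)
                    order.append(node)
            visit(output)
            output.adjoint = 1.0
            for node in reversed(order):
                for parent, local in node.parents:
                    parent.adjoint += local * node.adjoint

        x, y = Var(2.0), Var(3.0)
        f = x * y + sin(x)             # f = x*y + sin(x)
        grad(f)
        print(x.adjoint, y.adjoint)    # df/dx = y + cos(x), df/dy = x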

  15. Computational model of collagen turnover in carotid arteries during hypertension.

    Science.gov (United States)

    Sáez, P; Peña, E; Tarbell, J M; Martínez, M A

    2015-02-01

    It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models dealing with the mechano-chemical approach are scarce. The present approach can be extended easily to study other aspects of bone remodeling or collagen degradation in heart diseases. The model can be divided into three different stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amounts of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in collagen content. Copyright © 2015 John Wiley & Sons, Ltd.
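
    The stimulus-driven turnover idea behind such models can be caricatured by a single-compartment rate law; this is a minimal sketch with invented rate constants and a simple over-stretch stimulus, not the paper's coupled finite element model:

        # Generic turnover law: dC/dt = k0*(1 + gain*s) - kd*C, where s is an
        # over-stretch stimulus raising synthesis above its baseline.
        k0 = 0.05      # baseline synthesis rate (1/day), placeholder
        kd = 0.05      # degradation rate (1/day), placeholder
        gain = 2.0     # sensitivity of synthesis to over-stretch, placeholder
        s = 0.15       # 15% over-stretch representing the hypertensive state

        C, dt = 1.0, 0.1                      # normalized collagen content; step (days)
        for _ in range(int(365 / dt)):        # integrate one year, forward Euler
            C += dt * (k0 * (1.0 + gain * s) - kd * C)
        print(f"collagen content after 1 year: {C:.2f}x baseline")   # -> ~1.30x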

  16. Computational Design Modelling : Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine the computational processes within this field with the aim of developing a broader and less exotic building practice, one that bears more subtle but powerful traces of the complex tool set and approaches developed and studied over recent years. The outcome is a set of new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  17. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis), in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  18. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    Science.gov (United States)

    Journal Article. Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung, with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways of the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted …

  19. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential φ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
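
    For orientation, one common statement of the incompressible MNSE couples (u, p, w) as below. The notation varies across the literature, the coefficients c_1, c_2 here are generic angular-viscosity parameters, and this is a standard textbook form rather than necessarily the dissertation's exact system:

        \partial_t u + (u\cdot\nabla)u - (\nu+\nu_r)\Delta u + \nabla p
            = 2\nu_r\,\nabla\times w + f, \qquad \nabla\cdot u = 0,
        j\,\big[\partial_t w + (u\cdot\nabla)w\big] - c_1\,\Delta w
            - c_2\,\nabla(\nabla\cdot w) + 4\nu_r\,w = 2\nu_r\,\nabla\times u + g,

    where ν is the kinematic viscosity, ν_r the vortex (micro-rotation) viscosity, j the micro-inertia, and f, g are body force and body torque.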

  20. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    Full Text Available The development of selective agonists of the δ-opioid receptor, as well as models of ligand interaction with this receptor, is a subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations for the tested models of DOR were found between the efficacy (e_rel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with e_rel of a series of enkephalin analogues, calculated from in vitro experiments. This investigation thus allows us to suggest a reliable model of DOR. The newly generated model could be used for further in silico experiments and will permit faster and more correct design of selective and effective ligands for the δ-opioid receptor.

  1. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  2. Computer models for optimizing radiation therapy

    International Nuclear Information System (INIS)

    Duechting, W.

    1998-01-01

    The aim of this contribution is to outline how methods of systems analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatment, for instance radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is represented by the so-called linear-quadratic model, which describes the survival fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed, making it possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation for an optimal patient-specific treatment schedule prior to clinical therapy. (orig.) [de]
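
    The linear-quadratic model mentioned above has a compact closed form, S(D) = exp(-(αD + βD²)). The sketch below evaluates it for a single dose versus a fractionated scheme; the α and β values are typical textbook magnitudes used only to illustrate the computation, not values from this paper:

        import math

        def survival_fraction(dose_gy: float, alpha: float, beta: float) -> float:
            """Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))."""
            return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

        # Fractionation: n independent fractions multiply their survival fractions.
        alpha, beta = 0.3, 0.03   # Gy^-1, Gy^-2; illustrative tumour-like values
        single = survival_fraction(10.0, alpha, beta)
        fractionated = survival_fraction(2.0, alpha, beta) ** 5   # 5 x 2 Gy
        print(f"single 10 Gy: S = {single:.3e}; 5 x 2 Gy: S = {fractionated:.3e}")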

  3. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  4. Mathematical modeling and computational prediction of cancer drug resistance.

    Science.gov (United States)

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of

  5. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

    Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water-cooling parameters and a construction parameter are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated and not well understood. Some attempts have been made to understand and formulate these relationships taking account of the thermal properties of the concrete and the cooling-water parameters. In this study, an effort has been made to formulate the relationship taking account of the thermal properties of the concrete, the water-cooling parameters and a construction parameter, with the help of two soft computing techniques, namely genetic programming (GP, using the software "Eureqa") and an artificial neural network (ANN). Relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The correlation coefficient R between predicted and measured cooling times is 0.8822 for the GP model and 0.9146 for the ANN model. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared with the measured data.
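
    As a sketch of the ANN half of such a study, the snippet below fits a small feed-forward network to synthetic data. The feature names are hypothetical stand-ins for the thermal, cooling-water and construction parameters the paper uses, and the target function is invented:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical inputs: [placing temperature (C), pipe spacing (m), lift height (m)]
        X = rng.uniform([20, 0.5, 1.0], [35, 2.0, 3.0], size=(200, 3))
        # Synthetic target: cooling time (days), an arbitrary smooth function plus noise
        y = 2.0 * X[:, 0] + 15.0 * X[:, 1] + 8.0 * X[:, 2] + rng.normal(0, 2.0, 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
        model.fit(X_tr, y_tr)
        print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")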

  6. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    Science.gov (United States)

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…
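
    A representative numerical problem of the kind described, transient 1D diffusion solved by explicit finite differences, might look like the sketch below (in Python rather than MATLAB; the grid, diffusivity and boundary values are illustrative):

        import numpy as np

        # Transient 1D diffusion: dC/dt = D * d2C/dx2, fixed ends C(0)=1, C(L)=0.
        D, L, nx = 1e-9, 1e-3, 51            # m^2/s, m, grid points
        x = np.linspace(0.0, L, nx)
        dx = x[1] - x[0]
        dt = 0.4 * dx**2 / D                 # explicit stability: dt <= dx^2/(2D)

        C = np.zeros(nx)
        C[0] = 1.0                           # boundary conditions (C[-1] stays 0)
        for _ in range(20000):               # march until near steady state
            C[1:-1] += dt * D * (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2

        print(f"midpoint concentration: {C[nx//2]:.3f}")   # -> ~0.5 (linear profile)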

  7. State of the art of numerical modeling of thermohydrologic flow in fractured rock mass

    International Nuclear Information System (INIS)

    Wang, J.S.Y.; Tsang, C.F.; Sterbentz, R.A.

    1983-01-01

    The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses is reviewed and a comparative study is made of several models which have been developed in nuclear waste isolation, geothermal energy, ground-water hydrology, petroleum engineering, and other geologic fields. The general review is followed by separate summaries of the main characteristics of the governing equations, numerical solutions, computer codes, validations, and applications for each model

  8. The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses

    International Nuclear Information System (INIS)

    Wang, J.S.Y.; Sterbentz, R.A.; Tsang, C.F.

    1982-01-01

    The state of the art of numerical modeling of thermohydrologic flow in fractured rock masses is reviewed and a comparative study is made of several models which have been developed in nuclear waste isolation, geothermal energy, ground water hydrology, petroleum engineering, and other geologic fields. The general review is followed by individual summaries of each model and the main characteristics of its governing equations, numerical solutions, computer codes, validations, and applications

  9. The simultaneous mass and energy evaporation (SM2E) model.

    Science.gov (United States)

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate systematic under- or over-prediction; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.
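
    The mass-transfer side of such a model typically rests on a Sherwood-number correlation. The sketch below uses the classic laminar flat-plate form, Sh = 0.664 Re^(1/2) Sc^(1/3), with illustrative property values; it is a generic film model, not the calibrated SM2E equations:

        def evaporation_rate(u, L, D_ab, nu, c_sat, area):
            """Film-model evaporation rate (kg/s) over a flat surface of length L (m).

            u: air speed (m/s), D_ab: vapour diffusivity (m^2/s),
            nu: air kinematic viscosity (m^2/s),
            c_sat: saturation vapour concentration (kg/m^3), far field ~ 0.
            """
            Re = u * L / nu
            Sc = nu / D_ab
            Sh = 0.664 * Re**0.5 * Sc**(1.0 / 3.0)   # laminar flat-plate correlation
            k_m = Sh * D_ab / L                      # mass-transfer coefficient (m/s)
            return k_m * area * c_sat

        # Illustrative: water-like vapour, 0.5 m/s airflow over a 0.3 m x 0.3 m pan
        print(f"{evaporation_rate(0.5, 0.3, 2.5e-5, 1.5e-5, 0.02, 0.09):.2e} kg/s")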

  10. Climate Change Discourse in Mass Media: Application of Computer-Assisted Content Analysis

    Science.gov (United States)

    Kirilenko, Andrei P.; Stepchenkova, Svetlana O.

    2012-01-01

    Content analysis of mass media publications has become a major scientific method used to analyze public discourse on climate change. We propose a computer-assisted content analysis method to extract prevalent themes and analyze discourse changes over an extended period in an objective and quantifiable manner. The method includes the following: (1)…

  11. Models of neutrino masses: Anarchy versus hierarchy

    International Nuclear Information System (INIS)

    Altarelli, Guido; Feruglio, Ferruccio; Masina, Isabella

    2003-01-01

    We present a quantitative study of the ability of models with different levels of hierarchy to reproduce the solar neutrino solutions, in particular the LA solution. As a flexible testing ground we consider models based on SU(5)×U(1)_F. In this context, we have made statistical simulations of models with different patterns from anarchy to various types of hierarchy: normal hierarchical models with and without automatic suppression of the 23 (sub)determinant and inverse hierarchy models. We find that, not only for the LOW or VO solutions, but even in the LA case, the hierarchical models have a significantly better success rate than those based on anarchy. The normal hierarchy and the inverse hierarchy models have comparable performances in models with see-saw dominance, while the inverse hierarchy models are particularly good in the no see-saw versions. As a possible distinction between these categories of models, the inverse hierarchy models favour a maximal solar mixing angle and their rate of success drops dramatically as the mixing angle decreases, while normal hierarchy models are far more stable in this respect. (author)

  12. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
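
    The evolutionary layout heuristic in stage one can be sketched as a permutation search minimizing flow-weighted travel between gloveboxes. The grid, flow weights and GA settings below are invented for illustration and stand in for the paper's actual formulation:

        import random

        random.seed(1)
        N = 6                                       # gloveboxes to place in 6 slots
        # Hypothetical material-flow weights between gloveboxes (symmetric)
        flow = [[0] * N for _ in range(N)]
        for i in range(N):
            for j in range(i + 1, N):
                flow[i][j] = flow[j][i] = random.randint(0, 9)
        slot = [(i % 3, i // 3) for i in range(N)]  # slot coordinates on a 3x2 grid

        def cost(perm):
            """Total flow-weighted rectilinear distance for a placement."""
            return sum(flow[a][b] * (abs(slot[perm[a]][0] - slot[perm[b]][0])
                                     + abs(slot[perm[a]][1] - slot[perm[b]][1]))
                       for a in range(N) for b in range(a + 1, N))

        pop = [random.sample(range(N), N) for _ in range(30)]
        for gen in range(200):
            pop.sort(key=cost)
            pop = pop[:10]                          # keep the fittest layouts
            while len(pop) < 30:                    # refill with mutated copies
                child = random.choice(pop[:10])[:]
                i, j = random.sample(range(N), 2)   # swap two placements
                child[i], child[j] = child[j], child[i]
                pop.append(child)

        best = min(pop, key=cost)
        print("best cost:", cost(best), "layout:", best)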

  14. Computer models in the design of FXR

    International Nuclear Information System (INIS)

    Vogtlin, G.; Kuenning, R.

    1980-01-01

    Lawrence Livermore National Laboratory is developing a 15 to 20 MeV electron accelerator with a beam current goal of 4 kA. This accelerator will be used for flash radiography and has a requirement of high reliability. Components being developed include spark gaps, Marx generators, water Blumleins and oil insulation systems. A SCEPTRE model was developed that takes into consideration the non-linearity of the ferrite and the time dependency of the emission from a field emitter cathode. This model was used to predict an optimum charge time to obtain maximum magnetic flux change from the ferrite. This model and its application will be discussed. JASON was used extensively to determine optimum locations and shapes of supports and insulators. It was also used to determine stress within bubbles adjacent to walls in oil. Computer results will be shown and bubble breakdown will be related to bubble size

  15. Computational modeling of a forward lunge

    DEFF Research Database (Denmark)

    Alkjær, Tine; Wieland, Maja Rose; Andersen, Michael Skipper

    2012-01-01

    …during forward lunging. Thus, the purpose of the present study was to establish a musculoskeletal model of the forward lunge to computationally investigate the complete mechanical force equilibrium of the tibia during the movement and to examine the loading pattern of the cruciate ligaments. A healthy female was selected from a group of healthy subjects who all performed a forward lunge on a force platform, targeting a knee flexion angle of 90°. Skin markers were placed on anatomical landmarks on the subject and the movement was recorded by five video cameras. The three-dimensional kinematic data describing the forward lunge movement were extracted and used to develop a biomechanical model of the lunge movement. The model comprised two legs, including femur, crus, rigid foot segments and the pelvis. Each leg had 35 independent muscle units, which were recruited according to a minimum-fatigue criterion…

  16. Evolution models of helium white dwarf--main-sequence star merger remnants: the mass distribution of single low-mass white dwarfs

    OpenAIRE

    Zhang, Xianfei; Hall, Philip D.; Jeffery, C. Simon; Bi, Shaolan

    2017-01-01

    It is not known how single white dwarfs with masses less than 0.5 Msolar -- low-mass white dwarfs -- are formed. One way in which such a white dwarf might be formed is after the merger of a helium-core white dwarf with a main-sequence star that produces a red giant branch star and fails to ignite helium. We use a stellar-evolution code to compute models of the remnants of these mergers and find a relation between the pre-merger masses and the final white dwarf mass. Combining our results with ...

  17. Computational fluid dynamic modelling of cavitation

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  18. A computation method for mass flowrate predictions in critical flows of initially subcooled liquid in long channels

    International Nuclear Information System (INIS)

    Celata, G.P.; D'Annibale, F.; Farello, G.E.

    1985-01-01

    A fast and accurate computation method is suggested for predicting the mass flowrate in critical flows of initially subcooled liquid from 'long' discharge channels (high L/D values). Starting from a very simple correlation previously proposed by the authors, further improvements in the model widen the method's reliability up to initial saturation conditions. A comparison of computed values with 145 experimental data points from several investigations carried out at the Heat Transfer Laboratory (TERM/ISP, ENEA Casaccia) shows excellent agreement. The deviation of computed from experimental values is within ±10% for almost all data, with a slight increase towards low inlet subcoolings. The average error, for all the considered data, is 4.6%.

  19. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  1. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations
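
    The max-of-limits logic at RSMASS's core is easy to sketch. The limit functions and coefficients below are invented placeholders, not the 24 reactor parameters or actual approximations of the code:

        def reactor_fuel_mass(power_mw: float) -> float:
            """Fuel mass (kg) = the largest of three simple limit estimates.

            The three limit models are illustrative stand-ins for RSMASS's
            neutronics, thermal/hydraulic, and fuel-damage approximations.
            """
            neutronics = 150.0 + 0.8 * power_mw     # critical-mass-driven floor
            thermal = 2.5 * power_mw                # heat-removal-limited mass
            damage = 40.0 * power_mw ** 0.5         # burnup/damage-limited mass
            return max(neutronics, thermal, damage)

        for p in (1.0, 10.0, 100.0):
            print(f"{p:6.1f} MW -> fuel mass ~ {reactor_fuel_mass(p):7.1f} kg")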

  2. Effects of confinement on rock mass modulus: A synthetic rock mass modelling (SRM) study

    Directory of Open Access Journals (Sweden)

    I. Vazaios

    2018-06-01

    Full Text Available The main objective of this paper is to examine the influence of the applied confining stress on the rock mass modulus of moderately jointed rocks (a well-interlocked, undisturbed rock mass with blocks formed by three or fewer intersecting joints). A synthetic rock mass modelling (SRM) approach is employed to determine the mechanical properties of the rock mass. In this approach, the intact body of rock is represented by discrete element method (DEM) Voronoi grains with the ability to simulate the initiation and propagation of microcracks within the intact part of the model. The geometry of the pre-existing joints is generated by employing discrete fracture network (DFN) modelling based on field joint data collected from the Brockville Tunnel using LiDAR scanning. The geometrical characteristics of the simulated joints at a representative sample size are first validated against the field data, and then used to measure the rock quality designation (RQD), joint spacing, areal fracture intensity (P21), and block volumes. These geometrical quantities are used to quantitatively determine a representative range of the geological strength index (GSI). The results show that estimating the GSI using the RQD tends to make a closer estimate of the degree of blockiness, leading to GSI values corresponding to those obtained from direct visual observations of the rock mass conditions in the field. The use of joint spacing and block volume to quantify the GSI value range for the studied rock mass suggests a lower range compared to that evaluated in situ. Based on numerical modelling results and laboratory data of rock testing reported in the literature, a semi-empirical equation is proposed that relates the rock mass modulus to confinement as a function of the areal fracture intensity and joint stiffness. Keywords: Synthetic rock mass modelling (SRM), Discrete fracture network (DFN), Rock mass modulus, Geological strength index (GSI), Confinement

  3. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and can model salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time, significant advancements have taken place in the requirements for safety assessment as well as in computer hardware. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  4. Computational Models Used to Assess US Tobacco Control Policies.

    Science.gov (United States)

    Feirman, Shari P; Glasser, Allison M; Rose, Shyanika; Niaura, Ray; Abrams, David B; Teplitskaya, Lyubov; Villanti, Andrea C

    2017-11-01

    Simulation models can be used to evaluate existing and potential tobacco control interventions, including policies. The purpose of this systematic review was to synthesize evidence from computational models used to project population-level effects of tobacco control interventions. We provide recommendations to strengthen simulation models that evaluate tobacco control interventions. Studies were eligible for review if they employed a computational model to predict the expected effects of a non-clinical US-based tobacco control intervention. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Six primary non-clinical intervention types were examined across the 40 studies: taxation, youth prevention, smoke-free policies, mass media campaigns, marketing/advertising restrictions, and product regulation. Simulation models demonstrated the independent and combined effects of these interventions on decreasing projected future smoking prevalence. Taxation effects were the most robust, as studies examining other interventions exhibited substantial heterogeneity with regard to the outcomes and specific policies examined across models. Models should project the impact of interventions on overall tobacco use, including nicotine delivery product use, to estimate preventable health and cost-saving outcomes. Model validation, transparency, more sophisticated models, and modeling policy interactions are also needed to inform policymakers to make decisions that will minimize harm and maximize health. In this systematic review, evidence from multiple studies demonstrated the independent effect of taxation on decreasing future smoking prevalence, and models for other tobacco control interventions showed that these strategies are expected to decrease smoking, benefit population health, and are reasonable to implement from a cost perspective. Our recommendations aim to help policymakers and researchers minimize harm and

  5. Computational model of a whole tree combustor

    Energy Technology Data Exchange (ETDEWEB)

    Bryden, K.M.; Ragland, K.W. [Univ. of Wisconsin, Madison, WI (United States)

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs 15 cm in diameter are burned in a 4 m deep fuel bed. Solid and gas temperature, solid and gas velocity, and CO, CO₂, H₂O, HC and O₂ profiles are calculated. This deep, fixed-bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the overall bed is an oxidizing region and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m²-hr, which matches well the consumption rate of 3,770 kg/m²-hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m² (5.0×10⁶ Btu/hr-ft²).

  6. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
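
    A minimal example of the MILP style the book advocates, here a toy assignment of tasks to processing units using the PuLP library; the cost and capacity data are invented and the formulation is a generic illustration, not one of the book's case studies:

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

        tasks, units = range(3), range(2)
        cost = [[4, 7], [6, 3], [5, 5]]   # invented cost of running task t on unit u
        cap = [2, 2]                      # each unit can host at most 2 tasks

        prob = LpProblem("toy_mapping", LpMinimize)
        x = {(t, u): LpVariable(f"x_{t}_{u}", cat=LpBinary)
             for t in tasks for u in units}

        prob += lpSum(cost[t][u] * x[t, u] for t in tasks for u in units)
        for t in tasks:                   # every task mapped exactly once
            prob += lpSum(x[t, u] for u in units) == 1
        for u in units:                   # respect unit capacity
            prob += lpSum(x[t, u] for t in tasks) <= cap[u]

        prob.solve(PULP_CBC_CMD(msg=False))
        print({(t, u): int(x[t, u].value()) for t in tasks for u in units})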

  7. Advanced computational modeling for in vitro nanomaterial dosimetry.

    Science.gov (United States)

    DeLoid, Glen M; Cohen, Joel M; Pyrgiotakis, Georgios; Pirela, Sandra V; Pal, Anoop; Liu, Jiying; Srebric, Jelena; Demokritou, Philip

    2015-10-24

    Accurate and meaningful dose metrics are a basic requirement for in vitro screening to assess potential health risks of engineered nanomaterials (ENMs). Correctly and consistently quantifying what cells "see" during an in vitro exposure requires standardized preparation of stable ENM suspensions, accurate characterization of agglomerate sizes and effective densities, and predictive modeling of mass transport. Earlier transport models provided a marked improvement over administered concentration or total mass, but included assumptions that could produce sizable inaccuracies, most notably that all particles at the bottom of the well are adsorbed or taken up by cells, which would drive transport downward, resulting in overestimation of deposition. Here we present the development, validation and results of two robust computational transport models. Both a three-dimensional computational fluid dynamics (CFD) model and a newly-developed one-dimensional Distorted Grid (DG) model were used to estimate delivered dose metrics for industry-relevant metal oxide ENMs suspended in culture media. Both models allow simultaneous modeling of full size distributions for polydisperse ENM suspensions, and provide deposition metrics as well as concentration metrics over the extent of the well. The DG model also emulates the biokinetics at the particle-cell interface using a Langmuir isotherm, governed by a user-defined dissociation constant, K_D, and allows modeling of ENM dissolution over time. Dose metrics predicted by the two models were in remarkably close agreement. The DG model was also validated by quantitative analysis of flash-frozen, cryosectioned columns of ENM suspensions. Results of simulations based on agglomerate size distributions differed substantially from those obtained using mean sizes. The effect of cellular adsorption on delivered dose was negligible for K_D values consistent with non-specific binding (> 1 nM), whereas smaller values (≤ 1 nM) typical of specific high
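
    A stripped-down version of the 1D transport such models solve, sedimentation plus diffusion down a media column with deposition at the bottom, can be sketched as below. Particle properties and grid are illustrative, and the bottom boundary here is perfectly absorbing rather than the DG model's Langmuir kinetics:

        import numpy as np

        # 1D sedimentation-diffusion of ENM agglomerates in a media column.
        h, n = 3e-3, 60            # column height (m), grid cells
        dz = h / n
        D = 4e-12                  # diffusion coefficient (m^2/s), illustrative
        v = 2e-9                   # settling velocity (m/s), illustrative
        dt = 0.2 * dz**2 / D       # stable explicit time step

        c = np.ones(n)             # normalized concentration profile
        deposited = 0.0
        for _ in range(20000):     # ~1 month of simulated time at this dt
            flux = v * c                       # downward advective flux (upwind)
            deposited += flux[-1] * dt / dz    # bottom cell: absorbing sink
            adv = np.zeros(n)
            adv[1:] += flux[:-1] * dt / dz     # material arriving from above
            adv -= flux * dt / dz              # material leaving each cell
            diff = np.zeros(n)                 # interior diffusion only (simplified)
            diff[1:-1] = D * dt / dz**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c += adv + diff

        print(f"fraction deposited: {deposited / n:.3f}")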

  8. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Full Text Available Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial model of computer virus propagation dynamics relates it to other notable events occurring in the network, permitting preventive policies to be established in network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
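
    Epidemic-style compartment models are a common baseline for virus-propagation dynamics. The SIR sketch below is a generic illustration of that family, with invented rates, not the autoregressive/Fourier identification the paper actually applies:

        # SIR-style compartments for machines: Susceptible, Infected, Recovered.
        beta, gamma = 0.4, 0.1     # infection and disinfection rates (1/day), invented
        S, I, R = 0.99, 0.01, 0.0  # fractions of the network
        dt, days = 0.01, 120

        for _ in range(int(days / dt)):    # forward Euler integration
            new_inf = beta * S * I * dt
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

        print(f"after {days} days: S={S:.3f}, I={I:.3f}, R={R:.3f}")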

  9. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  10. A computer model for one-dimensional mass and energy transport in and around chemically reacting particles, including complex gas-phase chemistry, multicomponent molecular diffusion, surface evaporation, and heterogeneous reaction

    Science.gov (United States)

    Cho, S. Y.; Yetter, R. A.; Dryer, F. L.

    1992-01-01

    Various chemically reacting flow problems, highlighting chemical and physical fundamentals rather than flow geometry, are investigated by means of a comprehensive mathematical model that incorporates multicomponent molecular diffusion, complex chemistry, and heterogeneous processes, in the interest of obtaining sensitivity-related information. The sensitivity equations were decoupled from the model equations and integrated one time step behind them; analytical Jacobian matrices were applied to improve the accuracy of the sensitivity coefficients, which are calculated together with the model solutions.
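
    The staggered scheme described here, in which the sensitivity equations are advanced one step behind the model equations using an analytic Jacobian, can be illustrated on a toy decay equation; the model, step size, and parameter values below are placeholders, not the paper's system.

```python
import numpy as np

# Toy system: first-order decay y' = -k*y. The sensitivity s = dy/dk obeys
# s' = J*s + df/dk, with analytic Jacobian J = df/dy = -k and df/dk = -y.
k, dt, nsteps = 0.5, 0.01, 1000
y, s = 1.0, 0.0
for _ in range(nsteps):
    y_prev = y
    y += dt * (-k * y)            # advance the model equation first
    s += dt * (-k * s - y_prev)   # sensitivity integrated one step behind,
                                  # reusing the model state and Jacobian
t = dt * nsteps
print(f"y({t:.0f})  = {y:.5f}   (exact {np.exp(-k * t):.5f})")
print(f"dy/dk = {s:.5f}   (exact {-t * np.exp(-k * t):.5f})")
```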

  11. Effects of Playing a Serious Computer Game on Body Mass Index and Nutrition Knowledge in Women.

    Science.gov (United States)

    Shiyko, Mariya; Hallinan, Sean; Seif El-Nasr, Magy; Subramanian, Shree; Castaneda-Sceppa, Carmen

    2016-06-02

    Obesity and weight gain are critical public health concerns. Serious digital games are gaining popularity in the context of health interventions. They use persuasive and fun design features to engage users in health-related behaviors in a non-game context. As a young field, research about the effectiveness and acceptability of such games for weight loss is sparse. The goal of this study was to evaluate real-world play patterns of SpaPlay and its impact on body mass index (BMI) and nutritional knowledge. SpaPlay is a computer game designed to help women adopt healthier dietary and exercise behaviors, developed based on Self-Determination theory and the Player Experience of Need Satisfaction (PENS) model. Progress in the game is tied to real-life activities (e.g., eating a healthy snack, taking a flight of stairs). We recruited 47 women to partake in a within-subject 90-day longitudinal study, with assessments taken at baseline, 1, 2, and 3 months. Women were, on average, 29.8 years old (±7.3), highly educated (80.9% had a BA or higher), 39% non-White, with a baseline BMI of 26.98 (±5.6), who reported at least contemplating making changes in their diet and exercise routine based on the Stages of Change Model. We computed 9 indices from game utilization data to evaluate game play. We used general linear models to examine inter-individual differences between levels of play, and multilevel models to assess temporal changes in BMI and nutritional knowledge. Patterns of game play were mixed. Participants who reported being in the preparation or action stages of behavior change exhibited more days of play and more play regularity compared to those who were in the contemplation stage. Additionally, women who reported playing video games 1-2 hours per session demonstrated more sparse game play. Brief activities, such as one-time actions related to physical activity or healthy food, were preferred over activities that require a longer commitment (e.g., taking stairs every day for a week

  12. Getting computer models to communicate; Faire communiquer les modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, Ch. [Electricite de France (EDF), 75 - Paris (France). Dept. Mecanique et Modeles Numeriques; Erhard, P. [Electricite de France (EDF), 75 - Paris (France). Dept. Physique des Reacteurs

    1999-07-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  13. Mass corrections to Green functions in instanton vacuum model

    International Nuclear Information System (INIS)

    Esaibegyan, S.V.; Tamaryan, S.N.

    1987-01-01

    The first nonvanishing mass corrections to the effective Green functions are calculated in a model of the instanton-based vacuum consisting of a superposition of instanton-antiinstanton fluctuations. The meson current correlators are calculated taking these corrections into account; the mass spectrum of the pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs

  14. Systematics of quark mass matrices in the standard electroweak model

    International Nuclear Information System (INIS)

    Frampton, P.H.; Jarlskog, C.; Stockholm Univ.

    1985-01-01

    It is shown that the quark mass matrices in the standard electroweak model satisfy the empirical relation M = M′ + O(λ²), where M (M′) refers to the mass matrix of the charge 2/3 (−1/3) quarks normalized to the largest eigenvalue, m_t (m_b), and λ = V_us ≈ 0.22. (orig.)

  15. Evolution, Nucleosynthesis, and Yields of AGB Stars at Different Metallicities. III. Intermediate-mass Models, Revised Low-mass Models, and the ph-FRUITY Interface

    Science.gov (United States)

    Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D.

    2015-08-01

    We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (-2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the 22Ne(α,n)25Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).

  16. EVOLUTION, NUCLEOSYNTHESIS, AND YIELDS OF AGB STARS AT DIFFERENT METALLICITIES. III. INTERMEDIATE-MASS MODELS, REVISED LOW-MASS MODELS, AND THE ph-FRUITY INTERFACE

    Energy Technology Data Exchange (ETDEWEB)

    Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D. [INAF-Osservatorio Astronomico di Collurania, I-64100 Teramo (Italy)

    2015-08-15

    We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (−2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the ²²Ne(α,n)²⁵Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).

  17. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Full Text Available Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through the analysis of the model, the disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
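
    A minimal sketch of an SIR-type virus model with inflow of new computers and removal of old ones, of the kind analyzed above, is given below; the parameter values are hypothetical, and the reproduction-number and equilibrium expressions follow from the sketched equations rather than from the paper's exact system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates: b = inflow of new computers, mu = removal of old ones,
# beta = infection rate, gamma = cure rate provided by antivirus software.
b, mu, beta, gamma = 0.1, 0.1, 0.5, 0.2

def rhs(t, y):
    S, I = y
    return [b - beta * S * I - mu * S,
            beta * S * I - (gamma + mu) * I]

R0 = (b / mu) * beta / (gamma + mu)       # basic reproduction number
print(f"R0 = {R0:.2f} (endemic equilibrium exists when R0 > 1)")

sol = solve_ivp(rhs, (0.0, 300.0), [0.99, 0.01], max_step=1.0)
print(f"infected fraction at long times: {sol.y[1, -1]:.3f}")

# Analytic endemic equilibrium of this toy system for comparison
S_star = (gamma + mu) / beta
print(f"predicted endemic level: {(b - mu * S_star) / (beta * S_star):.3f}")
```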

  18. Modeling Reality: How Computers Mirror Life

    International Nuclear Information System (INIS)

    Inoue, J-I

    2005-01-01

    Modeling Reality: How Computers Mirror Life covers a wide range of modern subjects in complex systems, suitable not only for undergraduate students who want to learn about modelling 'reality' by using computer simulations, but also for researchers who want to learn something about subjects outside of their majors and need a simple guide. Readers are not required to have specialized training before they start the book. Each chapter is organized so as to train the reader to grasp the essential idea of simulating phenomena and guide him/her towards more advanced areas. The topics presented in this textbook fall into two categories. The first is at graduate level, namely probability, statistics, information theory, graph theory, and the Turing machine, which are standard topics in the course of information science and information engineering departments. The second addresses more advanced topics, namely cellular automata, deterministic chaos, fractals, game theory, neural networks, and genetic algorithms. Several topics included here (neural networks, game theory, information processing, etc) are now some of the main subjects of statistical mechanics, and many papers related to these interdisciplinary fields are published in Journal of Physics A: Mathematical and General, so readers of this journal will be familiar with the subject areas of this book. However, each area is restricted to an elementary level and if readers wish to know more about the topics they are interested in, they will need more advanced books. For example, on neural networks, the text deals with the back-propagation algorithm for perceptron learning. Nowadays, however, this is a rather old topic, so the reader might well choose, for example, Introduction to the Theory of Neural Computation by J Hertz et al (Perseus books, 1991) or Statistical Physics of Spin Glasses and Information Processing by H Nishimori (Oxford University Press, 2001) for further reading. Nevertheless, this book is worthwhile

  19. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  1. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365

  2. A computational model of motor neuron degeneration.

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  4. Computer-aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches

    International Nuclear Information System (INIS)

    Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.

    2008-01-01

    The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBT). Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, prescreening of mass candidates and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained for the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches, described above, at the corresponding 3D location are combined and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory in the Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the
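
    The FROC curve generation described above can be sketched as follows, assuming each candidate carries a likelihood score and a true-positive label and that at most one candidate is credited per true mass; the data below are synthetic.

```python
import numpy as np

def froc_curve(scores, is_tp, n_cases, n_masses):
    """FROC points: sensitivity vs. mean FPs per case as the threshold is lowered.

    scores : likelihood score of every mass candidate in the test set
    is_tp  : bool per candidate, True if it matches a true mass
             (simplification: at most one candidate credited per true mass)
    """
    order = np.argsort(scores)[::-1]       # sort candidates by descending score
    tp = np.asarray(is_tp)[order]
    tp_cum = np.cumsum(tp)                 # true positives kept at each threshold
    fp_cum = np.cumsum(~tp)                # false positives kept at each threshold
    return fp_cum / n_cases, tp_cum / n_masses

# Tiny synthetic example: 6 candidates over 2 cases containing 2 true masses
fp_rate, sens = froc_curve(
    scores=[0.9, 0.8, 0.7, 0.6, 0.4, 0.2],
    is_tp=[True, False, False, True, False, False],
    n_cases=2, n_masses=2)
for f, s in zip(fp_rate, sens):
    print(f"{s:.0%} sensitivity at {f:.1f} FPs per case")
```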

  5. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas of pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  6. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first

  7. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  8. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...
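
    As one concrete instance of the CVaR-based formulations mentioned above, the sketch below minimizes scenario CVaR using the standard Rockafellar-Uryasev linearization and scipy's linprog; the scenario data and confidence level are illustrative assumptions, and this is not code from the book or its SLP-IOR software.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n, alpha = 500, 4, 0.95
r = rng.normal(0.05, 0.10, (S, n))       # synthetic scenario returns

# Decision vector z = [x (n weights), t (VaR level), u (S slack vars)];
# minimize  t + (1/((1-alpha)*S)) * sum(u)  subject to  u_s >= loss_s - t,
# where loss_s = -r_s @ x (Rockafellar-Uryasev CVaR linearization).
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
A_ub = np.hstack([-r, -np.ones((S, 1)), -np.eye(S)])   # -r_s@x - t - u_s <= 0
b_ub = np.zeros(S)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]  # sum(x) = 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", np.round(res.x[:n], 3), " minimized CVaR:", round(res.fun, 4))
```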

  9. A mass-conserving multiphase lattice Boltzmann model for simulation of multiphase flows

    Science.gov (United States)

    Niu, Xiao-Dong; Li, You; Ma, Yi-Ren; Chen, Mu-Feng; Li, Xiang; Li, Qiao-Zhong

    2018-01-01

    In this study, a mass-conserving multiphase lattice Boltzmann (LB) model is proposed for simulating multiphase flows. The model developed in the present study improves the model of Shao et al. ["Free-energy-based lattice Boltzmann model for simulation of multiphase flows with density contrast," Phys. Rev. E 89, 033309 (2014)] by introducing a mass correction term in the lattice Boltzmann model for the interface. The model of Shao et al. (the improved Zheng-Shu-Chew (Z-S-C) model) correctly considers the effect of the local density variation in the momentum equation and has an obvious improvement over the Zheng-Shu-Chew (Z-S-C) model ["A lattice Boltzmann model for multiphase flows with large density ratio," J. Comput. Phys. 218(1), 353-371 (2006)] in terms of solution accuracy. However, due to the physical diffusion and numerical dissipation, the total mass of each fluid phase cannot be conserved correctly. To solve this problem, a mass correction term, which is similar to the one proposed by Wang et al. ["A mass-conserved diffuse interface method and its application for incompressible multiphase flows with large density ratio," J. Comput. Phys. 290, 336-351 (2015)], is introduced into the lattice Boltzmann equation for the interface to compensate for the mass losses or offset the mass increase. Meanwhile, to implement the wetting boundary condition and the contact angle, a geometric formulation and a local force are incorporated into the present mass-conserving LB model. The proposed model is validated by verifying the Laplace law, simulating both one and two aligned droplets splashing onto a liquid film, droplets standing on an ideal wall, droplets with different wettability splashing onto smooth wax, and bubbles rising under buoyancy. Numerical results show that the proposed model can correctly simulate multiphase flows. It was found that the mass is well conserved in all cases considered by the model developed in the present study. The developed
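
    The idea of a mass correction term confined to the interface can be sketched as follows; weighting the correction by |∇φ| is one simple choice inspired by, but not identical to, the published correction term.

```python
import numpy as np

def correct_mass(phi, target_mass, eps=1e-12):
    """Add a correction, concentrated at the interface, so that the total
    'mass' of the order parameter phi returns to its initial value.

    The weight |grad(phi)| confines the correction to the diffuse interface,
    in the spirit of the correction-term approach described above.
    """
    gy, gx = np.gradient(phi)
    w = np.sqrt(gx**2 + gy**2)             # interface indicator
    drift = phi.sum() - target_mass        # mass gained (+) or lost (-)
    return phi - drift * w / (w.sum() + eps)

# Example: a circular droplet field on a 128x128 grid with 1% numerical drift
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
phi0 = 0.5 * (1 - np.tanh((np.sqrt(x**2 + y**2) - 0.4) / 0.05))
phi_drifted = phi0 * 1.01                  # emulate mass gain from dissipation
phi_fixed = correct_mass(phi_drifted, phi0.sum())
print(abs(phi_fixed.sum() - phi0.sum()))   # ~0: total mass restored
```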

  10. Thermal modelling of Advanced LIGO test masses

    International Nuclear Information System (INIS)

    Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B

    2017-01-01

    High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high reflectivity coatings the mirrors suffer from a variety of thermal effects that impact on the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater. (paper)

  11. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The...

  12. Radiative neutrino mass model with degenerate right-handed neutrinos

    International Nuclear Information System (INIS)

    Kashiwase, Shoichi; Suematsu, Daijiro

    2016-01-01

    The radiative neutrino mass model can relate neutrino masses and dark matter at a TeV scale. If we apply this model to thermal leptogenesis, we need to consider resonant leptogenesis at that scale. It requires both finely degenerate masses for the right-handed neutrinos and a tiny neutrino Yukawa coupling. We propose an extension of the model with a U(1) gauge symmetry, in which these conditions are shown to be simultaneously realized through a TeV scale symmetry breaking. Moreover, this extension can bring about a small quartic scalar coupling between the Higgs doublet scalar and an inert doublet scalar which characterizes the radiative neutrino mass generation. It is also the origin of the Z₂ symmetry which guarantees the stability of dark matter. Several assumptions which are independently supposed in the original model are closely connected through this extension. (orig.)

  13. Hubble induced mass after inflation in spectator field models

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Tomohiro [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    Spectator field models such as the curvaton scenario and the modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble induced mass by the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble induced mass relaxes the constraint on the property of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implication of our discussion for baryogenesis in the curvaton scenario. In the modulated reheating, the predictions of models, e.g., the non-gaussianity, can be considerably altered. Furthermore, we propose a new model of the modulated reheating utilizing the Hubble induced mass which realizes a wide range of the local non-gaussianity parameter.

  14. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  15. A fast mass spring model solver for high-resolution elastic objects

    Science.gov (United States)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells as cages through the mean value coordinate method to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
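
    Replacing the Cholesky factorization with conjugate gradients, as proposed above, amounts to solving the symmetric positive-definite global-step system iteratively; below is a generic matrix-free CG sketch, exercised on a small stand-in system rather than an actual mass-spring matrix.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=200):
    """Matrix-free CG for an SPD system A x = b, of the kind (A = M + h^2*L)
    that a fast mass-spring solver would otherwise pre-factor with Cholesky."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system standing in for M + h^2*L
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(lambda v: A @ v, b))   # ~ [0.0909, 0.6364]
```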

  16. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system, including tens of CPUs and network attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 Processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focusing on the High Performance Interconnect technology, details will be provided about the HIPPI-based 'Backplane' configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces

  17. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellous bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency and formation balance. However, the formation balance was responsible for the greater part of the total mass loss.

  18. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    International Nuclear Information System (INIS)

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository and its implementation through the development of a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first-order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)
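
    A minimal sketch of the first-order leaching kinetics that feed such a transport solution might look as follows; the inventory, leach constant, and half-life are assumed values, not those of the paper.

```python
import numpy as np

# First-order leaching combined with radioactive decay:
#   A(t) = A0 * exp(-(lam_leach + lam_decay) * t)
#   release rate from the waste form R(t) = lam_leach * A(t)
A0 = 1.0e12                     # initial inventory (Bq), assumed
lam_leach = 1.0e-3              # leach-rate constant (1/y), assumed
lam_decay = np.log(2) / 30.0    # decay constant for an assumed 30-y half-life

t = np.linspace(0.0, 300.0, 301)
A = A0 * np.exp(-(lam_leach + lam_decay) * t)
R = lam_leach * A               # source term fed to the transport solution
print(f"release rate at t=0: {R[0]:.2e} Bq/y; at t=300 y: {R[-1]:.2e} Bq/y")
```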

  19. Mass prophylactic screening of the organized female population using the Thermograph-Computer System

    International Nuclear Information System (INIS)

    Vepkhvadze, R.Ya.; Khvedelidze, E.Sh.

    1984-01-01

    Organizational aspects of the Thermograph Computer System usage have been analyzed. It has been shown that the results of thermodiagnosis completely coincide with the clinical conclusions, whereas the roentgenological method revealed disease in only 19 of 36 patients. Using the Thermograph Computer System, 120 women can be examined during the day operating hours for the early diagnosis of mammary gland diseases. A mobile thermodiagnostic room simultaneously served as an inspection room to detect visual forms of tumor diseases, including diseases of the cervix uteri, and may be used for mass preventive examination of the organized female population

  20. The Gogny-Hartree-Fock-Bogoliubov nuclear-mass model

    Energy Technology Data Exchange (ETDEWEB)

    Goriely, S. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium); Hilaire, S.; Girod, M.; Peru, S. [CEA, DAM, DIF, Arpajon (France)

    2016-07-15

    We present the Gogny-Hartree-Fock-Bogoliubov model which reproduces nuclear masses with an accuracy comparable with the best mass formulas. In contrast to the Skyrme-HFB nuclear-mass models, an explicit and self-consistent account of all the quadrupole correlation energies is included within the 5D collective Hamiltonian approach. The final rms deviation with respect to the 2353 measured masses is 789 keV in the 2012 atomic mass evaluation. In addition, the D1M Gogny force is shown to predict nuclear and neutron matter properties in agreement with microscopic calculations based on realistic two- and three-body forces. The D1M properties and its predictions of various observables are compared with those of D1S and D1N. (orig.)

  1. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the additional advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  2. Computational modeling of intraocular gas dynamics

    International Nuclear Information System (INIS)

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-01-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on the rabbit and human eye dimensions. Pure SF₆ and SF₆ diluted with air were considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF₆, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF₆ was used, no significant expansion was observed. Also, head positioning for the treatment of a retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size. More bubble expansion and smaller retinal tears caused a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle using pure SF₆ is 1.4 times greater than that using SF₆ diluted with 80% air. Composition of the injected gas and conditions of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. (paper)
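
    The expansion of a pure SF₆ bubble can be reproduced qualitatively with a toy two-gas exchange model, sketched below; the permeation constants are made-up illustrative values, and the actual study solves full computational fluid dynamics rather than this lumped model.

```python
import numpy as np

# Toy two-gas bubble model: SF6 permeates out slowly (k_sf6) while nitrogen is
# drawn in from surrounding tissue toward its ambient fraction (~0.78), so a
# pure-SF6 bubble first expands and only later shrinks.
k_sf6, k_n2 = 0.006, 0.05        # 1/h, illustrative permeation constants
n_sf6, n_n2 = 1.0, 0.0           # moles; pure SF6 injection
dt, hours = 0.1, 72.0
volumes = []
for _ in range(int(hours / dt)):
    total = n_sf6 + n_n2
    n_n2 += dt * k_n2 * (0.78 * total - n_n2)   # N2 influx toward ambient level
    n_sf6 += dt * (-k_sf6 * n_sf6)              # slow SF6 efflux
    volumes.append(n_sf6 + n_n2)                # bubble volume ~ total moles
# Peak within the simulated window (these toy constants expand more slowly
# than the CFD result quoted above)
peak = int(np.argmax(volumes))
print(f"max expansion {max(volumes):.2f}x at t = {peak * dt:.1f} h")
```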

  3. Modeling rapidly disseminating infectious disease during mass gatherings

    Directory of Open Access Journals (Sweden)

    Chowell Gerardo

    2012-12-01

    Full Text Available We discuss models for rapidly disseminating infectious diseases during mass gatherings (MGs), using influenza as a case study. Recent innovations in modeling and forecasting influenza transmission dynamics at local, regional, and global scales have made influenza a particularly attractive model scenario for MGs. We discuss the behavioral, medical, and population factors for modeling MG disease transmission, review existing model formulations, and highlight key data and modeling gaps related to modeling MG disease transmission. We argue that the proposed improvements will help integrate infectious-disease models in MG health contingency plans in the near future, echoing modeling efforts that have helped shape influenza pandemic preparedness plans in recent years.

  4. Computational analyses of spectral trees from electrospray multi-stage mass spectrometry to aid metabolite identification.

    Science.gov (United States)

    Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne

    2013-10-31

    Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigations of fragmentation spectra have increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package "iontree" that can capture, store and analyze MS2 and MS3 mass spectral data from high throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the utilization of the package for the systematic analysis and annotation of fragmentation spectra collected in various metabolomics platforms, including direct infusion mass spectrometry, and liquid chromatography coupled with either low resolution or high resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence complementary to retention time and accurate mass to aid with annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.
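
    The details of the distMS2 algorithm are not given here, but MS2 spectral comparison of this kind can be approximated generically by binned cosine similarity between fragment spectra, as sketched below with made-up peak lists.

```python
import numpy as np

def spectral_similarity(spec_a, spec_b, bin_width=0.5, max_mz=2000.0):
    """Cosine similarity of two fragment spectra given as (mz, intensity)
    pairs; a generic stand-in for a spectral distance such as distMS2."""
    edges = np.arange(0.0, max_mz + bin_width, bin_width)
    def binned(spec):
        mz, inten = np.asarray(spec[0]), np.asarray(spec[1])
        vec, _ = np.histogram(mz, bins=edges, weights=inten)
        return vec
    a, b = binned(spec_a), binned(spec_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Hypothetical fragment spectra (m/z lists, intensity lists)
ms2_a = ([85.03, 127.04, 145.05], [30.0, 100.0, 55.0])
ms2_b = ([85.04, 127.05, 163.06], [25.0, 90.0, 40.0])
print(f"similarity: {spectral_similarity(ms2_a, ms2_b):.2f}")
```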

  5. An algebraic model for quark mass matrices with heavy top

    International Nuclear Information System (INIS)

    Krolikowski, W.; Warsaw Univ.

    1991-01-01

    In terms of an intergeneration U(3) algebra, a numerical model is constructed for quark mass matrices, predicting the top-quark mass around 170 GeV and the CP-violating phase around 75 deg. The CKM matrix is nonsymmetric in moduli, with |V_ub| being very small. All moduli are consistent with their experimental limits. The model is motivated by the author's previous work on three replicas of the Dirac particle, presumably resulting in three generations of leptons and quarks. The paper may also be viewed as an introduction to a new method of intrinsic dynamical description of lepton and quark mass matrices. (author)

  6. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

    Mass and density are two of the fundamental properties of any object. In the case of near earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate the structure of the asteroid. A low density can be indicative of a rubble pile structure whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass / density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass - radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the mass and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probablistically forecasting masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.
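
    A minimal version of such a hierarchical model, with intrinsic population scatter and per-object measurement error marginalized analytically and the hyper-parameters sampled by random-walk Metropolis, might look like the sketch below; the data and proposal scales are synthetic stand-ins, not the asteroid catalog.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "measurements": log densities (g/cm^3) with per-object errors
true_mu, true_sd = np.log(2.0), 0.3
n = 40
obs_err = rng.uniform(0.05, 0.2, n)
obs = rng.normal(true_mu, true_sd, n) + rng.normal(0.0, obs_err)

def log_post(mu, sd):
    """Log-posterior (flat priors) with the latent true densities integrated
    out: each observation is Normal(mu, sd^2 + obs_err^2)."""
    if sd <= 0:
        return -np.inf
    var = sd**2 + obs_err**2           # intrinsic scatter + measurement error
    return -0.5 * np.sum((obs - mu)**2 / var + np.log(var))

# Random-walk Metropolis over the hyper-parameters (mu, sd)
mu, sd = 0.0, 1.0
lp = log_post(mu, sd)
chain = []
for _ in range(20000):
    mu_p, sd_p = mu + rng.normal(0, 0.05), sd + rng.normal(0, 0.05)
    lp_p = log_post(mu_p, sd_p)
    if np.log(rng.random()) < lp_p - lp:
        mu, sd, lp = mu_p, sd_p, lp_p
    chain.append((mu, sd))

mu_hat, sd_hat = np.mean(chain[5000:], axis=0)   # discard burn-in
print(f"population median density ~ {np.exp(mu_hat):.2f} g/cm^3, "
      f"intrinsic scatter {sd_hat:.2f} dex")
```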

  7. Evolution models of helium white dwarf-main-sequence star merger remnants: the mass distribution of single low-mass white dwarfs

    Science.gov (United States)

    Zhang, Xianfei; Hall, Philip D.; Jeffery, C. Simon; Bi, Shaolan

    2018-02-01

    It is not known how single white dwarfs with masses less than 0.5 M⊙ (low-mass white dwarfs) are formed. One way in which such a white dwarf might be formed is after the merger of a helium-core white dwarf with a main-sequence star that produces a red giant branch star and fails to ignite helium. We use a stellar-evolution code to compute models of the remnants of these mergers and find a relation between the pre-merger masses and the final white dwarf mass. Combining our results with a model population, we predict that the mass distribution of single low-mass white dwarfs formed through this channel spans the range 0.37 to 0.5 M⊙ and peaks between 0.45 and 0.46 M⊙. Helium white dwarf–main-sequence star mergers can also lead to the formation of single helium white dwarfs with masses up to 0.51 M⊙. In our model the Galactic formation rate of single low-mass white dwarfs through this channel is about 8.7 × 10⁻³ yr⁻¹. Comparing our models with observations, we find that the majority of single low-mass white dwarfs (<0.5 M⊙) are formed from helium white dwarf–main-sequence star mergers, at a rate which is about 2 per cent of the total white dwarf formation rate.

  8. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in

  9. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind their counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO₂. The influence of terrestrial ecosystems on atmospheric CO₂ can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO₂ concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adapted simple box-flow exchange models, in which photosynthetic CO₂ uptake and respiratory CO₂ release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change

  10. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    Smith, D.H.

    1982-08-01

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline file structure to help a programmer unfamiliar with the programs to alter them with a minimum of lost time

  11. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and finally errors of commission in nuclear power plants. Quantitative or prescriptive models to predict operators' situation assessment, i.e., the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, only a few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we propose a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior, and this paper shows the details.

  12. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling follows from current trends toward strengthening the general-education and worldview functions of computer science, which define the need for additional research of the…

  13. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  14. Computer-aided classification of breast masses using contrast-enhanced digital mammograms

    Science.gov (United States)

    Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin

    2018-02-01

    By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a promising new imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme for CEDM images to classify between malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentations of DES and LE images, the areas under the ROC curves were 0.7585+/-0.0526 and 0.7534+/-0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477+/-0.0376. Because DES imaging removes overlapping dense breast tissue from lesions, segmentation accuracy was significantly improved compared to regular mammograms, and the study demonstrated that computer-aided classification of breast masses using CEDM images yielded higher performance.
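
    A minimal sketch (not the authors' code) of the leave-one-case-out evaluation the abstract describes, using a multilayer perceptron on mass features. The feature matrix here is synthetic; in the study it held 109 shape and heterogeneity features per segmented mass.

```python
# Leave-one-case-out cross-validation of an MLP classifier; data are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(111, 109))                 # 111 cases x 109 features (synthetic)
y = np.array([0] * 33 + [1] * 78)               # 33 benign, 78 malignant

scores = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

print("AUC:", roc_auc_score(y, scores))         # ~0.5 on random features, by construction
```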

  15. Computer modelling of eddy current probes

    International Nuclear Information System (INIS)

    Sullivan, S.P.

    1992-01-01

    Computer programs have been developed for modelling impedance and transmit-receive eddy current probes in two-dimensional axis-symmetric configurations. These programs, which are based on analytic equations, simulate bobbin probes in infinitely long tubes and surface probes on plates. They calculate probe signal due to uniform variations in conductor thickness, resistivity and permeability. These signals depend on probe design and frequency. A finite element numerical program has been procured to calculate magnetic permeability in non-linear ferromagnetic materials. Permeability values from these calculations can be incorporated into the above analytic programs to predict signals from eddy current probes with permanent magnets in ferromagnetic tubes. These programs were used to test various probe designs for new testing applications. Measurements of magnetic permeability in magnetically biased ferromagnetic materials have been performed by superimposing experimental signals, from special laboratory ET probes, on impedance plane diagrams calculated using these programs. (author). 3 refs., 2 figs

  16. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  17. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  18. Portable, remotely operated, computer-controlled, quadrupole mass spectrometer for field use

    International Nuclear Information System (INIS)

    Friesen, R.D.; Newton, J.C.; Smith, C.F.

    1982-04-01

    A portable, remote-controlled mass spectrometer was required at the Nevada Test Site to analyze prompt post-event gas from the nuclear cavity in support of the underground testing program. A Balzers QMG-511 quadrupole was chosen for its ability to be interfaced to a DEC LSI-11 computer and to withstand the ground movement caused by this field environment. The inlet system valves, the pumps, the pressure and temperature transducers, and the quadrupole mass spectrometer are controlled by a read-only-memory-based DEC LSI-11/2 with a high-speed microwave link to the control point which is typically 30 miles away. The computer at the control point is a DEC LSI-11/23 running the RSX-11 operating system. The instrument was automated as much as possible because the system is run by inexperienced operators at times. The mass spectrometer has been used on an initial field event with excellent performance. The gas analysis system is described, including automation by a novel computer control method which reduces operator errors and allows dynamic access to the system parameters

  19. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  20. Morphodynamic Modeling Using The SToRM Computational System

    Science.gov (United States)

    Simoes, F.

    2016-12-01

    The framework of the work presented here is the open source SToRM (System for Transport and River Modeling) eco-hydraulics modeling system, which is one of the models released with the iRIC hydraulic modeling graphical software package (http://i-ric.org/). SToRM has been applied to the simulation of various complex environmental problems, including natural waterways, steep channels with regime transition, and rapidly varying flood flows with wetting and drying fronts. In its previous version, however, the channel bed was treated as static, and the ability to simulate sediment transport rates or bed deformation was not included. The work presented here reports SToRM's newly developed extensions, which expand the system's capability to calculate morphological changes in alluvial river systems. The sediment transport module of SToRM has been developed based on the general recognition that meaningful advances depend on physically solid formulations and robust and accurate numerical solution methods. The basic concepts of mass and momentum conservation are used, where the feedback mechanisms between the flow of water, the sediment in transport, and the bed changes are directly incorporated in the governing equations used in the mathematical model. This is accomplished via a non-capacity transport formulation based on the work of Cao et al. [Z. Cao et al., "Non-capacity or capacity model for fluvial sediment transport," Water Management, 165(WM4):193-211, 2012], where the governing equations are augmented with source/sink terms due to water-sediment interaction. The same unsteady, shock-capturing numerical schemes originally used in SToRM were adapted to the new physics, using a control volume formulation over unstructured computational grids. The presentation will include a brief overview of these methodologies, and the result of applications of the model to a number of relevant physical test cases with movable bed, where computational results are compared to experimental data.
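
    A minimal illustrative sketch (not SToRM itself) of a 1D non-capacity sediment transport update in the spirit of Cao et al. (2012): the flow is held fixed, and suspended sediment and bed elevation are coupled through an exchange (source/sink) term. All parameters are assumptions.

```python
# 1D non-capacity sediment transport with an Exner-type bed update; illustrative only.
import numpy as np

nx, dx, dt = 200, 1.0, 0.05
h, u = 1.0, 1.0                       # fixed depth (m) and velocity (m/s)
porosity, alpha, c_eq = 0.4, 0.1, 0.02
c = np.zeros(nx)                      # volumetric sediment concentration
z = np.zeros(nx)                      # bed elevation change (m)
c[:20] = 0.05                         # sediment-laden inflow region

for _ in range(1000):
    # first-order upwind advection of suspended sediment
    adv = -u * (c - np.roll(c, 1)) / dx
    adv[0] = 0.0
    exch = alpha * (c_eq - c)         # entrainment minus deposition, lumped
    c += dt * (adv + exch / h)
    z -= dt * exch / (1.0 - porosity) # bed erodes where entrainment dominates
print("max scour/deposition (m):", z.min(), z.max())
```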

  1. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a two-stream turbofan engine. Special attention has been paid to the development of transient capabilities throughout the model, increasing the fidelity of the physics models, eliminating algebraic constraints, and reducing simulation time through enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  2. RSMASS-D nuclear thermal propulsion and bimodal system mass models

    Science.gov (United States)

    King, Donald B.; Marshall, Albert C.

    1997-01-01

    Two relatively simple models have been developed to estimate reactor, radiation shield, and balance of system masses for a particle bed reactor (PBR) nuclear thermal propulsion concept and a cermet-core power and propulsion (bimodal) concept. The approach was based on the methodology developed for the RSMASS-D models. The RSMASS-D approach for the reactor and shield sub-systems uses a combination of simple equations derived from reactor physics and other fundamental considerations, along with tabulations of data from more detailed neutron and gamma transport theory computations. Relatively simple models are used to estimate the masses of other subsystem components of the nuclear propulsion and bimodal systems. Other subsystem components include instrumentation and control (I&C), boom, safety systems, radiator, thermoelectrics, heat pipes, and nozzle. The user of these models can vary basic design parameters within an allowed range to achieve a parameter choice which yields a minimum mass for the operational conditions of interest. Estimated system masses are presented for a range of reactor power levels for the PBR propulsion concept and for both electrical power and propulsion for the cermet-core bimodal concept. The estimated reactor system masses agree with mass predictions from detailed calculations to within xx percent for both models.

  3. SEMIC: an efficient surface energy and mass balance model applied to the Greenland ice sheet

    Directory of Open Access Journals (Sweden)

    M. Krapp

    2017-07-01

    Full Text Available We present SEMIC, a Surface Energy and Mass balance model of Intermediate Complexity for snow- and ice-covered surfaces such as the Greenland ice sheet. SEMIC is fast enough for glacial cycle applications, making it a suitable replacement for simpler methods such as the positive degree day (PDD) method often used in ice sheet modelling. Our model explicitly calculates the main processes involved in the surface energy and mass balance, while maintaining a simple interface and requiring minimal data input to drive it. In this novel approach, we parameterise diurnal temperature variations in order to more realistically capture the daily thaw–freeze cycles that characterise the ice sheet mass balance. We show how to derive optimal model parameters for SEMIC specifically to reproduce surface characteristics and day-to-day variations similar to the regional climate model MAR (Modèle Atmosphérique Régional, version 2) and its incorporated multilayer snowpack model SISVAT (Soil Ice Snow Vegetation Atmosphere Transfer). A validation test shows that SEMIC simulates future changes in surface temperature and surface mass balance in good agreement with the more sophisticated multilayer snowpack model SISVAT included in MAR. With this paper, we present a physically based surface model to the ice sheet modelling community that is general enough to be used with in situ observations, climate model output, or reanalysis data, and that is at the same time computationally fast enough for long-term integrations, such as glacial cycles or future climate change scenarios.
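
    A minimal sketch (assumptions throughout, not SEMIC) of why parameterising the diurnal temperature cycle matters compared with a plain positive-degree-day scheme: with a diurnal cycle, melt can occur on days whose mean temperature is below freezing.

```python
# PDD melt vs. melt with an assumed sinusoidal diurnal cycle; parameters illustrative.
import numpy as np

ddf = 8.0          # degree-day factor, mm w.e. per (degC * day) -- assumed
amp = 5.0          # assumed diurnal temperature amplitude (degC)

def melt_pdd(t_mean):
    """Melt from daily-mean temperature only."""
    return ddf * max(t_mean, 0.0)

def melt_diurnal(t_mean, nsteps=24):
    """Melt with a sinusoidal diurnal cycle superimposed on the daily mean."""
    hours = np.arange(nsteps)
    t = t_mean + amp * np.sin(2 * np.pi * hours / nsteps)
    return ddf * np.clip(t, 0.0, None).mean()

for tm in (-3.0, 0.0, 3.0):
    print(f"Tmean={tm:+.0f}C  PDD={melt_pdd(tm):5.1f}  diurnal={melt_diurnal(tm):5.1f} mm/day")
```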

  4. Computed tomography-guided percutaneous biopsy of pancreatic masses using pneumodissection

    Directory of Open Access Journals (Sweden)

    Chiang Jeng Tyng

    2013-06-01

    Full Text Available Objective To describe the technique of computed tomography-guided percutaneous biopsy of pancreatic tumors with pneumodissection. Materials and Methods In the period from June 2011 to May 2012, seven computed tomography-guided percutaneous biopsies of pancreatic tumors utilizing pneumodissection were performed in the authors' institution. All the procedures were performed with an automatic biopsy gun and coaxial system with Tru-core needles. The biopsy specimens were histologically assessed. Results In all the cases the pancreatic mass could not be directly approached by computed tomography without passing through major organs and structures. The injection of air allowed the displacement of adjacent structures and creation of a safe coaxial needle pathway toward the lesion. Biopsy was successfully performed in all the cases, yielding appropriate specimens for pathological analysis. Conclusion Pneumodissection is a safe, inexpensive and technically easy approach to perform percutaneous biopsy in selected cases where direct access to the pancreatic tumor is not feasible.

  5. Computed tomography-guided percutaneous biopsy of pancreatic masses using pneumodissection

    International Nuclear Information System (INIS)

    Tyng, Chiang Jeng; Bitencourt, Almir Galvao Vieira; Almeida, Maria Fernanda Arruda; Barbosa, Paula Nicole Vieira; Martins, Eduardo Bruno Lobato; Junior, Joao Paulo Kawaoka Matushita; Chojniak, Rubens; Coimbra, Felipe Jose Fernandez

    2013-01-01

    Objective: to describe the technique of computed tomography-guided percutaneous biopsy of pancreatic tumors with pneumodissection. Materials and methods: in the period from June 2011 to May 2012, seven computed tomography guided percutaneous biopsies of pancreatic tumors utilizing pneumodissection were performed in the authors' institution. All the procedures were performed with an automatic biopsy gun and coaxial system with Tru-core needles. The biopsy specimens were histologically assessed. Results: in all the cases the pancreatic mass could not be directly approached by computed tomography without passing through major organs and structures. The injection of air allowed the displacement of adjacent structures and creation of a safe coaxial needle pathway toward the lesion. Biopsy was successfully performed in all the cases, yielding appropriate specimens for pathological analysis. Conclusion: Pneumodissection is a safe, inexpensive and technically easy approach to perform percutaneous biopsy in selected cases where direct access to the pancreatic tumor is not feasible. (author)

  6. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a blackboard architecture and a qualitative model, an expert system was developed to assist the user in defining computer configurations for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with at least 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  7. Dynamical mass generation in the continuum Thirring model

    International Nuclear Information System (INIS)

    Girardello, L.; Immirzi, G.; Rossi, P.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge

    1982-01-01

    We study the renormalization of the Thirring model in the neighbourhood of μ = 0,g = -π/2, and find that on the trajectory which tends to this point when the scale goes to infinity the behaviour of the model reproduces what one obtains decomposing the N = 2 Gross-Neveu model. The existence of this trajectory is consistent with the dynamical mass generation found by McCoy and Wu in the discrete version of the massless model. (orig.)

  8. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, accumulation of scientific knowledge and performing analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query-invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections among knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize it. Each document can be represented both as a whole and by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query-invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.

  9. The use of conduction model in laser weld profile computation

    Science.gov (United States)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep penetration laser beam welding of a flat workpiece of carbon steel were computed. A semi-analytical conduction model solved with the Green's function method was used in the computations. In the model, the moving heat source was attenuated exponentially in accordance with the Beer-Lambert law. Computational results were compared with those obtained in the experiment.
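
    A minimal sketch of the Beer-Lambert source term the abstract mentions, here coupled to a simple explicit 1D conduction solve standing in for the paper's semi-analytical Green's-function solution. All material and laser parameters are assumed values for illustration.

```python
# 1D conduction with a Beer-Lambert (exponentially attenuated) heat source.
import numpy as np

nz, dz, dt = 100, 2e-5, 1e-5          # grid: 2 mm depth, 20 um cells
alpha = 1.2e-5                        # thermal diffusivity (m^2/s), assumed
rho_c = 3.6e6                         # volumetric heat capacity (J/m^3/K), assumed
beta = 5e3                            # absorption coefficient (1/m), assumed
I0 = 1e8                              # absorbed surface intensity (W/m^2), assumed

z = (np.arange(nz) + 0.5) * dz
source = I0 * beta * np.exp(-beta * z)         # Beer-Lambert volumetric heating (W/m^3)
T = np.zeros(nz)                               # temperature rise above ambient

for _ in range(2000):                          # 20 ms of heating
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    lap[0] = (T[1] - T[0]) / dz**2             # insulated surface
    lap[-1] = (T[-2] - T[-1]) / dz**2
    T += dt * (alpha * lap + source / rho_c)   # explicit step (alpha*dt/dz^2 = 0.3, stable)

print(f"surface temperature rise: {T[0]:.0f} K")
```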

  10. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  11. Computational and Organotypic Modeling of Microcephaly ...

    Science.gov (United States)

    Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computational and experimental models to probe the underlying molecular targets, cellular consequences, and biological processes. We describe an Adverse Outcome Pathway (AOP) framework for microcephaly derived from literature on all gene, chemical, or viral effects and brain development. Overlap with NTDs is likely, although the AOP connections identified here focused on microcephaly as the adverse outcome. A query of the Mammalian Phenotype Browser database for 'microcephaly' (MP:0000433) returned 85 gene associations; several function in microtubule assembly and the centrosome cycle regulated by microcephalin (MCPH1), a gene for primary microcephaly in humans. The developing ventricular zone is the likely target. In this zone, neuroprogenitor cells (NPCs) self-replicate during the 1st trimester, setting brain size, followed by neural differentiation of the neocortex. Recent studies with human NPCs confirmed infectivity with Zika virions invoking critical cell loss (apoptosis) of precursor NPCs; similar findings have been shown with fetal alcohol or methylmercury exposure in rodent studies, leading to mathematical models of NPC dynamics in size determination of the ventricular zone. A key event

  12. Computer modeling of the Cabriolet Event

    International Nuclear Information System (INIS)

    Kamegai, M.

    1979-01-01

    Computer modeling techniques are described for calculating the results of underground nuclear explosions at depths shallow enough to produce cratering. The techniques are applied to the Cabriolet Event, a well-documented nuclear excavation experiment, and the calculations give good agreement with the experimental results. It is concluded that, given data obtainable by outside observers, these modeling techniques are capable of verifying the yield and depth of underground nuclear cratering explosions, and that they could thus be useful in monitoring another country's compliance with treaty agreements on nuclear testing limitations. Several important facts emerge from the study: (1) seismic energy is produced by only a fraction of the nuclear yield, a fraction depending strongly on the depth of shot and the mechanical properties of the surrounding rock; (2) temperature of the vented gas can be predicted accurately only if good equations of state are available for the rock in the detonation zone; and (3) temperature of the vented gas is strongly dependent on the cooling effect, before venting, of mixing with melted rock in the expanding cavity and, to a lesser extent, on the cooling effect of water in the rock

  13. Random matrix model of adiabatic quantum computing

    International Nuclear Information System (INIS)

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-01-01

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
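
    A minimal sketch (standard RMT bookkeeping, not the paper's code) of the spacing analysis the abstract describes: compute normalized nearest-neighbor level spacings for a random Hamiltonian and fit the Brody parameter that interpolates between Poissonian (b = 0) and Wigner-type (b = 1) statistics.

```python
# Nearest-neighbor spacing statistics and a Brody-parameter fit for a GOE matrix.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

def brody_pdf(s, b):
    a = gamma((b + 2) / (b + 1)) ** (b + 1)
    return (b + 1) * a * s**b * np.exp(-a * s ** (b + 1))

rng = np.random.default_rng(1)
m = rng.normal(size=(500, 500))
levels = np.sort(np.linalg.eigvalsh((m + m.T) / 2))   # GOE-like spectrum
bulk = levels[100:400]                                # avoid spectrum edges
s = np.diff(bulk)
s /= s.mean()                                         # crude unfolding to unit mean spacing

neg_loglik = lambda b: -np.sum(np.log(brody_pdf(s, b) + 1e-300))
fit = minimize_scalar(neg_loglik, bounds=(0.0, 1.0), method="bounded")
print("Brody parameter:", round(fit.x, 2))            # near 1 for a GOE matrix
```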

  14. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  15. Statistical Texture Model for mass Detection in Mammography

    Directory of Open Access Journals (Sweden)

    Nicolás Gallego-Ortiz

    2013-12-01

    Full Text Available In the context of image processing algorithms for mass detection in mammography, texture is a key feature for distinguishing abnormal tissue from normal tissue. Recently, a texture model based on a multivariate Gaussian mixture was proposed, whose parameters are learned in an unsupervised way from the pixel intensities of images. The model produces images that are probabilistic maps of texture normality, and it was proposed as a visualization aid for diagnosis by clinical experts. In this paper, the usability of the model for automatic mass detection is studied. A segmentation strategy is proposed and evaluated using 79 mammography cases.
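
    A minimal sketch (assumed setup, not the paper's model): fit a Gaussian mixture to pixel intensities from normal tissue, then score a new image as a per-pixel "texture normality" map; low-likelihood pixels flag candidate masses. The data here are synthetic stand-ins.

```python
# Gaussian-mixture normality map for mass detection; training data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_pixels = rng.normal(loc=100, scale=15, size=(5000, 1))   # stand-in training data

gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_pixels)

image = rng.normal(loc=100, scale=15, size=(64, 64))
image[20:30, 20:30] += 60                                       # synthetic bright "mass"
logp = gmm.score_samples(image.reshape(-1, 1)).reshape(64, 64)  # per-pixel log-likelihood

candidate = logp < np.percentile(logp, 2)                       # least-normal 2% of pixels
print("flagged pixels:", candidate.sum())
```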

  16. Study on the constitutive model for jointed rock mass.

    Directory of Open Access Journals (Sweden)

    Qiang Xu

    Full Text Available A new elasto-plastic constitutive model for jointed rock masses is proposed that can account for the persistence ratio at different viewing angles and for the anisotropic growth of plastic strain. The proposed yield strength criterion, which is anisotropic, is related not only to the friction angle and cohesion of the jointed rock mass at a given viewing angle but also to the intersection angle between that viewing angle and the directions of the principal stresses. Some numerical examples are given to analyze and verify the proposed constitutive model. The results show that the proposed model computes displacement, stress, and plastic strain with high precision and can be applied in engineering analysis.

  17. Neutrino mass in flavor dependent gauged lepton model

    Science.gov (United States)

    Nomura, Takaaki; Okada, Hiroshi

    2018-03-01

    We study a neutrino model introducing an additional nontrivial gauged lepton symmetry where the neutrino masses are induced at two-loop level, while the first and second charged-leptons of the standard model are done at one-loop level. As a result of the model structure, we can predict one massless active neutrino, and there is a dark matter candidate. Then we discuss the neutrino mass matrix, muon anomalous magnetic moment, lepton flavor violations, oblique parameters, and relic density of dark matter, taking into account the experimental constraints.

  18. Towards dynamic reference information models: Readiness for ICT mass customisation

    NARCIS (Netherlands)

    Verdouw, C.N.; Beulens, A.J.M.; Trienekens, J.H.; Verwaart, D.

    2010-01-01

    Current dynamic demand-driven networks make great demands on, in particular, the interoperability and agility of information systems. This paper investigates how reference information models can be used to meet these demands by enhancing ICT mass customisation. It was found that reference models for

  19. Neutrino Mass Models: impact of non-zero reactor angle

    International Nuclear Information System (INIS)

    King, Stephen F.

    2011-01-01

    In this talk neutrino mass models are reviewed and the impact of a non-zero reactor angle and other deviations from tri-bi maximal mixing are discussed. We propose some benchmark models, where the only way to discriminate between them is by high precision neutrino oscillation experiments.

  20. Constraints on constituent quark masses from potential models

    International Nuclear Information System (INIS)

    Silvestre-Brac, B.

    1998-01-01

    Starting from reasonable hypotheses, the magnetic moments of the baryons are revisited in the light of general spatial wave functions. These allow very severe bounds to be put on the quark masses derived from usual potential models. The experimental situation cannot be explained in the framework of such models. (author)

  1. Test of a chromomagnetic model for hadron mass differences

    Science.gov (United States)

    Lichtenberg, D. B.; Roncaglia, R.

    1993-05-01

    An oversimplified model consisting of the QCD color-magnetic interaction has been used previously by Silvestre-Brac and others to compare the masses of exotic and normal hadrons. We show that the model can give qualitatively wrong answers when applied to systems of normal hadrons.

  2. Test of a chromomagnetic model for hadron mass differences

    International Nuclear Information System (INIS)

    Lichtenberg, D.B.; Roncaglia, R.

    1993-01-01

    An oversimplified model consisting of the QCD color-magnetic interaction has been used previously by Silvestre-Brac and others to compare the masses of exotic and normal hadrons. We show that the model can give qualitatively wrong answers when applied to systems of normal hadrons

  3. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlining the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r -process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
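
    A minimal sketch of the refinement idea (not the authors' Bayesian network): train a small neural network on the residuals of a "bare" mass model, then add the learned correction back. Data and the bare model are synthetic stand-ins; the paper trains on measured masses (AME compilations) and uses Bayesian inference to provide uncertainties, which this plain regressor does not.

```python
# Neural-network refinement of a crude mass model via residual learning; all synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
Z = rng.integers(20, 90, size=400)
N = rng.integers(20, 140, size=400)

def bare_model(Z, N):                 # stand-in for a model like Duflo-Zuker
    A = Z + N
    return 8.0 * A - 0.7 * A ** (2 / 3) - 0.15 * Z**2 / A ** (1 / 3)

true_mass = bare_model(Z, N) + 2.0 * np.sin(Z / 7.0)   # synthetic physics the bare model misses
residual = true_mass - bare_model(Z, N)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
net.fit(np.column_stack([Z, N]), residual)

refined = bare_model(Z, N) + net.predict(np.column_stack([Z, N]))
print(f"rms bare:    {np.sqrt(np.mean(residual**2)):.3f}")
print(f"rms refined: {np.sqrt(np.mean((true_mass - refined) ** 2)):.3f}")
```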

  4. A scan for models with realistic fermion mass patterns

    International Nuclear Information System (INIS)

    Bijnens, J.; Wetterich, C.

    1986-03-01

    We consider models which have no small Yukawa couplings unrelated to symmetry. This situation is generic in higher dimensional unification where Yukawa couplings are predicted to have strength similar to the gauge couplings. Generations have then to be differentiated by symmetry properties and the structure of fermion mass matrices is given in terms of quantum numbers alone. We scan possible symmetries leading to realistic mass matrices. (orig.)

  5. Multicomponent mass transport model: a model for simulating migration of radionuclides in ground water

    International Nuclear Information System (INIS)

    Washburn, J.F.; Kaszeta, F.E.; Simmons, C.S.; Cole, C.R.

    1980-07-01

    This report presents the results of the development of a one-dimensional radionuclide transport code, MMT1D (Multicomponent Mass Transport), for the AEGIS Program. Multicomponent Mass Transport is a numerical solution technique that uses the discrete-parcel-random-walk (DPRW) method to directly simulate the migration of radionuclides. MMT1D accounts for: convection; dispersion; sorption-desorption; first-order radioactive decay; and n-membered radioactive decay chains. Comparisons between MMT1D and an analytical solution for a similar problem show that: MMT1D agrees very closely with the analytical solution; MMT1D has no cumulative numerical dispersion like that associated with solution techniques such as finite differences and finite elements; for current AEGIS applications, relatively few parcels are required to produce adequate results; and the power of MMT1D lies in the flexibility of the code in being able to handle complex problems for which analytical solutions cannot be obtained. Multicomponent Mass Transport (MMT) codes were developed at Pacific Northwest Laboratory to predict the movement of radiocontaminants in the saturated and unsaturated sediments of the Hanford Site. All MMT models require ground-water flow patterns that have been previously generated by a hydrologic model. This report documents the computer code and operating procedures of a third generation of the MMT series: this version differs from previous versions by simulating the mass transport processes in systems with radionuclide decay chains. Although MMT1D is a one-dimensional code, the user is referred to the documentation of the theoretical and numerical procedures of the three-dimensional MMT-DPRW code for discussion of expediency, verification, and error-sensitivity analysis
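
    A minimal sketch of the generic discrete-parcel random-walk method (not the MMT code): each parcel is advected, takes a Gaussian dispersive step, is retarded by linear sorption, and carries a mass that decays. All parameters are illustrative.

```python
# 1D discrete-parcel random walk with convection, dispersion, retardation, and decay.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 20000, 10.0, 500            # parcels, time step (days), number of steps
v, D = 0.05, 0.01                          # velocity (m/d), dispersion (m^2/d), assumed
R, lam = 2.0, 1e-4                         # retardation factor, decay constant (1/d)

x = np.zeros(n)                            # all parcels start at the source
mass = np.ones(n)
for _ in range(steps):
    x += (v / R) * dt + rng.normal(0.0, np.sqrt(2 * (D / R) * dt), size=n)
    mass *= np.exp(-lam * dt)              # first-order radioactive decay

print(f"plume center {x.mean():.2f} m, spread {x.std():.2f} m, "
      f"mass remaining {mass.sum() / n:.3f}")
```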

  6. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute

  7. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air-ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics model developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe is compared with experimental results.

  8. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  9. A Mass Loss Penetration Model to Investigate the Dynamic Response of a Projectile Penetrating Concrete considering Mass Abrasion

    Directory of Open Access Journals (Sweden)

    NianSong Zhang

    2015-01-01

    Full Text Available A study on the dynamic response of a projectile penetrating concrete is conducted. The evolutional process of projectile mass loss and the effect of mass loss on penetration resistance are investigated using theoretical methods. A projectile penetration model considering projectile mass loss is established in three stages, namely, cratering phase, mass loss penetration phase, and remainder rigid projectile penetration phase.

  10. Editorial: Modelling and computational challenges in granular materials

    OpenAIRE

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss the current progress and latest advancements in the field of advanced numerical methods and modelling of granular materials. The focus will be on computational methods, improved algorithms and the m...

  11. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.

  12. Development of a totally computer-controlled triple quadrupole mass spectrometer system

    International Nuclear Information System (INIS)

    Wong, C.M.; Crawford, R.W.; Barton, V.C.; Brand, H.R.; Neufeld, K.W.; Bowman, J.E.

    1983-01-01

    A totally computer-controlled triple quadrupole mass spectrometer (TQMS) is described. It has a number of unique features not available on current commercial instruments, including: complete computer control of source and all ion axial potentials; use of dual computers for data acquisition and data processing; and capability for self-adaptive control of experiments. Furthermore, it has been possible to produce this instrument at a cost significantly below that of commercial instruments. This triple quadrupole mass spectrometer has been constructed using components commercially available from several different manufacturers. The source is a standard Hewlett-Packard 5985B GC/MS source. The two quadrupole analyzers and the quadrupole CAD region contain Balzers QMA 150 rods with Balzers QMG 511 rf controllers for the analyzers and a Balzers QHS-511 controller for the CAD region. The pulsed-positive-ion-negative-ion-chemical ionization (PPINICI) detector is made by Finnigan Corporation. The mechanical and electronics design were developed at LLNL for linking these diverse elements into a functional TQMS as described. The computer design for total control of the system is unique in that two separate LSI-11/23 minicomputers and assorted I/O peripherals and interfaces from several manufacturers are used. The evolution of this design concept from totally computer-controlled instrumentation into future self-adaptive or ''expert'' systems for instrumental analysis is described. Operational characteristics of the instrument and initial results from experiments involving the analysis of the high explosive HMX (1,3,5,7-Tetranitro-1,3,5,7-Tetrazacyclooctane) are presented

  13. Dependence of X-Ray Burst Models on Nuclear Masses

    Energy Technology Data Exchange (ETDEWEB)

    Schatz, H.; Ong, W.-J. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States)

    2017-08-01

    X-ray burst model predictions of light curves and the final composition of the nuclear ashes are affected by uncertain nuclear masses. However, not all of these masses are determined experimentally with sufficient accuracy. Here we identify the remaining nuclear mass uncertainties in X-ray burst models using a one-zone model that takes into account the changes in temperature and density evolution caused by changes in the nuclear physics. Two types of bursts are investigated—a typical mixed H/He burst with a limited rapid proton capture process (rp-process) and an extreme mixed H/He burst with an extended rp-process. When allowing for a 3σ variation, only three remaining nuclear mass uncertainties affect the light-curve predictions of a typical H/He burst (²⁷P, ⁶¹Ga, and ⁶⁵As), and only three additional masses affect the composition strongly (⁸⁰Zr, ⁸¹Zr, and ⁸²Nb). A larger number of mass uncertainties remain to be addressed for the extreme H/He burst, with the most important being ⁵⁸Zn, ⁶¹Ga, ⁶²Ge, ⁶⁵As, ⁶⁶Se, ⁷⁸Y, ⁷⁹Y, ⁷⁹Zr, ⁸⁰Zr, ⁸¹Zr, ⁸²Zr, ⁸²Nb, ⁸³Nb, ⁸⁶Tc, ⁹¹Rh, ⁹⁵Ag, ⁹⁸Cd, ⁹⁹In, ¹⁰⁰In, and ¹⁰¹In. The smallest mass uncertainty that still impacts composition significantly when varied by 3σ is ⁸⁵Mo with 16 keV uncertainty. For one of the identified masses, ²⁷P, we use the isobaric mass multiplet equation to improve the mass uncertainty, obtaining an atomic mass excess of −716(7) keV. The results provide a roadmap for future experiments at advanced rare isotope beam facilities, where all the identified nuclides are expected to be within reach for precision mass measurements.

  14. Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material

    Science.gov (United States)

    Upadhyay, Ashwani; Chandramohan, V. P.

    2018-04-01

    A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. Convective boundary conditions are used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the temperature of the product. Sets of algebraic equations are generated through space and time discretization. The discretized algebraic equations are solved iteratively by the Gauss-Seidel method. Grid- and time-independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
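
    A minimal sketch (generic numerics, not the paper's MATLAB code) of one implicit time step of a 1D transient diffusion equation solved by Gauss-Seidel sweeps, with a convective (Robin) condition at the exposed surface; a plain backward-Euler step stands in for the paper's semi-implicit scheme. All physical parameters are illustrative.

```python
# One backward-Euler diffusion step solved by Gauss-Seidel iteration.
import numpy as np

n, dx, dt = 21, 0.005, 60.0           # nodes, spacing (m), time step (s)
alpha = 1.5e-7                        # diffusivity (m^2/s), assumed
h, k = 25.0, 0.6                      # convection coeff (W/m^2/K), conductivity (W/m/K)
T_air = 80.0
T = np.full(n, 25.0)                  # initial product temperature (degC)

r = alpha * dt / dx**2
T_old = T.copy()
for sweep in range(200):              # Gauss-Seidel sweeps for one implicit step
    T[0] = T[1]                                        # insulated back face
    for i in range(1, n - 1):
        T[i] = (T_old[i] + r * (T[i - 1] + T[i + 1])) / (1 + 2 * r)
    Bi = h * dx / k                                    # convective (Robin) surface node
    T[-1] = (T[-2] + Bi * T_air) / (1 + Bi)

print(f"surface {T[-1]:.2f} C, centre {T[n // 2]:.2f} C after one step")
```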

  15. Mass

    International Nuclear Information System (INIS)

    Quigg, Chris

    2007-01-01

    In the classical physics we inherited from Isaac Newton, mass does not arise, it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.

  16. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  17. Impact of mass generation for spin-1 mediator simplified models

    International Nuclear Information System (INIS)

    Bell, Nicole F.; Cai, Yi; Leane, Rebecca K.

    2017-01-01

    In the simplified dark matter models commonly studied, the mass generation mechanism for the dark fields is not typically specified. We demonstrate that the dark matter interaction types, and hence the annihilation processes relevant for relic density and indirect detection, are strongly dictated by the mass generation mechanism chosen for the dark sector particles, and the requirement of gauge invariance. We focus on the class of models in which fermionic dark matter couples to a spin-1 vector or axial-vector mediator. However, in order to generate dark sector mass terms, it is necessary in most cases to introduce a dark Higgs field and thus a spin-0 scalar mediator will also be present. In the case that all the dark sector fields gain masses via coupling to a single dark sector Higgs field, it is mandatory that the axial-vector coupling of the spin-1 mediator to the dark matter is non-zero; the vector coupling may also be present depending on the charge assignments. For all other mass generation options, only pure vector couplings between the spin-1 mediator and the dark matter are allowed. If these coupling restrictions are not obeyed, unphysical results may be obtained such as a violation of unitarity at high energies. These two-mediator scenarios lead to important phenomenology that does not arise in single mediator models. We survey two-mediator dark matter models which contain both vector and scalar mediators, and explore their relic density and indirect detection phenomenology.

  18. The analogic model ''RIC'' of thermal behaviour of mass concrete

    International Nuclear Information System (INIS)

    Gonzalez Redondo, M.; Gonzalez de Posada, F.; Plana Claver, J.

    1997-01-01

    In order to study the thermal field and heat flows in heat-generating media (e.g., mass concrete during setting), we have conceived, built, and experimented with an analog electrical model. This model, named RIC, consists of resistors (R) and capacitors (C) into whose nodes an electric current (I) is injected. Several analog constants were used for the mathematical approximation. Thus, this paper describes the analog RIC model, which simulates heat generation, boundary and initial conditions, and the concreting process. (Author) 4 refs
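
    A minimal sketch of the electrical analogy in code (not the RIC hardware): node voltage plays temperature, injected current plays the heat of hydration, R plays thermal resistance, and C plays heat capacity. All values are illustrative.

```python
# Lumped RC-network analog of 1D heat flow in setting concrete; values illustrative.
import numpy as np

n, R, C, dt = 10, 2.0, 5.0, 0.1       # nodes, node-to-node resistance, capacitance, step
V = np.zeros(n)                       # node "temperatures"
I_gen = np.full(n, 0.5)               # hydration heat as injected current per node

for _ in range(20000):                # march toward steady state
    flow = np.zeros(n)
    flow[1:] += (V[:-1] - V[1:]) / R      # current from left neighbor
    flow[:-1] += (V[1:] - V[:-1]) / R     # current from right neighbor
    flow[-1] += (0.0 - V[-1]) / R         # last node tied to ambient (0)
    V += dt * (flow + I_gen) / C

print("node 'temperatures':", np.round(V, 2))   # hottest away from the ambient boundary
```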

  19. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  20. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  1. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  2. PETRI NET MODELING OF COMPUTER VIRUS LIFE CYCLE

    African Journals Online (AJOL)

    Dr Obe

    Dynamic system analysis is applied to model the virus life cycle. Simulation of the derived model ... Keywords: Virus lifecycle, Petri nets, modeling, simulation. ... complex process (Figure 2) ... by creating Matlab files for five different computer ...

  3. Bayesian model comparison using Gauss approximation on multicomponent mass spectra from CH4 plasma

    International Nuclear Information System (INIS)

    Kang, H.D.; Dose, V.

    2004-01-01

    We performed Bayesian model comparison on mass spectra from CH4 rf process plasmas to detect radicals produced in the plasma. The key ingredient for its implementation is the high-dimensional evidence integral. We apply the Gauss approximation to evaluate the evidence. The results were compared with those calculated by the thermodynamic integration method using the Markov chain Monte Carlo technique. In spite of the very large difference in computation time between the two methods, very good agreement was obtained. Alternatively, a Monte Carlo integration method based on the approximated Gaussian posterior density is presented. Its applicability to the problem of mass spectrometry is discussed.
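
    The Gauss (Laplace) approximation referred to above replaces the posterior with a Gaussian centred at its mode, so the evidence integral acquires a closed form: log Z ≈ log p(θ*) + (d/2) log 2π − (1/2) log |H|, where H is the Hessian of the negative log-posterior at the mode θ*. A minimal Python sketch on an invented one-dimensional toy problem (not the authors' plasma analysis):

        import numpy as np
        from scipy.optimize import minimize

        # Toy problem: Gaussian likelihood for a location parameter, broad prior.
        data = np.array([1.2, 0.9, 1.1, 1.4, 1.0])

        def neg_log_post(theta, sigma=0.2, prior_sigma=10.0):
            log_like = (-0.5 * np.sum((data - theta[0]) ** 2) / sigma**2
                        - data.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
            log_prior = (-0.5 * theta[0] ** 2 / prior_sigma**2
                         - np.log(prior_sigma * np.sqrt(2.0 * np.pi)))
            return -(log_like + log_prior)

        opt = minimize(neg_log_post, x0=[0.0])  # BFGS locates the posterior mode
        d = 1                                   # number of model parameters
        # Laplace formula: log Z ~ log p(mode) + (d/2) log 2pi - (1/2) log |H|,
        # using the BFGS inverse-Hessian estimate, log |H^-1| = -log |H|.
        log_evidence = (-opt.fun + 0.5 * d * np.log(2.0 * np.pi)
                        + 0.5 * np.log(np.linalg.det(opt.hess_inv)))
        print(f"approximate log-evidence: {log_evidence:.3f}")

    Thermodynamic integration would instead estimate log Z from a whole chain of MCMC runs, which is why the computation-time gap reported above is so large.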

  4. The mass spectrum of the Schwinger model with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Cyprus Univ., Nicosia (Cyprus). Dept. of Physics

    2013-07-15

    We show the feasibility of tensor network solutions for lattice gauge theories in Hamiltonian formulation by applying matrix product states algorithms to the Schwinger model with zero and non-vanishing fermion mass. We introduce new techniques to compute excitations in a system with open boundary conditions, and to identify the states corresponding to low momentum and different quantum numbers in the continuum. For the ground state and both the vector and scalar mass gaps in the massive case, the MPS technique attains precisions comparable to the best results available from other techniques.

  5. New limits on the mass of neutral Higgses in general models

    International Nuclear Information System (INIS)

    Comelli, D.

    1996-07-01

    In general electroweak models with weakly coupled (and otherwise arbitrary) Higgs sector there always exists in the spectrum a scalar state with mass controlled by the electroweak scale. A new and simple recipe to compute an analytical tree-level upper bound on the mass of this light scalar is given. We compare this new bound with similar ones existing in the literature and show how to extract extra information on heavier neutral scalars in the spectrum from the interplay of independent bounds. Production of these states at future colliders is addressed and the implications for the decoupling limit in which only one Higgs is expected to remain light are discussed. (orig.)

  6. Regenerating computer model of the thymus

    International Nuclear Information System (INIS)

    Lumb, J.R.

    1975-01-01

    This computer model simulates the cell population kinetics of the development and later degeneration of the thymus. Nutritional factors are taken into account by the growth of blood vessels in the simulated thymus. The stem cell population is kept at its maximum by allowing some stem cells to divide into two stem cells until the population reaches its maximum, thus regenerating the thymus after an insult such as irradiation. After a given number of population doublings the maximum allowed stem cell population is gradually decreased in order to simulate the degeneration of the thymus. Results show that the simulated thymus develops and degenerates in a pattern similar to that of the natural thymus. This simulation is used to evaluate cellular kinetic data for the thymus. The results from testing the internal consistency of available data are reported. The number of generations which most represents the natural thymus includes seven dividing generations of lymphocytes and one mature, nondividing generation of small lymphocytes. The size of the resulting developed thymus can be controlled without affecting other variables by changing the maximum stem cell population allowed. In addition, recovery from irradiation is simulated
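
    The population-kinetic scheme described above (stem cells doubling up to a cap, an insult that depletes them, and a cap that later shrinks to mimic involution) is straightforward to prototype. A minimal sketch with invented population sizes and rates, not the original simulation:

        # Regenerating thymus toy model: stem cells double toward a cap;
        # after enough doublings the cap decays (involution); an "insult"
        # (e.g. irradiation) depletes the population, which then regrows.
        max_stem = 1000             # assumed maximum stem cell population
        doublings_before_decay = 7  # assumed onset of involution
        decay = 0.005               # assumed fractional cap shrinkage per step

        stem, cap, doublings = 10, float(max_stem), 0
        history = []
        for step in range(400):
            if stem < cap:                     # regeneration toward the cap
                stem = min(int(cap), stem * 2)
                doublings += 1
            if doublings >= doublings_before_decay:
                cap *= 1.0 - decay             # gradual involution
            stem = min(stem, int(cap))
            if step == 200:                    # simulated insult
                stem = max(1, stem // 20)
            history.append(stem)

        print(history[198:210])  # depletion at step 200, then rapid recovery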

  7. Computational modeling of epidural cortical stimulation

    Science.gov (United States)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2008-12-01

    Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.

  8. Geometric modeling for computer aided design

    Science.gov (United States)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. This results in streamlined data exchange between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  9. Review of computational thermal-hydraulic modeling

    International Nuclear Information System (INIS)

    Keefer, R.H.; Keeton, L.W.

    1995-01-01

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  10. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  11. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling

    Science.gov (United States)

    Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.

    1999-01-01

    Experiments were performed under terrestrial gravity (1g) and during parabolic flights (10^-2 g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and electron microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transport of mass, energy, and momentum associated with the solidification phenomena for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first one, the liquid containing freely moving equiaxed grains was described through the relative viscosity concept. In the second stage, when a fixed dendritic network was formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.

  13. Computational model for transient studies of IRIS pressurizer behavior

    International Nuclear Information System (INIS)

    Rives Sanz, R.; Montesino Otero, M.E.; Gonzalez Mantecon, J.; Rojas Mazaira, L.

    2014-01-01

    The International Reactor Innovative and Secure (IRIS) surpasses other Small Modular Reactor (SMR) designs thanks to its innovative safety characteristics. The integral layout of IRIS allows a larger pressurizer system than in a conventional PWR without any additional cost, and the steam volume of the IRIS pressurizer provides enough margin to avoid the spray requirement for mitigating in-surge transients. The aim of the present research is to model the dynamics of the IRIS pressurizer using the commercial finite volume Computational Fluid Dynamics code CFX 14. A symmetric three-dimensional model equivalent to 1/8 of the total geometry was adopted to reduce mesh size and minimize processing time. The model considers the coexistence of three phases: liquid, steam, and vapor bubbles in the liquid volume. Additionally, it takes into account the heat losses between the pressurizer and the primary circuit. The relationships for interfacial mass, energy, and momentum transport are programmed and incorporated into CFX by using expressions in CFX Command Language (CCL) format. Moreover, several additional variables are defined to improve convergence and to allow monitoring of boron dilution sequences and the condensation-evaporation rate in different control volumes. For transient states, non-equilibrium stratification in the pressurizer is considered. This paper discusses the model developed and the behavior of the system for representative transient sequences such as in/out-surge transients and boron dilution sequences. The results of the analyzed IRIS transients can be applied to the design of pressurizer internal structures and components. (author)

  14. Turbulence modeling for mass transfer enhancement by separation and reattachment with two-equation eddy-viscosity models

    International Nuclear Information System (INIS)

    Xiong Jinbiao; Koshizuka, Seiichi; Sakai, Mikio

    2011-01-01

    Highlights:
    - We selected and evaluated five two-equation eddy-viscosity turbulence models for modeling separated and reattaching flow.
    - The behavior of the models in a simple flow is not consistent with that in the separated and reattaching flow.
    - The Abe-Kondoh-Nagano model is the best of the selected models.
    - Applying the stress limiter and the Kato-Launder modification in the Abe-Kondoh-Nagano model helps to improve the prediction of the peak mass transfer coefficient in the orifice flow.
    - The value of the turbulent Schmidt number is investigated.
    Abstract: The prediction of the mass transfer rate is one of the key elements in estimating the flow accelerated corrosion (FAC) rate. Three low Reynolds number (LRN) k-ε models (Lam-Bremhorst (LB), Abe-Kondoh-Nagano (AKN) and Hwang-Lin (HL)), one LRN k-ω model (Wilcox, WX) and the k-ω SST model are tested for the computation of high Schmidt number mass transfer, especially in the flow through an orifice. The models are tested in the computation of three types of flow: (1) fully developed pipe flow, (2) flow over a backward facing step, and (3) flow through an orifice. The HL model performs well in predicting mass transfer in fully developed pipe flow but fails to give reliable predictions in the flow through an orifice. The WX model and the k-ω SST model underpredict the mass transfer rate in flow types 1 and 3. The LB model underestimates the mass transfer in flow type 1 and shows abnormal behavior at the reattachment point in type 3. Evaluating all the models across all the computed cases, the AKN model is the best one; however, its predictions are still not satisfactory. In the evaluation of the flow over a backward facing step, the k-ω SST model shows superior performance. This is interpreted as an indication that the combination of the k-ε model and the stress limiter can improve the model behavior in the recirculation bubble. Both the

  15. Bone mass determination from microradiographs by computer-assisted videodensitometry. Pt. 2

    International Nuclear Information System (INIS)

    Kaelebo, P.; Strid, K.G.

    1988-01-01

    Aluminium was evaluated as a reference substance in the assessment of rabbit cortical bone by microradiography followed by videodensitometry. Ten dense, cortical-bone specimens from the same tibia diaphysis were microradiographed using prefiltered 27 kV roentgen radiation together with aluminium step wedges and bone simulating phantoms for calibration. Optimally exposed and processed plates were analysed by previously described computer-assisted videodensitometry. For comparison, the specimens were analysed by physico-chemical methods. A strict proportionality was found between the 'aluminium equivalent mass' and the ash weight of the specimens. The total random error was low with a coefficient of variation within 1.5 per cent. It was concluded that aluminium is an appropriate reference material in the determination of cortical bone, which it resembles in effective atomic number and thus X-ray attenuation characteristics. The 'aluminium equivalent mass' is suitably established as the standard of expressing the results of bone assessment by microradiography. (orig.)

  16. Computational and experimental study of the effect of mass transfer on liquid jet break-up

    Science.gov (United States)

    Schetz, J. A.; Situ, M.

    1983-06-01

    A computational method has been developed to predict the effect of mass transfer on liquid jet break-up in coaxial, low velocity gas streams. Two conditions, with and without the effect of mass transfer on the jet break-up, are calculated and compared with experimental results and the classical linear theory. Methanol and water were used as the injectants. The numerical solution can predict the instantaneous shape of the jet surface and the break-up time, and it is very close to the experimental results. The numerical solutions and the experimental results both indicate that the wave number of the maximum instability is about 6.9, higher than the 4.51 predicted by Rayleigh's linear theory. The experimental results and numerical solution show that the amplitude of the trough grows faster than the amplitude of the crest, especially for a rapidly vaporizing jet. The numerical solutions show that for small rates of evaporation, mass transfer at the interface has a stabilizing effect near the wave number of maximum instability and, conversely, a destabilizing effect far from it. For rapid evaporation, mass transfer always has a destabilizing effect and decreases the break-up time of the jet.
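
    The 4.51 figure quoted above (most unstable wavelength over jet diameter) follows from Rayleigh's inviscid dispersion relation for a liquid column and is easy to reproduce numerically; a short sketch in dimensionless form:

        import numpy as np
        from scipy.special import iv  # modified Bessel functions I_n

        # Rayleigh's dispersion relation for a liquid column of radius a:
        #   omega^2 = sigma/(rho a^3) * x * I1(x)/I0(x) * (1 - x^2),  x = k*a.
        x = np.linspace(1e-4, 0.999, 2000)
        growth2 = x * iv(1, x) / iv(0, x) * (1.0 - x**2)  # units of sigma/(rho a^3)

        x_max = x[np.argmax(growth2)]
        print(f"most unstable ka  = {x_max:.3f}")          # ~0.697
        print(f"lambda / diameter = {np.pi / x_max:.2f}")  # ~4.51 (Rayleigh)

    The measured and computed value of about 6.9 for the vaporizing jet thus sits well above the classical inviscid prediction.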

  17. Progresses on the computation of added masses for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1985-01-01

    The problem of coupled vibrations of fluids and structures is analyzed; in the case of irrotational incompressible fluid fields, the effect is modelled as an added mass matrix. The Modified Boundary Elements technique is used; a particular case (cylindrical reservoirs with sloshing) and the general case are examined. (orig.)

  18. Computational Intelligence Agent-Oriented Modelling

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2006-01-01

    Roč. 5, č. 2 (2006), s. 430-433 ISSN 1109-2777 R&D Projects: GA MŠk 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : multi-agent systems * adaptive agents * computational intelligence Subject RIV: IN - Informatics, Computer Science

  19. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on that model for bit-level operations, are useful for compositionally developing asynchronous and concurrent programs that make frequent use of bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized for and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can concisely express some kinds of computation, such as synchronization and bidirectional communication. The model's properties make it well suited to compositional bit-level computation, since the uniform computation elements are sufficient to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  20. Deployment Models: Towards Eliminating Security Concerns From Cloud Computing

    OpenAIRE

    Zhao, Gansen; Chunming, Rong; Jaatun, Martin Gilje; Sandnes, Frode Eika

    2010-01-01

    Cloud computing has become a popular choice as an alternative to investing in new IT systems. When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security related features to address different requirements and scenarios and can serve as reference models for deployment. D...

  1. Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?

    OpenAIRE

    Bogataj Habjan, Kristina; Pucihar, Andreja

    2017-01-01

    This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact and importance for cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in...

  2. Computer codes for three dimensional mass transport with non-linear sorption

    International Nuclear Information System (INIS)

    Noy, D.J.

    1985-03-01

    The report describes the mathematical background and data input to finite element programs for three dimensional mass transport in a porous medium. The transport equations are developed and sorption processes are included in a general way so that non-linear equilibrium relations can be introduced. The programs are described and a guide given to the construction of the required input data sets. Concluding remarks indicate that the calculations require substantial computer resources and suggest that comprehensive preliminary analysis with lower dimensional codes would be important in the assessment of field data. (author)

  3. Research on heat and mass transfer model for passive containment cooling system

    International Nuclear Information System (INIS)

    Jiang Xiaowei; Yu Hongxing; Sun Yufa; Huang Daishun

    2013-01-01

    Unlike traditional dry containment designs without external cooling, the PCCS design significantly increases the temperature difference between the wall and the containment atmosphere, and the absolute temperature of the containment surfaces is lower, affecting properties relevant to the condensation process. This paper presents research on the heat and mass transfer model, especially improvements to the condensation and evaporation model in the presence of noncondensable gases. Firstly, Peterson's diffusion layer model was proved equivalent to the stagnant film model adopted by the CONTAIN code, using the Clausius-Clapeyron equation; then a factor applicable to the stagnant film model was derived from a comparison between Y. Liao's generalized diffusion layer model and Peterson's diffusion layer model. Finally, the model in the CONTAIN code used to compute the condensation and evaporation mass flux was modified using this factor, and the Wisconsin condensation tests and the Westinghouse film evaporation on heated plate tests were simulated; the results proved that the improved model predicts heat and mass transfer coefficients closer to the experimental values than the original model does. (authors)
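
    The stagnant film model mentioned above gives the condensation mass flux through a layer of noncondensable gas in closed form, which makes the role of the correction factor easy to see. A minimal sketch with assumed property values (illustration only, not the CONTAIN implementation):

        import numpy as np

        def condensation_flux(rho, D, delta, x_v_bulk, x_v_interface, factor=1.0):
            """Stagnant-film model for vapor flux across a noncondensable layer.

            m'' = factor * (rho * D / delta) * ln((1 - x_v,i) / (1 - x_v,b)),
            where x_v are vapor mole fractions in the bulk and at the interface.
            """
            return factor * rho * D / delta * np.log(
                (1.0 - x_v_interface) / (1.0 - x_v_bulk))

        # Assumed illustrative values (not from the paper):
        flux = condensation_flux(rho=1.0,      # mixture density, kg/m^3
                                 D=2.5e-5,     # vapor diffusivity, m^2/s
                                 delta=5e-3,   # diffusion layer thickness, m
                                 x_v_bulk=0.6, x_v_interface=0.3)
        print(f"condensation mass flux: {flux:.3e} kg/(m^2 s)")

    The improvement described above amounts to multiplying such a flux by a factor derived from comparing the two diffusion layer models.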

  4. The complete guide to blender graphics computer modeling and animation

    CERN Document Server

    Blain, John M

    2014-01-01

    Smoothly leads users into the subject of computer graphics through the Blender GUI. Blender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments

  5. Computational Models for Nonlinear Aeroelastic Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  6. The B - L scotogenic models for Dirac neutrino masses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Weijian [North China Electric Power University, Department of Physics, Baoding (China); Wang, Ruihong [Hebei Agricultural University, College of Information Science and Technology, Baoding (China); Han, Zhi-Long [University of Jinan, School of Physics and Technology, Jinan, Shandong (China); Han, Jin-Zhong [Zhoukou Normal University, School of Physics and Telecommunications Engineering, Zhoukou, Henan (China)

    2017-12-15

    We construct the one-loop and two-loop scotogenic models for Dirac neutrino mass generation in the context of U(1)_{B-L} extensions of the standard model. It is indicated that the total number of intermediate fermion singlets is uniquely fixed by the anomaly free condition and the new particles may have exotic B - L charges so that the direct SM Yukawa mass term \bar{ν}_L ν_R \overline{φ^0} and the Majorana mass term (m_N/2) \overline{ν_R^C} ν_R are naturally forbidden. After the spontaneous breaking of the U(1)_{B-L} symmetry, the discrete Z_2 or Z_3 symmetry appears as the residual symmetry and gives rise to the stability of intermediate fields as DM candidates. Phenomenological aspects of lepton flavor violation, DM, leptogenesis and LHC signatures are discussed. (orig.)

  7. The B-L scotogenic models for Dirac neutrino masses

    Science.gov (United States)

    Wang, Weijian; Wang, Ruihong; Han, Zhi-Long; Han, Jin-Zhong

    2017-12-01

    We construct the one-loop and two-loop scotogenic models for Dirac neutrino mass generation in the context of U(1)_{B-L} extensions of the standard model. It is indicated that the total number of intermediate fermion singlets is uniquely fixed by the anomaly free condition and the new particles may have exotic B-L charges so that the direct SM Yukawa mass term \bar{ν}_L ν_R \overline{φ^0} and the Majorana mass term (m_N/2) \overline{ν_R^C} ν_R are naturally forbidden. After the spontaneous breaking of the U(1)_{B-L} symmetry, the discrete Z_2 or Z_3 symmetry appears as the residual symmetry and gives rise to the stability of intermediate fields as DM candidates. Phenomenological aspects of lepton flavor violation, DM, leptogenesis and LHC signatures are discussed.

  8. Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

    Science.gov (United States)

    Ditmar, P.; Hashemi Farahani, H.; Klees, R.

    2011-12-01

    Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites occasionally enter a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate the limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, a routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with the atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. In addition, both signal and noise covariance matrices depend on the type of mass transport processes under
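
    The optimal (Wiener) filtering idea sketched above has a compact matrix form: with signal and noise covariance matrices C_s and C_n, the filtered model is x̂ = C_s (C_s + C_n)^(-1) x. A toy sketch with synthetic covariances (the real matrices would come from level-1B noise propagation and background-model error budgets):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 50

        # Assumed toy covariances: smooth correlated signal, white noise.
        i = np.arange(n)
        C_s = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 5.0) ** 2)
        C_n = 0.5 * np.eye(n)

        # Synthetic "unconstrained model": a signal draw plus a noise draw.
        L = np.linalg.cholesky(C_s + 1e-6 * np.eye(n))
        x_true = L @ rng.standard_normal(n)
        x_obs = x_true + np.sqrt(0.5) * rng.standard_normal(n)

        # Wiener filter: posterior mean under Gaussian signal and noise models.
        x_hat = C_s @ np.linalg.solve(C_s + C_n, x_obs)

        print(f"rms error before filtering: {np.std(x_obs - x_true):.3f}")
        print(f"rms error after filtering:  {np.std(x_hat - x_true):.3f}")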

  9. A CFD model for determining mixing and mass transfer in a high power agitated bioreactor

    DEFF Research Database (Denmark)

    Bach, Christian; Albæk, Mads O.; Stocks, Stuart M.

    The performance of a high power agitated pilot scale bioreactor has been characterized using a novel combination of computational fluid dynamics (CFD) and experimental investigations. The effect of turbulence inside the vessel was found to be most efficiently described by the k-ε model with regard to ... simulations, and the overall mass transfer coefficient was found to be in accordance with experimental data. This work illustrates the possibility of predicting the hydrodynamic performance of an agitated bioreactor using validated CFD models. These models can be applied in the testing of new bioreactor...

  10. Editorial: Modelling and computational challenges in granular materials

    NARCIS (Netherlands)

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss

  11. Security Issues Model on Cloud Computing: A Case of Malaysia

    OpenAIRE

    Komeil Raisian; Jamaiah Yahaya

    2015-01-01

    With the development of cloud computing, many people's views regarding infrastructure architectures, software distribution and improvement models have changed significantly. Cloud computing is associated with a pioneering deployment architecture, which can be realized through grid computing, utility computing and autonomic computing. The rapid transition towards it has increased worries regarding critical issues for the effective adoption of cloud computing. From the security v...

  12. Fermion mass hierarchies in low-energy supergravity and superstring models

    International Nuclear Information System (INIS)

    Binetruy, P.

    1995-01-01

    We investigate the problem of the fermion mass hierarchy in supergravity models with flat directions of the scalar potential associated with some gauge singlet moduli fields. The low-energy Yukawa couplings are non-trivial homogeneous functions of the moduli and a geometric constraint between them plays, in a large class of models, a crucial role in generating hierarchies. Explicit examples are given for no-scale type supergravity models. The Yukawa couplings are dynamical variables at low energy, to be determined by a minimization process which amounts to fixing ratios of the moduli fields. The Minimal Supersymmetric Standard Model is studied and the constraints needed on the parameters in order to have a top quark much heavier than the other fermions are worked out. The bottom mass is explicitly computed and shown to be compatible with the experimental data for a large region of the parameter space. (orig.)

  13. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and for harmonious communication between human beings and computers. A new emotional agent model is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  14. Segmentation and Estimation of the Histological Composition of the Tumor Mass in Computed Tomographic Images of Neuroblastoma

    National Research Council Canada - National Science Library

    Ayres, Fabio

    2001-01-01

    The problem that we investigate in the present paper is the improvement of the analysis of the primary tumor mass, in patients with advanced neuroblastoma, using X-ray computed tomography (CT) exams...

  15. Development of a locally mass flux conservative computer code for calculating 3-D viscous flow in turbomachines

    Science.gov (United States)

    Walitt, L.

    1982-01-01

    The VANS successive approximation numerical method was extended to the computation of three dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.

  16. Lumped Mass Modeling for Local-Mode-Suppressed Element Connectivity

    DEFF Research Database (Denmark)

    Joung, Young Soo; Yoon, Gil Ho; Kim, Yoon Young

    2005-01-01

    connectivity parameterization (ECP) is employed. On the way to the ultimate crashworthy structure optimization, we are now developing a local-mode-free topology optimization formulation that can be implemented in the ECP method. In fact, the local-mode-freeing strategy developed here can also be used directly ... experiencing large structural changes, appears to be still poor. In ECP, the nodes of the domain-discretizing elements are connected by zero-length one-dimensional elastic links having varying stiffness. For computational efficiency, every elastic link is now assumed to have two lumped masses at its ends. ... Choosing appropriate penalization functions for lumped mass and link stiffness is important for local-mode-free results. However, unless the objective and constraint functions are carefully selected, it is difficult to obtain clear black-and-white results. It is shown that the present formulation is also...

  17. Mass Transfer Model for a Breached Waste Package

    International Nuclear Information System (INIS)

    Hsu, C.; McClure, J.

    2004-01-01

    The degradation of waste packages, which are used for the disposal of spent nuclear fuel in the repository, can result in configurations that may increase the probability of criticality. A mass transfer model is developed for a breached waste package to account for the entrainment of insoluble particles. In combination with radionuclide decay, soluble advection, and colloidal transport, a complete mass balance of nuclides in the waste package becomes available. The entrainment equations are derived from dimensionless parameters such as the drag coefficient and Reynolds number, based on the assumption that insoluble particles are subject to buoyant, gravitational, and drag forces only. Particle size distributions are used to calculate the entrainment concentration, along with a geochemistry model abstraction to calculate the soluble concentration and a colloid model abstraction to calculate the colloid concentration and radionuclide sorption. Results are compared with the base case geochemistry model, which considers only soluble advection loss
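
    The force balance described above (buoyancy, gravity, drag) fixes the smallest upward fluid velocity that can entrain a particle of a given size: it equals the particle's terminal settling velocity. A sketch that iterates a standard sphere drag correlation (the Schiller-Naumann form is assumed here, not necessarily the report's):

        import numpy as np

        def settling_velocity(d_p, rho_p, rho_f, mu, g=9.81):
            """Terminal velocity of a sphere from gravity = buoyancy + drag.

            Iterates because the drag coefficient C_d depends on the Reynolds
            number; Schiller-Naumann is valid for Re up to about 1000.
            """
            v = 1e-4  # initial guess, m/s
            for _ in range(100):
                re = max(rho_f * v * d_p / mu, 1e-12)
                cd = 24.0 / re * (1.0 + 0.15 * re**0.687)
                v_new = np.sqrt(4.0 * g * d_p * (rho_p - rho_f) / (3.0 * cd * rho_f))
                if abs(v_new - v) < 1e-12:
                    break
                v = v_new
            return v

        # Assumed illustrative values: a 50-micron heavy particle in water.
        v_t = settling_velocity(d_p=50e-6, rho_p=10500.0, rho_f=1000.0, mu=1e-3)
        print(f"settling velocity: {v_t:.3e} m/s (faster upflow entrains the particle)")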

  18. Modelling toluene oxidation : Incorporation of mass transfer phenomena

    NARCIS (Netherlands)

    Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.

    The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on basis of experimental observations performed under industrial conditions. The conclusions for the

  19. Hadronic mass-relations from topological expansion and string model

    International Nuclear Information System (INIS)

    Kaidalov, A.B.

    1980-01-01

    Hadronic mass-relations from the topological expansion and string model are derived. For this purpose the space-time picture of hadron interactions at high energies corresponding to planar diagrams of the topological expansion is considered. Simple relations between intercepts and slopes of Regge trajectories, based on the topological expansion and the q anti-q string picture of hadrons, are obtained [ru]

  20. Renormalization of seesaw neutrino masses in the standard model ...

    Indian Academy of Sciences (India)

    the neutrino-mass-operator in the standard model with two-Higgs doublets, and also the QCD–QED ... data of atmospheric muon deficits, thereby suggesting a large mixing angle ... One method consists of running the gauge.

  1. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared them with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
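
    The pattern the authors exploit is embarrassingly parallel: each model fit is independent, so work fans out across cloud instances exactly as it does across local processes. A local stand-in sketch using only the Python standard library and NumPy (cloud provisioning omitted; train_model is an invented placeholder for one ligand-based fit):

        from concurrent.futures import ProcessPoolExecutor

        import numpy as np

        def train_model(seed):
            """Placeholder for one model-building job (a toy least-squares fit)."""
            rng = np.random.default_rng(seed)
            X = rng.standard_normal((5000, 20))
            y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(5000)
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return seed, float(np.linalg.norm(coef))

        if __name__ == "__main__":
            # Independent tasks scale out like instances in an elastic cloud.
            with ProcessPoolExecutor(max_workers=4) as pool:
                for seed, norm in pool.map(train_model, range(8)):
                    print(f"model {seed}: |coef| = {norm:.3f}")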

  2. A Computational Drug Metabolite Detection Using the Stable Isotopic Mass-Shift Filtering with High Resolution Mass Spectrometry in Pioglitazone and Flurbiprofen

    Directory of Open Access Journals (Sweden)

    Yohei Miyamoto

    2013-09-01

    Full Text Available The identification of metabolites in drug discovery is important. At present, radioisotopes and mass spectrometry are both widely used. However, rapid and comprehensive identification is still laborious and difficult. In this study, we developed new analytical software and employed a stable isotope as a tool to identify drug metabolites using mass spectrometry. A deuterium-labeled compound and a non-labeled compound were both metabolized in human liver microsomes and analyzed by liquid chromatography/time-of-flight mass spectrometry (LC-TOF-MS). We computationally aligned the two MS data sets and, using our own software, filtered for ions whose mass shift between the data sets equals the mass of the labeled isotopes. For pioglitazone and flurbiprofen, eight and four metabolites, respectively, were identified with calculations of mass and formulas and chemical structural fragmentation analysis. With high resolution MS, the approach became more accurate. The approach detected two unexpected metabolites of pioglitazone, i.e., the hydroxypropanamide form and the aldehyde hydrolysis form, which other approaches such as metabolite-biotransformation list matching and mass defect filtering could not detect. We demonstrated that the approach using computational alignment and stable isotopic mass-shift filtering can identify drug metabolites and is useful in drug discovery.
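
    The core filtering step is easy to state: align the peak lists from the labeled and unlabeled runs and keep only ion pairs whose m/z difference matches the expected number of H-to-D substitutions. A minimal sketch with invented peak lists (the published software adds formula calculation and fragmentation analysis on top of this):

        import numpy as np

        MASS_SHIFT_D = 1.00628  # mass difference per H -> D substitution, Da

        def mass_shift_filter(mz_unlabeled, mz_labeled, n_labels, tol=0.005):
            """Return (unlabeled, labeled) m/z pairs separated by n_labels shifts."""
            pairs = []
            for mz in mz_unlabeled:
                target = mz + n_labels * MASS_SHIFT_D
                for hit in mz_labeled[np.abs(mz_labeled - target) < tol]:
                    pairs.append((mz, hit))
            return pairs

        # Invented peaks: a d3-labeled drug shifts metabolite ions by ~3.019 Da;
        # the 412.20 ion has no shifted partner and is filtered out as background.
        unlabeled = np.array([357.1342, 373.1290, 389.1240, 412.2001])
        labeled = np.array([360.1530, 376.1478, 392.1427, 412.2003])
        for mz0, mz1 in mass_shift_filter(unlabeled, labeled, n_labels=3):
            print(f"candidate metabolite pair: {mz0:.4f} / {mz1:.4f}")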

  3. Quark potential model of baryon spin-orbit mass splittings

    International Nuclear Information System (INIS)

    Wang Fan; Wong Chunwa

    1987-01-01

    We show that it is possible to make the P-wave spin-orbit mass splittings in Λ baryons consistent with those of nonstrange baryons in a naive quark model, but only by introducing additional terms in the quark-quark effective interaction. These terms might be related to contributions due to pomeron exchange and sea excitations. The implications of our model in meson spectroscopy and nuclear forces are discussed. (orig.)

  4. Modeling of nanofabricated paddle bridges for resonant mass sensing

    International Nuclear Information System (INIS)

    Lobontiu, N.; Ilic, B.; Garcia, E.; Reissman, T.; Craighead, H. G.

    2006-01-01

    The modeling of nanopaddle bridges is studied in this article by proposing a lumped-parameter mathematical model which enables structural characterization in the resonant domain. The distributed compliance and inertia of all three segments composing a paddle bridge are taken into consideration in order to determine the equivalent lumped-parameter stiffness and inertia fractions, and from these the bending and torsion resonant frequencies. The approximate model produces results which are confirmed by finite element analysis and experimental measurements. The model is subsequently used to quantify the amount of mass which attaches to the bridge by predicting the modified resonant frequencies in either bending or torsion
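
    In the lumped model, mass detection reduces to a frequency-shift formula: with equivalent stiffness k and effective mass m, f = (1/2π)√(k/m), so the attached mass follows directly from the measured shift. A sketch with assumed numbers (illustrative, not the paper's device values):

        import numpy as np

        def added_mass(k, f0, f1):
            """Attached mass from a resonant frequency shift (lumped model).

            From f = (1/(2*pi)) * sqrt(k/m):
                delta_m = k/(2*pi)^2 * (1/f1^2 - 1/f0^2).
            """
            return k / (2.0 * np.pi) ** 2 * (1.0 / f1**2 - 1.0 / f0**2)

        # Assumed illustrative values for a nanomechanical bridge:
        k = 0.5        # equivalent bending stiffness, N/m
        f0 = 8.000e6   # resonant frequency before attachment, Hz
        f1 = 7.999e6   # resonant frequency after attachment, Hz
        dm = added_mass(k, f0, f1)                   # in kg
        print(f"attached mass: {dm * 1e21:.1f} ag")  # 1 ag = 1e-21 kg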

  5. Computational model for simulation small testing launcher, technical solution

    Energy Technology Data Exchange (ETDEWEB)

    Chelaru, Teodor-Viorel, E-mail: teodor.chelaru@upb.ro [University POLITEHNICA of Bucharest - Research Center for Aeronautics and Space, Str. Ghe Polizu, nr. 1, Bucharest, Sector 1 (Romania); Cristian, Barbu, E-mail: barbucr@mta.ro [Military Technical Academy, Romania, B-dul. George Coşbuc, nr. 81-83, Bucharest, Sector 5 (Romania); Chelaru, Adrian, E-mail: achelaru@incas.ro [INCAS -National Institute for Aerospace Research Elie Carafoli, B-dul Iuliu Maniu 220, 061126, Bucharest, Sector 6 (Romania)

    2014-12-10

    The purpose of this paper is to present some aspects of the computational model and technical solutions for a multistage suborbital launcher for testing (SLT) used to test spatial equipment and scientific measurements. The computational model consists of numerical simulation of the SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performance. The discussion focuses on the technical feasibility of realizing a small multistage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Whereas classical suborbital sounding rockets are unguided, use solid fuel motors and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as indicated by its title, has two major objectives: first, a short term objective, which consists in obtaining a suborbital launching system able to go into service in a predictable period of time, and a long term objective that consists in the development and testing of some unconventional sub-systems which will be integrated later into the satellite launcher as part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital

  6. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  8. Mass and Heat Transfer Analysis of Membrane Humidifier with a Simple Lumped Mass Model

    International Nuclear Information System (INIS)

    Lee, Young Duk; Bae, Ho June; Ahn, Kook Young; Yu, Sang Seok; Hwang, Joon Young

    2009-01-01

    The performance of a proton exchange membrane fuel cell (PEMFC) is strongly affected by the humidification condition, which is an intrinsic characteristic of the PEMFC. Typically, humidification of the fuel cell is carried out with an internal or external humidifier. A membrane humidifier is applied to the external humidification of residential power generation fuel cells due to its convenience and high performance. In this study, a simple static model is constructed to understand the physical phenomena of the membrane humidifier in terms of geometric and operating parameters. The model utilizes the concept of a shell and tube heat exchanger, but it is also able to estimate the mass transport through the membrane. The model is implemented in FORTRAN under the Matlab/Simulink environment to keep consistency with other component models which we have already developed. Results show that the humidity of the wet gas and the membrane thickness are critical parameters for improving the performance of the humidifier
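
    The shell-and-tube analogy above suggests a compact lumped implementation: heat exchange through an effectiveness-NTU relation plus a water flux driven by the vapor-pressure difference across the membrane. A hedged sketch with assumed permeance and geometry (not the authors' FORTRAN model):

        def humidifier_step(m_dot, T_wet, T_dry, p_v_wet, p_v_dry,
                            UA=50.0, perm=1e-8, area=0.5):
            """One lumped balance for a counterflow membrane humidifier.

            Heat: effectiveness-NTU for a balanced counterflow exchanger.
            Mass: flux = permeance * area * vapor-pressure difference
            (assumed linear membrane transport law).
            """
            cp = 1010.0                    # J/(kg K), humid air (approximate)
            ntu = UA / (m_dot * cp)
            eff = ntu / (1.0 + ntu)        # balanced counterflow limit
            q = eff * m_dot * cp * (T_wet - T_dry)   # W, into the dry stream
            w = perm * area * (p_v_wet - p_v_dry)    # kg/s, through the membrane
            return q, w

        q, w = humidifier_step(m_dot=0.01, T_wet=70.0, T_dry=30.0,
                               p_v_wet=28.0e3, p_v_dry=5.0e3)
        print(f"heat duty: {q:.0f} W, water transfer: {w * 1e3:.3f} g/s")

    Sweeping p_v_wet and the membrane permeance reproduces the qualitative sensitivity to wet-gas humidity and membrane thickness reported above.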

  9. Reference absolute and indexed values for left and right ventricular volume, function and mass from cardiac computed tomography

    International Nuclear Information System (INIS)

    Stojanovska, Jadranka; Prasitdumrong, Hutsaya; Patel, Smita; Sundaram, Baskaran; Gross, Barry H.; Yilmaz, Zeynep N.; Kazerooni, Ella A.

    2014-01-01

    Left ventricular (LV) and right ventricular (RV) volumetric and functional parameters are important biomarkers for morbidity and mortality in patients with heart failure. To retrospectively determine reference mean values of LV and RV volume, function and mass normalised by age, gender and body surface area (BSA) from retrospectively electrocardiographically gated 64-slice cardiac computed tomography (CCT) by using automated analysis software in healthy adults. The study was approved by the institutional review board with a waiver of informed consent. Seventy-four healthy subjects (49% female, mean age 49.6±11) free of hypertension and hypercholesterolaemia with a normal CCT formed the study population. Analyses of LV and RV volume (end-diastolic, end-systolic and stroke volumes), function (ejection fraction), LV mass and inter-rater reproducibility were performed with commercially available analysis software capable of automated contour detection. General linear model analysis was performed to assess statistical significance by age group after adjustment for gender and BSA. Bland–Altman analysis assessed the inter-rater agreement. The reference range for LV and RV volume, function, and LV mass was normalised to age, gender and BSA. Statistically significant differences were noted between genders in both LV mass and RV volume (P-value<0.0001). Age, in concert with gender, was associated with significant differences in RV end-diastolic volume and LV ejection fraction (P-values 0.027 and 0.03). Bland–Altman analysis showed acceptable limits of agreement (±1.5% for ejection fraction) without systematic error. LV and RV volume, function and mass normalised to age, gender and BSA can be reported from CCT datasets, providing additional information important for patient management.

  10. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
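
    The Maxwell construction used above reduces to a one-dimensional root find: with both equations of state written as pressure versus baryon chemical potential, the phases coexist where the two curves cross. A toy sketch with invented EoS forms and constants (not the paper's Bonn-potential or bag-model parameterizations):

        import numpy as np
        from scipy.optimize import brentq

        def p_hadron(mu):
            """Toy hadronic EoS, P(mu) in MeV/fm^3, mu in MeV (invented form)."""
            return 1e-9 * (mu - 900.0) ** 3.5 if mu > 900.0 else 0.0

        def p_quark(mu, B=60.0):
            """Toy bag-like quark EoS: stiffer in mu, offset by the bag constant B."""
            return 3e-11 * mu**4 - B

        # Maxwell construction: equal P and mu at coexistence = curve crossing.
        mu_c = brentq(lambda mu: p_hadron(mu) - p_quark(mu), 950.0, 2000.0)
        print(f"transition: mu = {mu_c:.0f} MeV, P = {p_hadron(mu_c):.2f} MeV/fm^3")

    Raising B pushes the crossing to larger mu, mirroring the monotonic growth of the coexisting-phase baryon concentrations with B reported above.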

  11. Improved Nuclear Reactor and Shield Mass Model for Space Applications

    Science.gov (United States)

    Robb, Kevin

    2004-01-01

    New technologies are being developed to explore the distant reaches of the solar system. Beyond Mars, solar energy is inadequate to power advanced scientific instruments. One technology that can meet the energy requirements is the space nuclear reactor. The nuclear reactor is used as a heat source for which a heat-to-electricity conversion system is needed. Examples of such conversion systems are the Brayton, Rankine, and Stirling cycles. Since launch cost is proportional to the amount of mass to lift, mass is always a concern in designing spacecraft. Estimations of system masses are an important part in determining the feasibility of a design. I worked under Michael Barrett in the Thermal Energy Conversion Branch of the Power & Electric Propulsion Division. An in-house Closed Cycle Engine Program (CCEP) is used for the design and performance analysis of closed-Brayton-cycle energy conversion systems for space applications. This program also calculates the system mass, including the heat source. CCEP uses the subroutine RSMASS, which has been updated to RSMASS-D, to estimate the mass of the reactor. RSMASS was developed in 1986 at Sandia National Laboratories to quickly estimate the mass of multi-megawatt nuclear reactors for space applications. In response to an emphasis on lower power reactors, RSMASS-D was developed in 1997 and is based on the SP-100 liquid metal cooled reactor. The subroutine calculates the mass of reactor components such as the safety systems, instrumentation and control, radiation shield, structure, reflector, and core. The major improvements in RSMASS-D are that it uses higher fidelity calculations, is easier to use, and automatically optimizes the system mass. RSMASS-D is accurate to within 15% of actual data, while RSMASS is only accurate to within 50%. My goal this summer was to learn the FORTRAN 77 programming language and update the CCEP program with the RSMASS-D model.

  12. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
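
    A toy version of the emulator idea, assuming a one-dimensional parameter and a stand-in "expensive" model: a Gaussian-process surface is fit to a small ensemble of model runs and then replaces η(θ) inside a random-walk Metropolis sampler, so the physics model is never called during MCMC. The kernel and noise settings are arbitrary illustrations:

        import numpy as np

        rng = np.random.default_rng(0)

        def eta(theta):                     # stand-in for an expensive physics model
            return np.tanh(2.0 * theta) + 0.5 * theta

        # Ensemble of model runs: the only places eta is ever evaluated
        X = np.linspace(-2.0, 2.0, 12)
        Y = eta(X)

        def k(a, b, ell=0.5, s2=1.0):       # squared-exponential kernel
            return s2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

        Kinv = np.linalg.inv(k(X, X) + 1e-8 * np.eye(len(X)))

        def emulator(theta):                # GP posterior mean at theta
            return (k(np.atleast_1d(theta), X) @ Kinv @ Y)[0]

        theta_true, sigma = 0.7, 0.05       # synthetic "measurement" y = eta + noise
        y_obs = eta(theta_true) + rng.normal(0.0, sigma)

        def log_post(theta):                # flat prior on [-2, 2]
            if not -2.0 <= theta <= 2.0:
                return -np.inf
            return -0.5 * ((y_obs - emulator(theta)) / sigma)**2

        chain, th, lp = [], 0.0, log_post(0.0)
        for _ in range(5000):               # random-walk Metropolis on the emulator
            prop = th + rng.normal(0.0, 0.2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                th, lp = prop, lp_prop
            chain.append(th)
        print(f"posterior mean: {np.mean(chain[1000:]):.2f}  (truth: {theta_true})")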

  13. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: on the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning); on the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: first, it specifies and structures the information which biologists have to take into account if they build, use and exchange models; secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  14. High Mass Standard Model Higgs searches at the Tevatron

    Directory of Open Access Journals (Sweden)

    Petridis Konstantinos A.

    2012-06-01

    We present the results of searches for the Standard Model Higgs boson decaying predominantly to W+W− pairs, at a center-of-mass energy of √s = 1.96 TeV, using up to 8.2 fb−1 of data collected with the CDF and D0 detectors at the Fermilab Tevatron collider. The analysis techniques and the various channels considered are discussed. These searches result in exclusions across the Higgs mass range of 156.5 < m_H < 173.7 GeV for CDF and 161 < m_H < 170 GeV for D0.

  15. Computer-aided detection of breast masses: Four-view strategy for screening mammography

    International Nuclear Information System (INIS)

    Wei Jun; Chan Heangping; Zhou Chuan; Wu Yita; Sahiner, Berkman; Hadjiiski, Lubomir M.; Roubidoux, Marilyn A.; Helvie, Mark A.

    2011-01-01

    Purpose: To improve the performance of a computer-aided detection (CAD) system for mass detection by using four-view information in screening mammography. Methods: The authors developed a four-view CAD system that emulates radiologists' reading by using the craniocaudal and mediolateral oblique views of the ipsilateral breast to reduce false positives (FPs) and the corresponding views of the contralateral breast to detect asymmetry. The CAD system consists of four major components: (1) Initial detection of breast masses on individual views, (2) information fusion of the ipsilateral views of the breast (referred to as two-view analysis), (3) information fusion of the corresponding views of the contralateral breast (referred to as bilateral analysis), and (4) fusion of the four-view information with a decision tree. The authors collected two data sets for training and testing of the CAD system: A mass set containing 389 patients with 389 biopsy-proven masses and a normal set containing 200 normal subjects. All cases had four-view mammograms. The true locations of the masses on the mammograms were identified by an experienced MQSA radiologist. The authors randomly divided the mass set into two independent sets for cross validation training and testing. The overall test performance was assessed by averaging the free response receiver operating characteristic (FROC) curves of the two test subsets. The FP rates during the FROC analysis were estimated by using the normal set only. The jackknife free-response ROC (JAFROC) method was used to estimate the statistical significance of the difference between the test FROC curves obtained with the single-view and the four-view CAD systems. Results: Using the single-view CAD system, the breast-based test sensitivities were 58% and 77% at the FP rates of 0.5 and 1.0 per image, respectively. With the four-view CAD system, the breast-based test sensitivities were improved to 76% and 87% at the corresponding FP rates, respectively

  16. The exact mass-gap of the supersymmetric CP$^{N-1}$ sigma model

    CERN Document Server

    Evans, J M; Evans, Jonathan M; Hollowood, Timothy J

    1995-01-01

    A formula for the mass-gap of the supersymmetric \\CP^{n-1} sigma model (n > 1) in two dimensions is derived: m/\\Lambda_{\\overline{\\rm MS}}=\\sin(\\pi\\Delta)/(\\pi\\Delta) where \\Delta=1/n and m is the mass of the fundamental particle multiplet. This result is obtained by comparing two expressions for the free-energy density in the presence of a coupling to a conserved charge; one expression is computed from the exact S-matrix of K\\"oberle and Kurak via the thermodynamic Bethe ansatz and the other is computed using conventional perturbation theory. These calculations provide a stringent test of the S-matrix, showing that it correctly reproduces the universal part of the beta-function and resolving the problem of CDD ambiguities.

  17. The exact mass-gap of the supersymmetric O(N) sigma model

    CERN Document Server

    Evans, J M; Evans, Jonathan M; Hollowood, Timothy J

    1995-01-01

    A formula for the mass-gap of the supersymmetric O(N) sigma model (N>4) in two dimensions is derived: m/\\Lambda_{\\overline{\\rm MS}}=2^{2\\Delta}\\sin(\\pi\\Delta)/(\\pi\\Delta), where \\Delta=1/(N-2) and m is the mass of the fundamental vector particle in the theory. This result is obtained by comparing two expressions for the free-energy density in the presence of a coupling to a conserved charge; one expression is computed from the exact S-matrix of Shankar and Witten via the thermodynamic Bethe ansatz and the other is computed using conventional perturbation theory. These calculations provide a stringent test of the S-matrix, showing that it correctly reproduces the universal part of the beta-function and resolving the problem of CDD ambiguities.
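
    Both exact mass-gap formulas (this record and the CP^{n-1} record above) take one line each to evaluate; a small Python check, showing that both ratios tend to 1 at large N:

        import numpy as np

        def mass_gap_cpn(n):        # supersymmetric CP^{n-1}: Delta = 1/n
            d = 1.0 / n
            return np.sin(np.pi * d) / (np.pi * d)

        def mass_gap_on(N):         # supersymmetric O(N), N > 4: Delta = 1/(N-2)
            d = 1.0 / (N - 2)
            return 2.0**(2 * d) * np.sin(np.pi * d) / (np.pi * d)

        for n in (2, 3, 10, 100):
            print(f"CP^{n-1}: m/Lambda_MSbar = {mass_gap_cpn(n):.4f}")
        for N in (5, 6, 10, 100):
            print(f"O({N}): m/Lambda_MSbar = {mass_gap_on(N):.4f}")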

  18. The quark mass spectrum in the Universal Seesaw model

    International Nuclear Information System (INIS)

    Ranfone, S.

    1993-03-01

    In the context of a Universal Seesaw model implemented in a left-right symmetric theory, we show that, by allowing the two left-handed doublet Higgs fields to develop different vacuum expectation values (VEVs), it is possible to account for the observed structure of the quark mass spectrum without the need of any hierarchy among the Yukawa couplings. In this framework the top-quark mass is expected to be of the order of its present experimental lower bound, m_t ≅ 90 to 100 GeV. Moreover, we find that, while one of the Higgs doublets gets essentially the standard model VEV of approximately 250 GeV, the second doublet is expected to have a much smaller VEV, of order 10 GeV. The identification of the large mass scale of the model with the Peccei-Quinn scale fixes the mass of the right-handed gauge bosons in the range 10^7 to 10^10 GeV, far beyond the reach of present collider experiments. (author)

  19. Computer-aided diagnosis of mammographic masses using geometric verification-based image retrieval

    Science.gov (United States)

    Li, Qingliang; Shi, Weili; Yang, Huamin; Zhang, Huimao; Li, Guoxin; Chen, Tao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Masses in mammograms are an important indicator of breast cancer, and the use of retrieval systems in breast examination is increasing gradually. In this respect, the method of exploiting the vocabulary tree framework and the inverted file for mammographic mass retrieval has been shown to achieve high accuracy and excellent scalability. However, it treats the features in each image only as visual words and ignores the spatial configuration of the features, which greatly affects retrieval performance. To overcome this drawback, we introduce a geometric verification method for the retrieval of mammographic masses. First, we obtain corresponding feature matches based on the vocabulary tree framework and the inverted file. We then capture the local similarity of deformations by constructing circle regions around the corresponding pairs, segment each circle to express the geometric relationship of the local matches in the region, and generate a strict spatial encoding. Finally, we judge whether the matched features are correct by verifying that all spatial encodings satisfy geometric consistency. Experiments show the promising results of our approach.

  20. Ocean Modeling and Visualization on Massively Parallel Computer

    Science.gov (United States)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  1. Theoretical studies on membrane-based gas separation using computational fluid dynamics (CFD) of mass transfer

    International Nuclear Information System (INIS)

    Sohrabi, M.R.; Marjani, A.; Davallo, M.; Moradi, S.; Shirazian, S.

    2011-01-01

    A 2D mass transfer model was developed to study carbon dioxide removal by absorption in membrane contactors. The model predicts the steady-state absorbent and carbon dioxide concentrations in the membrane by solving the conservation equations. The continuity equations for the three subdomains of the membrane contactor (the tube, the membrane and the shell) were obtained and solved by the finite element method (FEM). The model was based on the 'non-wetted mode', in which the gas phase fills the membrane pores. A laminar parabolic velocity profile was used for the liquid flow in the tube side, whereas the gas flow in the shell side was characterized by Happel's free surface model. Axial and radial diffusion transport inside the shell, through the membrane, and within the tube side of the contactor was considered in the mass transfer model. The predictions of percent CO2 removal obtained by modeling were compared with experimental values from the literature, namely results for CO2 removal from a CO2/N2 gas mixture with aqueous amine solutions as the liquid solvent using a polypropylene membrane contactor. The modeling predictions were in good agreement with the experimental values for different gas and liquid flow rates. (author)
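
    The paper's model is a full 2D finite-element treatment of tube, membrane and shell; as a much-reduced illustration of the tube-side physics alone (laminar parabolic flow with radial diffusion of the absorbed gas, marched along the fiber by backward Euler), here is a finite-difference sketch in which every parameter value is illustrative rather than taken from the paper:

        import numpy as np

        R, L = 1.0e-4, 0.2            # fiber inner radius (m), fiber length (m)
        U, D = 0.1, 1.5e-9            # mean liquid velocity (m/s), CO2 diffusivity (m^2/s)
        C_wall = 1.0                  # normalised interface concentration (Henry's law)
        M, NZ = 40, 400               # radial nodes, axial steps
        dr, dz = R / M, L / NZ
        r = np.arange(M + 1) * dr
        u = 2.0 * U * (1.0 - (r / R)**2)       # laminar parabolic velocity profile

        # Backward-Euler marching: (u/dz) C_new - D Lap(C_new) = (u/dz) C_old
        A = np.zeros((M, M))                   # unknowns j = 0..M-1; j = M is the wall
        b_wall = np.zeros(M)
        for j in range(M):
            if j == 0:                         # symmetry axis: Lap ~ 4 (C1 - C0) / dr^2
                A[0, 0] = u[0] / dz + 4.0 * D / dr**2
                A[0, 1] = -4.0 * D / dr**2
            else:
                lo = D * (1.0 / dr**2 - 1.0 / (2.0 * r[j] * dr))
                hi = D * (1.0 / dr**2 + 1.0 / (2.0 * r[j] * dr))
                A[j, j] = u[j] / dz + 2.0 * D / dr**2
                A[j, j - 1] = -lo
                if j + 1 < M:
                    A[j, j + 1] = -hi
                else:
                    b_wall[j] = hi * C_wall    # Dirichlet wall value folded into RHS

        C = np.zeros(M)                        # inlet: fresh absorbent
        for _ in range(NZ):
            C = np.linalg.solve(A, u[:M] / dz * C + b_wall)

        w = u[:M] * r[:M]                      # mixing-cup (flow-averaged) outlet value
        print(f"outlet mixing-cup C/C_wall = {np.sum(w * C) / np.sum(w):.3f}")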

  2. Computer model for economic study of unbleached kraft paperboard production

    Science.gov (United States)

    Peter J. Ince

    1984-01-01

    Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...

  3. Airfoil Computations using the γ - Reθ Model

    DEFF Research Database (Denmark)

    Sørensen, Niels N.

    computations. Based on this, an estimate of the error in the computations is determined to be approximately one percent in the attached region. Following the verification of the implemented model, the model is applied to four airfoils, NACA64- 018, NACA64-218, NACA64-418 and NACA64-618 and the results...

  4. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
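
    In the same spirit as the paper's Python learning objects, a self-contained single-phase example: an event-by-event M/M/1 queue simulation checked against the analytical mean sojourn time W = 1/(mu - lambda). The parameter values are arbitrary:

        import random

        def mm1_mean_sojourn(lam, mu, n_customers=200_000, seed=1):
            """Simulate an M/M/1 queue; return the mean time in system."""
            rng = random.Random(seed)
            t_arrival = t_server_free = total = 0.0
            for _ in range(n_customers):
                t_arrival += rng.expovariate(lam)            # Poisson arrivals
                start = max(t_arrival, t_server_free)        # possible wait in queue
                t_server_free = start + rng.expovariate(mu)  # exponential service
                total += t_server_free - t_arrival           # sojourn time
            return total / n_customers

        lam, mu = 0.8, 1.0
        print(f"simulated  W = {mm1_mean_sojourn(lam, mu):.3f}")
        print(f"analytical W = {1.0 / (mu - lam):.3f}")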

  5. Porous media fluid flow, heat, and mass transport model with rock stress coupling

    International Nuclear Information System (INIS)

    Runchal, A.K.

    1980-01-01

    This paper describes the physical and mathematical basis of a general purpose porous media flow model, GWTHERM. The mathematical basis of the model is obtained from the coupled set of the classical governing equations for the mass, momentum and energy balance. These equations are embodied in a computational model which is then coupled externally to a linearly elastic rock-stress model. This coupling is rather exploratory and based upon empirical correlations. The coupled model is able to take account of time-dependent, inhomogeneous and anisotropic features of the hydrogeologic, thermal and transport phenomena. A number of applications of the model have been made. Illustrations from the application of the model to nuclear waste repositories are included

  6. Prolegomena to any future computer evaluation of the QCD mass spectrum

    International Nuclear Information System (INIS)

    Parisi, G.

    1984-01-01

    In recent years we have seen many computer-based evaluations of the QCD mass spectrum. At the present moment a reliable control of the systematic errors has not yet been achieved; since the main sources of systematic error are the non-zero value of the lattice spacing and the finite size of the box in which the hadrons are confined, we need to do extensive computations on lattices of different shapes in order to be able to extrapolate to zero lattice spacing and to an infinite box. While it is necessary to go to larger lattices, we also need efficient algorithms in order to minimize the statistical and systematic errors and to decrease the CPU time (and the memory) used in the computation. In these lectures the reader will find a review of the most common algorithms (with the exception of the hopping-parameter expansion as applied to gauge theories, which can be found in Montvay's contribution to this school); the weak points of the various algorithms are discussed and, when possible, ways to improve them are suggested. For the reader's convenience the basic formulae are recalled in the second section; in section three we find a discussion of finite-volume effects, while the effects of a finite lattice spacing are discussed in section four; some techniques for fighting the statistical errors and the critical slowing down are found in sections five and six respectively. Finally, the conclusions are in section seven.

  7. Computational intelligence applications in modeling and control

    CERN Document Server

    Vaidyanathan, Sundarapandian

    2015-01-01

    The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of the intelligent activity of human beings and nature. The essence of systems based on computational intelligence is to process and interpret data of various natures, so CI is closely connected with the growth of available data as well as the capability to process it, two mutually supportive factors. Developed theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine and others. They are used in image and sound processing and identification, signal processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, requesting systems in banking, diagnostic systems, expert systems and many other practical implementations. This book consists of 16 contributed chapters by subject experts who are specialized in the various topics addressed in this book. The special chapters have been brought ...

  8. A High-Resolution Model of Water Mass Transformation and Transport in the Weddell Sea

    Science.gov (United States)

    Hazel, J.; Stewart, A.

    2016-12-01

    The ocean circulation around the Antarctic margins has a pronounced impact on the global ocean and climate system. One of these impacts is the closing of the global meridional overturning circulation (MOC) via formation of dense Antarctic Bottom Water (AABW), which ventilates a large fraction of the subsurface ocean. AABW is also partially composed of modified Circumpolar Deep Water (CDW), a warm, mid-depth water mass whose transport towards the continent has the potential to induce rapid retreat of marine-terminating glaciers. Previous studies suggest that these water mass exchanges may be strongly influenced by high-frequency processes such as downslope gravity currents, tidal flows, and mesoscale/submesoscale eddy transport. However, evaluating the relative contributions of these processes to near-Antarctic water mass transports is hindered by the region's relatively small scales of motion and the logistical difficulties in taking measurements beneath sea ice. In this study we develop a regional model of the Weddell Sea, the largest established source of AABW. The model is forced by an annually-repeating atmospheric state constructed from the Antarctic Mesoscale Prediction System data and by annually-repeating lateral boundary conditions constructed from the Southern Ocean State Estimate. The model incorporates the full Filchner-Ronne cavity and simulates the thermodynamics and dynamics of sea ice. To analyze the role of high-frequency processes in the transport and transformation of water masses, we compute the model's overturning circulation, water mass transformations, and ice sheet basal melt at model horizontal grid resolutions ranging from 1/2 degree to 1/24 degree. We temporally decompose the high-resolution (1/24 degree) model circulation into components due to mean, eddy and tidal flows and discuss the geographical dependence of these processes and their impact on water mass transformation and transport.

  9. COMPUTATIONAL MODELING OF AIRFLOW IN NONREGULAR SHAPED CHANNELS

    Directory of Open Access Journals (Sweden)

    A. A. Voronin

    2013-05-01

    The basic approaches to computational modeling of airflow in the human nasal cavity are analyzed. Different models of turbulent flow which may be used in order to calculate air velocity and pressure are discussed. Experimental measurement results of airflow temperature are illustrated. A geometrical model of the human nasal cavity reconstructed from computed tomography scans and numerical simulation results of airflow inside this model are also given. Spatial distributions of velocity and temperature for inhaled and exhaled air are shown.

  10. Nonuniversal gaugino masses from nonsinglet F-terms in nonminimal unified models

    International Nuclear Information System (INIS)

    Martin, Stephen P.

    2009-01-01

    In phenomenological studies of low-energy supersymmetry, running gaugino masses are often taken to be equal near the scale of apparent gauge coupling unification. However, many known mechanisms can avoid this universality, even in models with unified gauge interactions. One example is an F-term vacuum expectation value that is a singlet under the standard model gauge group but transforms nontrivially in the symmetric product of two adjoint representations of a group that contains the standard model gauge group. Here, I compute the ratios of gaugino masses that follow from F-terms in nonsinglet representations of SO(10) and E_6 and their subgroups, extending well-known results for SU(5). The SO(10) results correct some long-standing errors in the literature.
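
    For orientation, the well-known SU(5) ratios that the abstract says it extends can be tabulated directly; the values below are the ones commonly quoted in the literature for an F-term in each representation appearing in the symmetric product of two SU(5) adjoints (the paper's new SO(10) and E_6 results are not reproduced here):

        # Gaugino mass ratios M_1 : M_2 : M_3 at the unification scale,
        # as commonly quoted for an F-term VEV in each SU(5) representation.
        SU5_GAUGINO_RATIOS = {
            "1":   (1.0, 1.0, 1.0),     # singlet: universal gaugino masses
            "24":  (-0.5, -1.5, 1.0),
            "75":  (-5.0, 3.0, 1.0),
            "200": (10.0, 2.0, 1.0),
        }

        for rep, (m1, m2, m3) in SU5_GAUGINO_RATIOS.items():
            print(f"F-term in {rep:>3}: M1 : M2 : M3 = {m1:g} : {m2:g} : {m3:g}")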

  11. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development

    Science.gov (United States)

    Xu, Fei; Zhang, Yaning; Jin, Guangri; Li, Bingxi; Kim, Yong-Song; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    A three-phase model capable of predicting the heat transfer and moisture migration of the soil freezing process was developed based on the Shen-Chen model and the mechanisms of heat and mass transfer in unsaturated soil freezing. The pre-melted film was taken into consideration, and the relationship between film thickness and soil temperature was used to calculate the liquid water fraction in both the frozen zone and the freezing fringe. The force that causes the moisture migration was calculated as the sum of several interactive forces, with the suction in the pre-melted film regarded as an interactive force between ice and water. Two kinds of resistance were treated as body forces related to the water films between the ice grains and soil grains, and a block force was introduced in place of gravity to keep balance with gravity before soil freezing. The lattice Boltzmann method was used in the simulation, and the input variables included the size of the computational domain, obstacle fraction, liquid water fraction, air fraction and soil porosity. The model is capable of predicting the water content distribution along the soil depth and the variations in water content and temperature during the soil freezing process.
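
    The authors' three-phase Shen-Chen model is far richer than anything reproducible here, but the basic lattice Boltzmann machinery it builds on (stream-and-collide with a BGK relaxation) is compact. A minimal D2Q9 sketch for pure heat diffusion on a periodic box, with all settings illustrative:

        import numpy as np

        # D2Q9 lattice: weights and discrete velocities
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
        c = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
             (1, 1), (-1, 1), (-1, -1), (1, -1)]

        NX = NY = 64
        tau = 0.8                            # BGK relaxation time
        alpha = (tau - 0.5) / 3.0            # resulting lattice diffusivity

        T = np.zeros((NX, NY))
        T[28:36, 28:36] = 1.0                # initial warm patch
        f = w[:, None, None] * T[None, :, :] # populations start at equilibrium

        for _ in range(500):
            feq = w[:, None, None] * T[None, :, :]   # zero-velocity equilibrium
            f += (feq - f) / tau                     # BGK collision
            for i, (cx, cy) in enumerate(c):         # streaming step (periodic)
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
            T = f.sum(axis=0)                        # macroscopic temperature

        print(f"lattice diffusivity = {alpha:.3f}, peak T after 500 steps = {T.max():.3f}")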

  12. Model Checking Quantified Computation Tree Logic

    NARCIS (Netherlands)

    Rensink, Arend; Baier, C; Hermanns, H.

    2006-01-01

    Propositional temporal logic is not suitable for expressing properties on the evolution of dynamically allocated entities over time. In particular, it is not possible to trace such entities through computation steps, since this requires the ability to freely mix quantification and temporal operators.

  13. Computational compliance criteria in water hammer modelling

    Directory of Open Access Journals (Sweden)

    Urbanowicz Kamil

    2017-01-01

    Among many numerical methods (finite: difference, element, volume etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Levy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.

  14. Computational compliance criteria in water hammer modelling

    Science.gov (United States)

    Urbanowicz, Kamil

    2017-10-01

    Among many numerical methods (finite: difference, element, volume etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Levy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
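
    The two recommendations of the study (at least 10 elements per pipe, CFL equal to one) can be turned into a small discretisation helper: the pipe with the smallest dx/a fixes a shared MOC time step, and element counts are then rounded down so no pipe exceeds CFL = 1. Pipe lengths and wavespeeds below are illustrative:

        import math

        def moc_discretisation(pipes, min_elements=10):
            """Choose a common MOC time step with CFL = a*dt/dx <= 1 in every pipe.

            pipes: list of (length_m, wavespeed_m_s) tuples; each pipe gets at
            least `min_elements` reaches, per the study's convergence result.
            """
            dt = min((length / min_elements) / a for length, a in pipes)
            plan = []
            for length, a in pipes:
                n = math.floor(length / (a * dt))   # rounded down so CFL <= 1
                plan.append((n, a * dt / (length / n)))
            return dt, plan

        pipes = [(100.0, 1200.0), (55.0, 1100.0)]   # (length m, wavespeed m/s)
        dt, plan = moc_discretisation(pipes)
        print(f"common time step dt = {dt * 1e3:.2f} ms")
        for (n, cfl), (length, a) in zip(plan, pipes):
            print(f"pipe L = {length:g} m, a = {a:g} m/s: {n} elements, CFL = {cfl:.3f}")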

  15. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both the fundamentals and real-problem applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  16. Mass transfer models analysis for the structured packings

    International Nuclear Information System (INIS)

    Suastegui R, A.O.

    1997-01-01

    The models that have been developed to describe the mechanism of mass transfer through structured packings have limitations in their application, which creates uncertainty about their use in industrial chemical processes. The main parameters used in mass-transfer studies are the hydrodynamics of the column bed, the geometry of the bed, the physical-chemical properties of the mixture, and the flow regime of the operation between the liquid and gas flows. The sensitivity of each of these parameters makes it arduous to develop sound proposals and to interpret the phenomenon correctly. In order to show the importance of these parameters in mass transfer, this work analyzes the absorption process for the water-air system, applying models for structured packings in packed columns. The selected models were developed by Bravo and collaborators in 1985 and 1992, and are used to determine the previously mentioned parameters for the water-air system, using a structured packing built at the National Institute of Nuclear Research. The results of applying the models and their discussion are presented. (Author)

  17. Above the cloud computing: applying cloud computing principles to create an orbital services model

    Science.gov (United States)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full costs of development and launch are still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision-making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results, based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed. These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft).
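
    A sketch of what the advertised-service cache and the selection heuristic might look like; every name and field here is hypothetical, since the abstract does not fix a concrete data format:

        from dataclasses import dataclass, field

        @dataclass
        class ServiceAd:
            """One capability broadcast by a provider craft."""
            provider_id: str
            service_type: str        # e.g. "dsp", "compute", "downlink"
            capacity: float          # abstract performance score
            orbital_elements: tuple  # enough to predict mutual visibility
            price: float             # priority/credential-weighted cost

        @dataclass
        class ServiceCache:
            """Consumer-side cache of recently heard advertisements."""
            ads: list = field(default_factory=list)

            def hear(self, ad):
                self.ads.append(ad)

            def select(self, service_type, visible):
                # Heuristic: among currently visible providers of the right
                # type, prefer the best capacity-per-price ratio.
                usable = [a for a in self.ads
                          if a.service_type == service_type and visible(a)]
                return max(usable, key=lambda a: a.capacity / a.price, default=None)

        cache = ServiceCache()
        cache.hear(ServiceAd("sat-A", "dsp", 5.0, (7000.0, 0.001, 51.6), 2.0))
        cache.hear(ServiceAd("sat-B", "dsp", 8.0, (6900.0, 0.002, 97.8), 4.0))
        best = cache.select("dsp", visible=lambda ad: True)   # visibility stubbed out
        print(f"selected provider: {best.provider_id}")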

  18. Infinite nuclear matter model and mass formulae for nuclei

    International Nuclear Information System (INIS)

    Satpathy, L.

    2016-01-01

    The matter composing the nucleus is a quantum-mechanical interacting many-fermionic system. However, the shell structure and the classical liquid drop have been taken as the two main features of nuclear dynamics, and they have guided the evolution of nuclear physics. These two features can be considered as the macroscopic manifestation of the microscopic dynamics of the nucleons at a fundamental level. Various mass formulae have been developed based on either of these features over the years, resulting in many ambiguities and uncertainties that pose many challenges in this field. Keeping this in view, the Infinite Nuclear Matter (INM) model has been developed during the last couple of decades with a many-body theoretical foundation employing the celebrated Hugenholtz-Van Hove theorem, quite appropriate for the interacting quantum-mechanical nuclear system. A mass formula called the INM mass formula based on this model yields an rms deviation of 342 keV, the lowest in the literature. Highlights of its results include the determination of the INM density in agreement with electron scattering data, leading to the resolution of the long-standing 'r_0 paradox'; it also predicts new magic numbers giving rise to new islands of stability in the drip-line regions. This is the manifestation of a new phenomenon in which the shell effect overcomes the repulsive component of the nucleon-nucleon force, resulting in the broadening of the stability peninsula. Shell quenching in the N = 82 and N = 126 shells, and several islands of inversion, have been predicted. The model determines the empirical value of the nuclear compression modulus using about 4500 high-precision data comprising nuclear masses and neutron and proton separation energies. The talk will give a critical review of the field of mass formulae and our understanding of nuclear dynamics as a whole.

  19. Models of mass segregation at the Galactic Centre

    International Nuclear Information System (INIS)

    Freitag, Marc; Amaro-Seoane, Pau; Kalogera, Vassiliki

    2006-01-01

    We study the process of mass segregation through 2-body relaxation in galactic nuclei with a central massive black hole (MBH). This study has bearing on a variety of astrophysical questions, from the distribution of X-ray binaries at the Galactic centre, to tidal disruptions of main-sequence and giant stars, to inspirals of compact objects into the MBH, an important category of events for the future space-borne gravitational wave interferometer LISA. In relatively small galactic nuclei, typical hosts of MBHs with masses in the range 10^4 - 10^7 M_⊙, the relaxation induces the formation of a steep density cusp around the MBH and strong mass segregation. Using a spherical stellar-dynamical Monte-Carlo code, we simulate the long-term relaxational evolution of galactic nucleus models with a spectrum of stellar masses. Our focus is the concentration of stellar black holes to the immediate vicinity of the MBH. Special attention is given to models developed to match the conditions in the Milky Way nucleus.

  20. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    DEFF Research Database (Denmark)

    Mazzoni, Alberto; Linden, Henrik; Cuntz, Hermann

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). ... in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
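
    The paper derives proxies for the LFP from the synaptic currents of the LIF network; the sketch below implements a weighted-sum proxy of that general flavour. The 6 ms delay on the excitatory trace and the 1.65 weight on the inhibitory trace are stated here as assumptions for illustration, not as the paper's fitted values:

        import numpy as np

        def lfp_proxy(i_ampa, i_gaba, dt_ms, delay_ms=6.0, gaba_weight=1.65):
            """Weighted sum of rectified population synaptic currents."""
            shift = int(round(delay_ms / dt_ms))
            ampa = np.abs(np.roll(i_ampa, shift))   # delayed AMPA contribution
            ampa[:shift] = 0.0                      # drop wrapped-around samples
            return ampa + gaba_weight * np.abs(i_gaba)

        # Toy currents: 1 kHz sampling, oscillating excitation and inhibition
        dt = 1.0
        t = np.arange(0.0, 500.0, dt)
        i_ampa = np.sin(2 * np.pi * t / 50.0)
        i_gaba = -0.6 * np.sin(2 * np.pi * (t - 5.0) / 50.0)
        lfp = lfp_proxy(i_ampa, i_gaba, dt)
        print(f"proxy LFP: mean = {lfp.mean():.3f}, std = {lfp.std():.3f}")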

  1. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to 'freeform' curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  2. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper tries to integrate a cloud-computing-specific model into aircraft design. The work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  3. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  4. Generalized one-loop neutrino mass model with charged particles

    Science.gov (United States)

    Cheung, Kingman; Okada, Hiroshi

    2018-04-01

    We propose a radiative neutrino-mass model by introducing 3 generations of fermion pairs E^{-(N+1)/2} E^{+(N+1)/2} and a couple of multicharged bosonic doublet fields Φ_{N/2}, Φ_{N/2+1}, where N = 1, 3, 5, 7, 9. We show that the models can satisfy the neutrino masses and oscillation data, and are consistent with lepton-flavor violations, the muon anomalous magnetic moment, the oblique parameters, and the beta function of the U(1)_Y hypercharge gauge coupling. We also discuss the collider signals for various N, namely, multicharged leptons in the final state from the Drell-Yan production of E^{-(N+1)/2} E^{+(N+1)/2}. In general, the larger the N the more charged leptons will appear in the final state.

  5. Computer modeling of ORNL storage tank sludge mobilization and mixing

    International Nuclear Information System (INIS)

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.

  6. Computing Models of M-type Host Stars and their Panchromatic Spectral Output

    Science.gov (United States)

    Linsky, Jeffrey; Tilipman, Dennis; France, Kevin

    2018-06-01

    We have begun a program of computing state-of-the-art model atmospheres, from the photospheres to the coronae, of M stars that are the host stars of known exoplanets. For each model we are computing the emergent radiation at all wavelengths that are critical for assessing photochemistry and mass-loss from exoplanet atmospheres. In particular, we are computing the stellar extreme ultraviolet radiation that drives hydrodynamic mass loss from exoplanet atmospheres and is essential for determining whether an exoplanet is habitable. The model atmospheres are computed with the SSRPM radiative transfer/statistical equilibrium code developed by Dr. Juan Fontenla. The code solves for the non-LTE statistical equilibrium populations of 18,538 levels of 52 atomic and ion species and computes the radiation from all species (435,986 spectral lines) and about 20,000,000 spectral lines of 20 diatomic species. The first model computed in this program was for the modestly active M1.5 V star GJ 832 by Fontenla et al. (ApJ 830, 152 (2016)). We will report on a preliminary model for the more active M5 V star GJ 876 and compare this model and its emergent spectrum with GJ 832. In the future, we will compute and intercompare semi-empirical models and spectra for all of the stars observed with the HST MUSCLES Treasury Survey, the Mega-MUSCLES Treasury Survey, and additional stars including Proxima Cen and Trappist-1. This multiyear theory program is supported by a grant from the Space Telescope Science Institute.

  7. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  8. GUT and flavor models for neutrino masses and mixing

    Science.gov (United States)

    Meloni, Davide

    2017-10-01

    In recent years experiments have established the existence of neutrino oscillations, and most of the oscillation parameters have been measured with good accuracy. However, in spite of many interesting ideas, little real light has been shed on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.

  9. New Constraints on the running-mass inflation model

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro

    2002-01-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman $\\alpha $ forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of $n$, which occurs in a physically reasonabl...

  10. The running-mass inflation model and WMAP

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale-dependence of $n$, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...

  11. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
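
    The "fractal measure" the authors refer to is, in the simplest setting, a box-counting dimension; a minimal estimator for a 2-D point set follows, in which the random-walk stand-in for a crown silhouette is purely illustrative:

        import numpy as np

        def box_counting_dimension(points, sizes):
            """Estimate the box-counting dimension of a 2-D point set."""
            pts = np.asarray(points, float)
            pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0).max()  # unit square
            counts = []
            for s in sizes:
                boxes = {tuple(idx) for idx in np.floor(pts / s).astype(int)}
                counts.append(len(boxes))          # occupied boxes at scale s
            # dimension = slope of log N(s) versus log(1/s)
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(3)
        cloud = np.cumsum(rng.standard_normal((20000, 2)), axis=0)   # toy point set
        sizes = [1/4, 1/8, 1/16, 1/32, 1/64]
        print(f"box-counting dimension ~ {box_counting_dimension(cloud, sizes):.2f}")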

  12. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state-of-the-art parallel computational model research is reviewed. We will introduce various models that were developed during the past decades. According to their targeting architecture features, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed based on the three-generation classification. We believe that with the ever increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  13. Patentability aspects of computational cancer models

    Science.gov (United States)

    Lishchuk, Iryna

    2017-07-01

    Multiscale cancer models, implemented in silico, simulate tumor progression at various spatial and temporal scales. Since such models have innovative substance and the potential to be applied as decision support tools in clinical practice, patenting and obtaining patent rights in cancer models seems prima facie possible. In this paper we inquire what legal hurdles cancer models need to overcome in order to be patented.

  14. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    Science.gov (United States)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) offers built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  15. Computation of fluid flow in distending tunnels with mass, momentum and energy exchange with the walls

    Energy Technology Data Exchange (ETDEWEB)

    Maw, J R [AWRE, Aldermaston (United Kingdom)

    1970-05-01

    When calculating the effects of an underground explosion it may be useful to be able to calculate the flow of the very hot gaseous products along pipes or tunnels. For example it might be possible to treat a fault in the surrounding rock as an idealised pipe forced open by the high pressure generated by the explosion. Another possibility might be the use of a specially constructed tunnel to channel the energy released in some preferred direction. In such cases the gas flow is complicated by several phenomena. The cross section of the pipe may vary with axial distance and also distend with time. Heat will be lost to the walls of the pipe, which may be ablated, leading to entrainment of wall material into the gas flow. In addition wall friction will tend to retard the flow. This paper describes a simple computer program, HAT, which was written to calculate such flows. The flow is assumed to be quasi-one-dimensional in that flow quantities such as pressure, density and axial velocity do not vary across the pipe. However the radius of the pipe may vary both with axial distance and with time. Sources or sinks of mass, momentum and energy are included in the governing equations, which allow simulation of the phenomena described above. The governing equations are derived in Eulerian form and approximated using an extension of the finite difference scheme of Lax. A brief outline of the computational procedure is given. To demonstrate the capabilities and assess the accuracy of the program two simple problems are calculated using HAT: (i) the motion of a shock along a converging pipe; (ii) the effect of mass addition through the walls on the motion of a shock along a uniform pipe. In both cases results obtained using HAT are compared with theoretical analyses of the motion.
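
    HAT itself extends Lax's scheme with area variation and wall source terms; as a minimal illustration of the underlying method only, here is the classic Lax step applied to the plain 1D Euler equations on a Sod shock-tube problem (no distending walls, ablation or friction):

        import numpy as np

        gamma = 1.4
        N = 400
        dx = 1.0 / N
        x = (np.arange(N) + 0.5) * dx

        # Sod initial data: high pressure on the left, low on the right
        rho = np.where(x < 0.5, 1.0, 0.125)
        vel = np.zeros(N)
        p = np.where(x < 0.5, 1.0, 0.1)
        E = p / (gamma - 1.0) + 0.5 * rho * vel**2
        U = np.stack([rho, rho * vel, E])           # conserved variables

        def flux(U):
            rho, mom, E = U
            u = mom / rho
            p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
            return np.stack([mom, mom * u + p, (E + p) * u])

        t, t_end, cfl = 0.0, 0.15, 0.9
        while t < t_end:
            rho, mom, E = U
            u = mom / rho
            p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
            a = np.sqrt(gamma * p / rho)            # sound speed
            dt = cfl * dx / np.max(np.abs(u) + a)   # CFL-limited time step
            F = flux(U)
            # Lax: U_j^{n+1} = (U_{j-1} + U_{j+1})/2 - dt/(2 dx) (F_{j+1} - F_{j-1})
            Unew = U.copy()                         # end cells held fixed
            Unew[:, 1:-1] = (0.5 * (U[:, :-2] + U[:, 2:])
                             - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2]))
            U, t = Unew, t + dt

        print(f"density range at t = {t_end}: {U[0].min():.3f} .. {U[0].max():.3f}")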

  16. Introduction to computation and modeling for differential equations

    CERN Document Server

    Edsberg, Lennart

    2008-01-01

    An introduction to scientific computing for differential equations. Introduction to Computation and Modeling for Differential Equations provides a unified and integrated view of numerical analysis, mathematical modeling in applications, and programming to solve differential equations, which is essential in problem-solving across many disciplines, such as engineering, physics, and economics. This book successfully introduces readers to the subject through a unique "Five-M" approach: Modeling, Mathematics, Methods, MATLAB, and Multiphysics. This approach facilitates a thorough understanding of h

  17. Introduction to numerical modeling of thermohydrologic flow in fractured rock masses

    International Nuclear Information System (INIS)

    Wang, J.S.Y.

    1980-01-01

    More attention is being given to the possibility of nuclear waste isolation in hard rock formations. The waste will generate heat which raises the temperature of the surrounding fractured rock masses and induces buoyancy flow and pressure change in the fluid. These effects introduce the potential hazard of radionuclides being carried to the biosphere, and affect the structure of a repository by stress changes in the rock formation. The thermohydrological and thermomechanical responses are determined by the fractures as well as the intact rock blocks. The capability of modeling fractured rock masses is essential to site characterization and repository evaluation. The fractures can be modeled either as a discrete system, taking into account the detailed fracture distributions, or as a continuum representing the spatial average of the fractures. A numerical model is characterized by the governing equations, the numerical methods, the computer codes, the validations, and the applications. These elements of the thermohydrological models are discussed. Along with the general review, some of the considerations in modeling fractures are also discussed. Some remarks on the research needs in modeling fractured rock mass conclude the paper

  18. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR, and free standing clad, as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely the physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project 'Light water reactor fuel rod modelling code evaluation' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs.

  19. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and at high burnups. The code is capable of analyzing fuel pins with both collapsible cladding, as in PHWRs, and free-standing cladding, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and on modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented: a physically based mechanistic model, the standard ANS 5.4 model, and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation, or the Halden equation. The flux distribution across the pellet is modelled using the RADAR model. For modelling pellet-clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods from the EPRI project "Light water reactor fuel rod modelling code evaluation" and on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. These case studies demonstrate the satisfactory performance of FAIR. (author). 12 refs, 5 figs.

  20. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience over the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including the plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery in individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make the development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling: regression-based prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as by wearable sensor-based records of daily activity.
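
    Computational models of motor learning of the kind this record describes are often built on trial-by-trial learning rules. The sketch below simulates one of the simplest, a linear state-space model in which performance improves in proportion to the remaining error on each practice trial; the retention and learning-rate parameters are illustrative assumptions, not values fitted to any dataset or taken from the paper:

    ```python
    import numpy as np

    def simulate_recovery(n_trials, retention=0.99, learning_rate=0.1,
                          target=1.0, x0=0.2, noise_sd=0.02, seed=0):
        """Trial-by-trial learning: x[k+1] = A*x[k] + B*(target - x[k]) + noise."""
        rng = np.random.default_rng(seed)
        x = np.empty(n_trials)
        x[0] = x0                      # initial motor performance (0 = none, 1 = full)
        for k in range(n_trials - 1):
            error = target - x[k]      # remaining error drives learning on each trial
            x[k + 1] = (retention * x[k] + learning_rate * error
                        + rng.normal(0.0, noise_sd))
        return x

    performance = simulate_recovery(n_trials=200)
    print("performance after 50, 100, 200 trials:",
          performance[49], performance[99], performance[-1])
    ```

    Even this two-parameter model makes a testable prediction, an exponential-like recovery curve whose rate depends on practice intensity, which is the sort of mechanistic hypothesis the authors contrast with regression-based prognostic modeling.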