WorldWideScience

Sample records for simplified computational approach

  1. A simplified approach to evaluating severe accident source term for PWR

    International Nuclear Information System (INIS)

    Huang, Gaofeng; Tong, Lili; Cao, Xuewu

    2014-01-01

    Highlights: • Traditional source term evaluation approaches have been studied. • A simplified source term evaluation approach for a 600 MW PWR is studied. • Five release categories are established. - Abstract: In early NPP designs, no plant-specific severe accident source term evaluation was considered, and generic source terms have been used for a number of NPPs. To obtain a best estimate, a plant-specific source term evaluation should be performed for an NPP. The traditional source term evaluation approaches (the mechanistic approach and the parametric approach) have some difficulties associated with their implementation and are not consistent with cost-benefit assessment. A simplified approach for evaluating the severe accident source term for a PWR is therefore studied. In the simplified approach, a simplified containment event tree is established. Through representative case selection, weighted coefficient evaluation, computation of representative source term cases and weighted computation, five containment release categories are established: containment bypass, containment isolation failure, containment early failure, containment late failure and intact containment.

  2. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface for executing programs on the BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package, called PSHED, provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  3. A simplified approach for simulation of wake meandering

    Energy Technology Data Exchange (ETDEWEB)

    Thomsen, Kenneth; Aagaard Madsen, H.; Larsen, Gunner; Juul Larsen, T.

    2006-03-15

    This fact-sheet describes a simplified approach for one part of the recently developed dynamic wake model for aeroelastic simulations of wind turbines operating in wake. The part described here concerns the meandering process only; the other part of the simplified approach, the wake deficit profile, is outside the scope of the present fact-sheet. Work on simplified models for the wake deficit profile is ongoing. (au)

  4. A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets

    Science.gov (United States)

    Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele

    2016-12-01

    Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach developed in an attempt to optimize the oxidizer injector design is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out with a commercial solver to investigate several injection configurations, with the aim of increasing the fuel regression rate and minimizing the consumption unevenness while still favoring the establishment of flow recirculation at the motor head end; this recirculation, generated by an axial nozzle injector, has been demonstrated to promote combustion stability as well as larger efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples, with typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and a uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allows the employed design approach to be validated. Finally, an optimized injector design is proposed.

  5. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  6. Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry

    International Nuclear Information System (INIS)

    Salari, K.; McWherter-Payne, M.

    2003-01-01

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces

  7. Delayed ripple counter simplifies square-root computation

    Science.gov (United States)

    Cliff, R.

    1965-01-01

    Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
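
    The ripple-subtract idea can be illustrated in software. Below is a minimal sketch, assuming the classic variant in which successive odd integers are subtracted and the count of successful subtractions gives the integer square root; the original hardware scheme may differ in detail.

```python
def isqrt_by_subtraction(n: int) -> int:
    """Integer square root by repeated subtraction of successive odd numbers.

    Relies on 1 + 3 + 5 + ... + (2k - 1) = k**2, so the number of successful
    subtractions equals floor(sqrt(n)).
    """
    count = 0
    odd = 1
    while n >= odd:
        n -= odd
        odd += 2
        count += 1
    return count


# Quick check against a few known values.
assert [isqrt_by_subtraction(n) for n in (0, 1, 8, 9, 15, 16, 100)] == [0, 1, 2, 3, 3, 4, 10]
```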

  8. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    Science.gov (United States)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-01-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405

  9. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    Science.gov (United States)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
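
    For orientation, the classical baselines named above (e.g., LASSO) can be run on a small, simplified sequence dataset in a few lines. The sketch below uses a hypothetical one-hot encoding of short DNA sequences and made-up binding scores; it is not the authors' pipeline or data.

```python
import numpy as np
from sklearn.linear_model import Lasso

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a DNA sequence into a flat feature vector."""
    m = np.zeros((len(seq), len(BASES)))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m.ravel()

# Hypothetical toy data: short sequences with invented binding scores.
seqs = ["ACGTAC", "ACGTTT", "TTGTAC", "GGGTAC", "ACGCAC", "TTTTTT"]
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.8, 0.1])

X = np.stack([one_hot(s) for s in seqs])
model = Lasso(alpha=0.01).fit(X, scores)

pred = model.predict(X)
print("predicted scores:", np.round(pred, 2))
print("ranking (best first):", np.argsort(-pred))
```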

  10. Simplified probabilistic approach to determine safety factors in deterministic flaw acceptance criteria

    International Nuclear Information System (INIS)

    Barthelet, B.; Ardillon, E.

    1997-01-01

    The flaw acceptance rules for nuclear components rely on deterministic criteria intended to ensure the safe operation of plants. The interest in having a reliable method of evaluating the safety margins and the integrity of components led Electricite de France to launch a study linking safety factors with a requested reliability. A simplified analytical probabilistic approach is developed to analyse the failure risk in Fracture Mechanics. Assuming lognormal distributions of the main random variables, it is possible, considering a simple Linear Elastic Fracture Mechanics model, to determine the failure probability as a function of mean values and logarithmic standard deviations. The 'design' failure point can be calculated analytically. Partial safety factors on the main variables (stress, crack size, material toughness) are obtained in relation to reliability target values. The approach is generalized to elastic-plastic Fracture Mechanics (piping) by fitting J as a power-law function of stress, crack size and yield strength. The simplified approach is validated by detailed probabilistic computations with the PROBAN computer program. Assuming reasonable coefficients of variation (logarithmic standard deviations), the method helps to calibrate safety factors for different components, taking into account reliability target values in normal, emergency and faulted conditions. Statistical data on the mechanical properties of the main basic materials complement the study. The work involves laboratory results and manufacturing data. The results of this study are discussed within a working group of the French in-service inspection code RSE-M. (authors)
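
    The kind of closed-form result described above can be illustrated with a toy LEFM criterion: failure when K_I = sigma*sqrt(pi*a) exceeds the toughness K_IC, with sigma, a and K_IC lognormal, so that the log safety margin is normal and the failure probability follows directly. The numerical values below are illustrative assumptions, not the study's data.

```python
import math
from scipy.stats import norm

def lefm_failure_probability(mu_ln_stress, sd_ln_stress,
                             mu_ln_crack, sd_ln_crack,
                             mu_ln_kic, sd_ln_kic):
    """P[K_I > K_IC] with K_I = stress * sqrt(pi * a), all variables lognormal.

    ln K_I = ln stress + 0.5*(ln pi + ln a) is normal, so the margin
    M = ln K_IC - ln K_I is normal and P_f = Phi(-mean_M / std_M).
    """
    mu_ln_ki = mu_ln_stress + 0.5 * (math.log(math.pi) + mu_ln_crack)
    var_ln_ki = sd_ln_stress**2 + 0.25 * sd_ln_crack**2
    mu_margin = mu_ln_kic - mu_ln_ki
    sd_margin = math.sqrt(sd_ln_kic**2 + var_ln_ki)
    return norm.cdf(-mu_margin / sd_margin)

# Illustrative medians: stress 200 MPa, crack depth 5 mm, toughness 150 MPa*sqrt(m).
p_f = lefm_failure_probability(math.log(200.0), 0.10,
                               math.log(0.005), 0.30,
                               math.log(150.0), 0.15)
print(f"failure probability ~ {p_f:.2e}")
```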

  11. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers, using the third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within

  12. A simplified model for computing equation of state of argon plasma

    International Nuclear Information System (INIS)

    Wang Caixia; Tian Yangmeng

    2006-01-01

    The paper presents a simplified new model for computing the equation of state and the ionization degree of argon plasma, based on the Thomas-Fermi (TF) statistical model: the authors fitted the numerical results for the ionization potential calculated with the Thomas-Fermi statistical model and obtained an analytical function of the potential versus the degree of ionization, then calculated the ionization potential and the average degree of ionization for argon versus temperature and density in the local thermal equilibrium case at 10-1000 eV. The results calculated with this simplified model are basically in agreement with several sets of theoretical and experimental data. This simplified model can be used to calculate the equation of state of plasma mixtures and is expected to find wider use in the field of EML technology involving strongly ionized plasmas. (authors)

  13. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Science.gov (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a process which can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results but are time-consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  14. Steam generator transient studies using a simplified two-fluid computer code

    International Nuclear Information System (INIS)

    Munshi, P.; Bhatnagar, R.; Ram, K.S.

    1985-01-01

    A simplified two-fluid computer code has been used to simulate reactor-side (or primary-side) transients in a PWR steam generator. The disturbances are modelled as ramp inputs for pressure, internal energy and mass flow-rate for the primary fluid. The CPU time for a transient duration of 4 s is approx. 10 min on a DEC-1090 computer system. The results are thermodynamically consistent and encouraging for further studies. (author)

  15. Heterogeneous Computing in Economics: A Simplified Approach

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a ...

  16. A simplified approach for the computation of steady two-phase flow in inverted siphons.

    Science.gov (United States)

    Diogo, A Freire; Oliveira, Maria C

    2016-01-15

    Hydraulic, sanitary, and sulfide control conditions of inverted siphons, particularly in large wastewater systems, can be substantially improved by continuous air injection in the base of the inclined rising branch. This paper presents a simplified approach that was developed for the two-phase flow of the rising branch using the energy equation for a steady pipe flow, based on the average fluid fraction, observed slippage between phases, and isothermal assumption. As in a conventional siphon design, open channel steady uniform flow is assumed in inlet and outlet chambers, corresponding to the wastewater hydraulic characteristics in the upstream and downstream sewers, and the descending branch operates in steady uniform single-phase pipe flow. The proposed approach is tested and compared with data obtained in an experimental siphon setup with two plastic barrels of different diameters operating separately as in a single-barrel siphon. Although the formulations developed are very simple, the results show a good adjustment for the set of the parameters used and conditions tested and are promising mainly for sanitary siphons with relatively moderate heights of the ascending branch. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Classification of computer forensic examination methods using a graph theory approach

    OpenAIRE

    Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov

    2016-01-01

    A classification of computer forensic examination methods based on a graph theory approach is proposed. Using this classification, the search for a suitable forensic examination method can be accelerated, simplified, and automated.

  18. Classification of computer forensic examination methods using a graph theory approach

    Directory of Open Access Journals (Sweden)

    Anna Ravilyevna Smolina

    2016-06-01

    A classification of computer forensic examination methods based on a graph theory approach is proposed. Using this classification, the search for a suitable forensic examination method can be accelerated, simplified, and automated.

  19. MAT-FLX: a simplified code for computing material balances in fuel cycle

    International Nuclear Information System (INIS)

    Pierantoni, F.; Piacentini, F.

    1983-01-01

    This work illustrates a calculation code designed to provide a materials balance for the electronuclear fuel cycle. The calculation method is simplified but relatively precise and employs a progressive tabulated-data approach.

  20. Utilization of handheld computing to simplify compliance

    International Nuclear Information System (INIS)

    Galvin, G.; Rasmussen, J.; Haines, A.

    2008-01-01

    Monitoring job site performance and building a continually improving organization is an ongoing challenge for operators of process and power generation facilities. Stakeholders need to accurately capture records of quality and safety compliance, job progress, and operational experiences (OPEX). This paper explores the use of technology-enabled processes as a means of simplifying compliance with quality, safety, administrative, maintenance and operations requirements. The discussion covers a number of emerging technologies and their application to simplifying task execution and process compliance. The paper also discusses methodologies for refining processes by trending improvements in compliance and by continually optimizing and simplifying through the use of technology. (author)

  1. Image-Based Edge Bundles : Simplified Visualization of Large Graphs

    NARCIS (Netherlands)

    Telea, A.; Ersoy, O.

    2010-01-01

    We present a new approach aimed at understanding the structure of connections in edge-bundling layouts. We combine the advantages of edge bundles with a bundle-centric simplified visual representation of a graph's structure. For this, we first compute a hierarchical edge clustering of a given graph

  2. Quantitative whole body scintigraphy - a simplified approach

    International Nuclear Information System (INIS)

    Marienhagen, J.; Maenner, P.; Bock, E.; Schoenberger, J.; Eilles, C.

    1996-01-01

    In this paper we present investigations of a simplified method of quantitative whole-body scintigraphy using a dual-head LFOV gamma camera and a calibration algorithm without the need for additional attenuation or scatter correction. Validation of this approach with an anthropomorphic phantom as well as in patient studies showed high accuracy in the quantification of whole-body activity (102.8% and 97.72%, respectively); by contrast, organ activities were recovered with an error of up to 12%. The described method can easily be performed using commercially available software packages and is recommended especially for quantitative whole-body scintigraphy in a clinical setting. (orig.)

  3. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modelling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  4. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    Science.gov (United States)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  5. A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach

    International Nuclear Information System (INIS)

    Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent

    2016-01-01

    The stochastic mean-field (SMF) approach makes it possible to treat correlations beyond mean field using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders while only specific terms of the general BBGKY hierarchy are retained. The use of the simplified BBGKY hierarchy is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak-coupling regime, to gain physical insight into effects beyond mean field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak- and strong-coupling regimes. In the strong-coupling regime, it can only be used for short-time evolution; in that case, it gives information on the evolution time-scale close to a saddle point associated with a quantum phase transition. For long-time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and only the full SMF with initial sampling leads to reasonable results. (orig.)

  6. Non-intrusive speech quality assessment in simplified e-model

    OpenAIRE

    Vozňák, Miroslav

    2012-01-01

    The E-model brings a modern approach to the computation of estimated quality, allowing for easy implementation. One of its advantages is that it can be applied in real time. The method is based on a mathematical computation model evaluating transmission path impairments influencing speech signal, especially delays and packet losses. These parameters, common in an IP network, can affect speech quality dramatically. The paper deals with a proposal for a simplified E-model and its pr...
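
    As an illustration of how such an estimate is computed, the sketch below uses the widely cited simplified relations (a piecewise-linear delay impairment, the ITU-T G.107 packet-loss term and R-to-MOS mapping); these standard constants are assumptions here and not necessarily the formulas of the proposed model.

```python
def r_factor(delay_ms: float, packet_loss_pct: float,
             ie: float = 0.0, bpl: float = 10.0) -> float:
    """Simplified E-model rating: R = 93.2 - Id - Ie_eff, other impairments neglected.

    Id uses the common piecewise-linear delay approximation; Ie_eff uses the
    ITU-T G.107 expression for random packet loss with codec robustness Bpl.
    """
    d = delay_ms
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    ie_eff = ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
    return 93.2 - i_d - ie_eff

def mos_from_r(r: float) -> float:
    """ITU-T G.107 mapping from the R-factor to an estimated MOS."""
    if r <= 0.0:
        return 1.0
    if r >= 100.0:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

r = r_factor(delay_ms=150.0, packet_loss_pct=1.0)
print(f"R = {r:.1f}, estimated MOS = {mos_from_r(r):.2f}")
```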

  7. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modelling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view. PMID:27876847

  8. A simplified approach to detect undervoltage tripping of wind generators

    Energy Technology Data Exchange (ETDEWEB)

    Sigrist, Lukas; Rouco, Luis [Universidad Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica

    2012-07-01

    This paper proposes a simplified but fast approach based on a Norton equivalent of wind generators to detect undervoltage tripping of wind generators. This approach is successfully applied to a real wind farm. The relevant grid code requires the wind farm to withstand a voltage dip of 0% retained voltage. The ability of the wind generators to raise the voltage supplying reactive current and to avoid undervoltage tripping is investigated. The obtained results are also compared with the results obtained from detailed dynamic simulations, which make use of wind generator models complying with the relevant grid code. (orig.)

  9. Simplified dynamic analysis to evaluate liquefaction-induced lateral deformation of earth slopes: a computational fluid dynamics approach

    Science.gov (United States)

    Jafarian, Yaser; Ghorbani, Ali; Ahmadi, Omid

    2014-09-01

    Lateral deformation of liquefiable soil is a cause of much damage during earthquakes, reportedly more than other forms of liquefaction-induced ground failure. Researchers have presented studies in which the liquefied soil is treated as a viscous fluid. In this view, liquefied soil behaves as a non-Newtonian fluid whose viscosity decreases as the shear strain rate increases. The current study incorporates computational fluid dynamics to propose a simplified dynamic analysis of the liquefaction-induced lateral deformation of earth slopes. The numerical procedure involves a quasi-linear elastic model for small to moderate strains and a Bingham fluid model for large-strain states during liquefaction. An iterative procedure is used to estimate the strain-compatible shear stiffness of the soil. The post-liquefaction residual strength of the soil is considered as the initial Bingham viscosity. The performance of the numerical procedure is examined using the results of centrifuge model and shaking table tests together with some field observations of lateral ground deformation. The results demonstrate that the proposed procedure predicts the time history of lateral ground deformation with a reasonable degree of precision.
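
    A minimal sketch of the Bingham idealization mentioned above: the apparent viscosity combines a plastic viscosity with a yield stress (interpretable as the post-liquefaction residual strength) divided by the shear strain rate, floored at a small rate so the value stays finite. The numbers and the regularization are illustrative assumptions, not the authors' exact formulation.

```python
def bingham_apparent_viscosity(shear_rate: float,
                               yield_stress: float = 1.0e3,       # Pa, e.g. residual strength
                               plastic_viscosity: float = 1.0e2,  # Pa*s
                               min_shear_rate: float = 1.0e-3) -> float:
    """Apparent viscosity of a Bingham fluid: mu_app = mu_p + tau_y / gamma_dot.

    The strain rate is floored at min_shear_rate, a common regularization of the
    ideal Bingham law that keeps the viscosity finite near zero strain rate.
    """
    gamma_dot = max(abs(shear_rate), min_shear_rate)
    return plastic_viscosity + yield_stress / gamma_dot

# The apparent viscosity drops as the shear strain rate grows (shear thinning).
for rate in (1e-3, 1e-2, 1e-1, 1.0):
    mu = bingham_apparent_viscosity(rate)
    print(f"shear rate {rate:6.3f} 1/s -> apparent viscosity {mu:10.1f} Pa*s")
```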

  10. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  11. Simplified expressions of the T-matrix integrals for electromagnetic scattering.

    Science.gov (United States)

    Somerville, Walter R C; Auguié, Baptiste; Le Ru, Eric C

    2011-09-01

    The extended boundary condition method, also called the null-field method, provides a semianalytic solution to the problem of electromagnetic scattering by a particle by constructing a transition matrix (T-matrix) that links the scattered field to the incident field. This approach requires the computation of specific integrals over the particle surface, which are typically evaluated numerically. We introduce here a new set of simplified expressions for these integrals in the commonly studied case of axisymmetric particles. Simplifications are obtained using the differentiation properties of the radial functions (spherical Bessel) and angular functions (associated Legendre functions) and integrations by parts. The resulting simplified expressions not only lead to faster computations, but also reduce the risks of loss of precision and provide a simpler framework for further analytical work.
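
    The simplifications rest on differentiation properties of the radial functions; as a quick numerical illustration (not the paper's derivation), the standard identity j_n'(x) = j_{n-1}(x) - ((n+1)/x) j_n(x) used in such manipulations can be checked directly.

```python
import numpy as np
from scipy.special import spherical_jn

n = 3
x = np.linspace(0.5, 20.0, 200)

lhs = spherical_jn(n, x, derivative=True)
rhs = spherical_jn(n - 1, x) - (n + 1) / x * spherical_jn(n, x)

# Agreement to machine precision confirms the differentiation identity.
print("max |lhs - rhs| =", np.max(np.abs(lhs - rhs)))
```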

  12. Input data instructions - simplified documentation of the computer program ANSYS. Report for 10 June 1976--31 March 1978

    International Nuclear Information System (INIS)

    Chang, P.Y.

    1978-02-01

    A simplified version of the input instructions for the computer program 'ANSYS' is presented for the non-linear elastoplastic analysis of a ship collision protection barrier structure. All essential information necessary for the grillage model is summarized, while the instructions for other types of problems are omitted. A benchmark example is given for checking the computer program.

  13. High-performance computational fluid dynamics: a custom-code approach

    International Nuclear Information System (INIS)

    Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain

    2016-01-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)

  14. High-performance computational fluid dynamics: a custom-code approach

    Science.gov (United States)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing.
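
    In the single-phase channel-flow setting described, the steady laminar limit reduces to u''(y) = (1/mu) dp/dx with no-slip walls, whose solution is the familiar parabolic Poiseuille profile. The following minimal serial sketch (plain finite differences, no MPI, and not the TPLS code itself) solves that balance and checks it against the analytic result.

```python
import numpy as np

# Illustrative parameters (not from TPLS): channel height, viscosity, pressure gradient.
H, mu, dpdx = 1.0, 1.0e-2, -1.0
N = 101                                    # grid points across the channel
y = np.linspace(0.0, H, N)
dy = y[1] - y[0]

# Assemble the 1-D Laplacian for u'' = dpdx/mu with no-slip walls u(0) = u(H) = 0.
A = np.zeros((N, N))
b = np.full(N, dpdx / mu * dy**2)
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0

u = np.linalg.solve(A, b)
u_exact = -dpdx / (2.0 * mu) * y * (H - y)   # analytic Poiseuille profile
print("max error vs analytic profile:", np.max(np.abs(u - u_exact)))
```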

  15. BrainSignals Revisited: Simplifying a Computational Model of Cerebral Physiology.

    Directory of Open Access Journals (Sweden)

    Matthew Caldwell

    Multimodal monitoring of brain state is important both for the investigation of healthy cerebral physiology and to inform clinical decision making in conditions of injury and disease. Near-infrared spectroscopy is an instrument modality that allows non-invasive measurement of several physiological variables of clinical interest, notably haemoglobin oxygenation and the redox state of the metabolic enzyme cytochrome c oxidase. Interpreting such measurements requires the integration of multiple signals from different sources to try to understand the physiological states giving rise to them. We have previously published several computational models to assist with such interpretation. Like many models in the realm of Systems Biology, these are complex and dependent on many parameters that can be difficult or impossible to measure precisely. Taking one such model, BrainSignals, as a starting point, we have developed several variant models in which specific regions of complexity are substituted with much simpler linear approximations. We demonstrate that model behaviour can be maintained whilst achieving a significant reduction in complexity, provided that the linearity assumptions hold. The simplified models have been tested for applicability with simulated data and experimental data from healthy adults undergoing a hypercapnia challenge, but relevance to different physiological and pathophysiological conditions will require specific testing. In conditions where the simplified models are applicable, their greater efficiency has potential to allow their use at the bedside to help interpret clinical data in near real-time.

  16. 20 CFR 404.241 - 1977 simplified old-start method.

    Science.gov (United States)

    2010-04-01

    Excerpt from 20 CFR 404.241 (Employees' Benefits; Federal Old-Age, Survivors and Disability Insurance (1950- ); Computing Primary Insurance Amounts, Old-Start Method of Computing Primary Insurance Amounts): § 404.241 1977 simplified old-start method. (a) Who is qualified. To qualify for the old...

  17. CRUSH1: a simplified computer program for impact analysis of radioactive material transport casks

    Energy Technology Data Exchange (ETDEWEB)

    Ikushima, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-07-01

    In drop impact analyses for radioactive material transport casks, it has become possible to perform detailed evaluations using interaction evaluation computer programs such as DYNA2D, DYNA3D, PISCES and HONDO. However, considerable cost and computer time are required to perform analyses with these programs. To meet the above requirements, a simplified computer program, CRUSH1, has been developed. CRUSH1 is a static calculation program capable of evaluating the maximum acceleration of cask bodies and the maximum deformation of shock absorbers using a Uniaxial Displacement Method (UDM). CRUSH1 is a revised version of CRUSH. The main revisions of the program are as follows: (1) in addition to mainframe computers, workstations (UNIX) and personal computers (Windows 3.1 or Windows NT) can be used to run CRUSH1; and (2) the input data set has been revised. The paper briefly illustrates the calculation method using UDM. The second section presents comparisons between UDM and the detailed method. The third section provides a user's guide for CRUSH1. (author)

  18. CRUSH1: a simplified computer program for impact analysis of radioactive material transport casks

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1996-07-01

    In drop impact analyses for radioactive material transport casks, it has become possible to perform detailed evaluations using interaction evaluation computer programs such as DYNA2D, DYNA3D, PISCES and HONDO. However, considerable cost and computer time are required to perform analyses with these programs. To meet the above requirements, a simplified computer program, CRUSH1, has been developed. CRUSH1 is a static calculation program capable of evaluating the maximum acceleration of cask bodies and the maximum deformation of shock absorbers using a Uniaxial Displacement Method (UDM). CRUSH1 is a revised version of CRUSH. The main revisions of the program are as follows: (1) in addition to mainframe computers, workstations (UNIX) and personal computers (Windows 3.1 or Windows NT) can be used to run CRUSH1; and (2) the input data set has been revised. The paper briefly illustrates the calculation method using UDM. The second section presents comparisons between UDM and the detailed method. The third section provides a user's guide for CRUSH1. (author)

  19. A simplified approach for the coupling of excitation energy transfer

    Energy Technology Data Exchange (ETDEWEB)

    Shi Bo [Hefei National Laboratory for Physical Science at Microscale, University of Science and Technology of China, Hefei 230026 (China); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Gao Fang, E-mail: gaofang@iim.ac.cn [Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031 (China); State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); Liang Wanzhen [Hefei National Laboratory for Physical Science at Microscale, University of Science and Technology of China, Hefei 230026 (China); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China)

    2012-02-06

    Highlights: • We propose a simple method to calculate the coupling of singlet-to-singlet and triplet-to-triplet energy transfer. • The Coulomb term is the major contribution to the coupling of singlet-to-singlet energy transfer. • The effect of the intermolecular charge-transfer states dominates in triplet-to-triplet energy transfer. • The method can be extended by including correlated wavefunctions. - Abstract: A simplified approach for computing the electronic coupling of nonradiative excitation-energy transfer is proposed by following Scholes et al.'s construction of the initial and final states [G.D. Scholes, R.D. Harcourt, K.P. Ghiggino, J. Chem. Phys. 102 (1995) 9574]. The simplification is realized by defining a set of orthogonalized localized MOs, which include the polarization effect of the charge densities. The method allows calculating the coupling of both singlet-to-singlet and triplet-to-triplet energy transfer. Numerical tests are performed for a few dimers with different intermolecular orientations, and the results demonstrate that the Coulomb term is the major contribution to the coupling of singlet-to-singlet energy transfer, whereas in the case of triplet-to-triplet energy transfer the dominant effect arises from the intermolecular charge-transfer states. The present application is at the Hartree-Fock level. However, correlated wavefunctions, which are normally expanded in terms of determinant wavefunctions, can be employed in a similar way.

  20. Simplified Analytic Approach of Pole-to-Pole Faults in MMC-HVDC for AC System Backup Protection Setting Calculation

    Directory of Open Access Journals (Sweden)

    Tongkun Lan

    2018-01-01

    AC (alternating current) system backup protection setting calculation is an important basis for ensuring the safe operation of power grids. With the increasing integration of modular multilevel converter based high voltage direct current (MMC-HVDC) into power grids, AC system backup protection setting calculation has become a major challenge, as MMC-HVDC lacks fault self-clearance capability under pole-to-pole faults. This paper focuses on pole-to-pole fault analysis for AC system backup protection setting calculation. The principles of pole-to-pole fault analysis are first discussed according to the standard for AC system protection setting calculation. Then, the influence of fault resistance on the fault process is investigated. A simplified analytic approach to pole-to-pole faults in MMC-HVDC for AC system backup protection setting calculation is proposed. In the proposed approach, the derived expressions of the fundamental frequency current are applicable under arbitrary fault resistance. The accuracy of the proposed approach is demonstrated by PSCAD/EMTDC (Power Systems Computer-Aided Design/Electromagnetic Transients including DC) simulations.

  1. Simplified Data Envelopment Analysis: What Country Won the Olympics, and How about our CO2 Emissions?

    Directory of Open Access Journals (Sweden)

    Alexander Vaninsky

    2013-07-01

    This paper introduces a simplified version of Data Envelopment Analysis (DEA) - a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. Examples of DEA applications discussed in this paper include the London 2012 Olympic Games and the dynamics of the United States' environmental performance. In the first example, we find a team winner and rank the teams; in the second, we analyze the dynamics of CO2 emissions adjusted to gross domestic product, population, and energy consumption. Adding a virtual Perfect Object - one having the greatest outputs and smallest inputs - we greatly simplify the DEA computational procedure by eliminating the Linear Programming algorithm. The simplicity of the computations makes the suggested approach attractive for educational purposes, in particular for use in Quantitative Reasoning courses.
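
    As a rough illustration of the Perfect Object idea, the hypothetical scoring rule below rates each object directly against a virtual object with the largest outputs and smallest inputs, with no linear programming involved; it is written in the spirit of the paper and is not necessarily the author's exact formula.

```python
import numpy as np

# Hypothetical data: rows are objects, columns are factors.
inputs = np.array([[10.0, 4.0],     # e.g. team size, funding
                   [ 8.0, 6.0],
                   [12.0, 3.0]])
outputs = np.array([[20.0, 5.0],    # e.g. medals, records
                    [18.0, 7.0],
                    [15.0, 9.0]])

# Virtual Perfect Object: smallest inputs and largest outputs over all objects.
perfect_in = inputs.min(axis=0)
perfect_out = outputs.max(axis=0)

# Equal-weight ratio score: mean normalized output over mean normalized input.
out_score = (outputs / perfect_out).mean(axis=1)
in_score = (inputs / perfect_in).mean(axis=1)
efficiency = out_score / in_score

for i, e in enumerate(efficiency):
    print(f"object {i}: efficiency {e:.3f}")
print("ranking (best first):", np.argsort(-efficiency))
```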

  2. A Computer Program for Simplifying Incompletely Specified Sequential Machines Using the Paull and Unger Technique

    Science.gov (United States)

    Ebersole, M. M.; Lecoq, P. E.

    1968-01-01

    This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process together with a description of the computer algorithms used to develop the program mechanization are presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and a program user operating procedures are included as appendixes.
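
    To give a flavor of the underlying computation, the sketch below implements only the compatibility-pair step that the Paull and Unger technique starts from: two states of an incompletely specified machine are compatible if their specified outputs never conflict and every implied next-state pair is itself compatible, found here by iterative elimination. The machine is a made-up example, and the original program's data formats and later merging steps are not reproduced.

```python
from itertools import combinations

# Incompletely specified machine: state -> {input: (next_state, output)};
# None marks an unspecified next state or output.
machine = {
    "A": {"0": ("B", 0), "1": (None, None)},
    "B": {"0": ("C", 0), "1": ("A", 1)},
    "C": {"0": (None, 0), "1": ("A", 1)},
}
inputs = ["0", "1"]

def output_conflict(s, t):
    """True if states s and t produce different specified outputs for some input."""
    for x in inputs:
        o1, o2 = machine[s][x][1], machine[t][x][1]
        if o1 is not None and o2 is not None and o1 != o2:
            return True
    return False

# Start with all pairs free of output conflicts, then repeatedly drop pairs whose
# implied next-state pair has already been ruled incompatible.
compatible = {frozenset(p) for p in combinations(machine, 2) if not output_conflict(*p)}
changed = True
while changed:
    changed = False
    for pair in list(compatible):
        s, t = tuple(pair)
        for x in inputs:
            ns1, ns2 = machine[s][x][0], machine[t][x][0]
            if ns1 and ns2 and ns1 != ns2 and frozenset((ns1, ns2)) not in compatible:
                compatible.discard(pair)
                changed = True
                break

print("compatible state pairs:", sorted(tuple(sorted(p)) for p in compatible))
```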

  3. Simplifying the Development, Use and Sustainability of HPC Software

    Directory of Open Access Journals (Sweden)

    Jeremy Cohen

    2014-07-01

    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

  4. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Local line rolling forming is a common forming approach for the complex-curvature plates of ships. However, a processing mode based on operator experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method is crucial for developing an automated local line rolling forming system for producing the complex-curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method based on the deformation obtained by applying the strains was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored for the case of multiple rolling loading paths, and it was also utilized to calculate local line rolling forming for a typical complex-curvature plate of ships. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, processing path, and process parameters.

  5. Simplified design of filter circuits

    CERN Document Server

    Lenk, John

    1999-01-01

    Simplified Design of Filter Circuits, the eighth book in this popular series, is a step-by-step guide to designing filters using off-the-shelf ICs. The book starts with the basic operating principles of filters and common applications, then moves on to describe how to design circuits by using and modifying chips available on the market today. Lenk's emphasis is on practical, simplified approaches to solving design problems. Features: practical designs using off-the-shelf ICs; a straightforward, no-nonsense approach; highly illustrated with manufacturer's data sheets.

  6. Simplified predictive models for CO2 sequestration performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [Battelle Memorial Inst., Columbus, OH (United States); Ganesh, Priya [Battelle Memorial Inst., Columbus, OH (United States); Schuetter, Jared [Battelle Memorial Inst., Columbus, OH (United States); He, Jincong [Battelle Memorial Inst., Columbus, OH (United States); Jin, Zhaoyang [Battelle Memorial Inst., Columbus, OH (United States); Durlofsky, Louis J. [Battelle Memorial Inst., Columbus, OH (United States)

    2015-09-30

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning based modeling, and (3) reduced-order method based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin
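
    The statistical-learning route mentioned above typically ends in a quadratic response surface fitted to designed simulation runs. The sketch below fits such a surface to a made-up two-parameter example by least squares; the real proxies are built from Box-Behnken designs over many reservoir parameters, which is not reproduced here.

```python
import numpy as np

def quadratic_features(x1, x2):
    """Design matrix for a full quadratic response surface in two variables."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Made-up "simulation" outputs, e.g. storage efficiency vs. two scaled parameters.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 30)          # e.g. scaled permeability variance
x2 = rng.uniform(-1.0, 1.0, 30)          # e.g. scaled gravity number
y = 0.3 + 0.1 * x1 - 0.2 * x2 + 0.05 * x1 * x2 + rng.normal(0.0, 0.01, 30)

coeffs, *_ = np.linalg.lstsq(quadratic_features(x1, x2), y, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))

# The fitted proxy can then be evaluated cheaply anywhere in the design space.
print("proxy at (0.5, -0.5):", quadratic_features(np.array([0.5]), np.array([-0.5])) @ coeffs)
```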

  7. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    Science.gov (United States)

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal-space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between the simplified and corresponding comprehensive scores in patients with idiopathic pulmonary fibrosis. The 16-intercostal-space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.

  8. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    Energy Technology Data Exchange (ETDEWEB)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico]. E-mails: fsrlessa@gmail.com; gmplatt@iprj.uerj.br; halves@iprj.uerj.br

    2007-07-01

    At the current level of technology, many problems are investigated through computational simulations, whose results are in general satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are carried out using computational tools, such as high-capacity computers and application systems for complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and space that computational modelling provides, there is also a financial economy for the scientists. Computational modelling is a modern investigation methodology that requires the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of those phenomena, the generation of a numerical algorithmic system comprehensible to the computer, and finally the analysis of the obtained solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we intended to use several computational tools, the implementation of numerical methods, and a deterministic model in the study and analysis of a well-known and simplified problem of nuclear engineering (neutron transport), simulating a theoretical neutron shielding problem with hypothetical physical-material parameters and computing the neutron flux in each spatial node, programmed with Scilab version 4.0. (author)

  9. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    International Nuclear Information System (INIS)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes

    2007-01-01

    At the current level of technology, many problems are investigated through computational simulations, whose results are generally satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies rely on computational tools, such as high-capacity computers and the application software needed for complex calculations, algorithmic iterations, etc. Besides the considerable savings in time and space that computational modelling provides, there is also a financial saving for the scientists. Computational modelling is a modern methodology of investigation that calls for the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of those phenomena, the generation of a numerical algorithm comprehensible to the computer, and finally the analysis of the obtained solution, possibly making use of pre-existing systems that facilitate the visualization of the results (Cartesian graph editors, for instance). In this work, the intention was to apply several computational tools, implemented numerical methods and a deterministic model to the study and analysis of a well-known, simplified problem of nuclear engineering (neutron transport), simulating a theoretical neutron shielding problem with hypothetical physical and material parameters and computing the neutron flux at each spatial node, programmed in Scilab version 4.0. (author)

  10. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C(-1) on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
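    As a minimal sketch of the decomposition element of such an approach (not the PInc-PanTher calibration itself), the following fragment applies a three-pool, temperature-dependent decomposition to thawed carbon, with frozen carbon assumed inert; the pool sizes, rate constants, Q10 and temperature trajectory are illustrative assumptions.

```python
import numpy as np

# Illustrative three-pool decomposition after thaw (placeholder parameters,
# not the PInc-PanTher calibration).
pools = np.array([2.0, 30.0, 120.0])      # Pg C in fast, slow, passive pools
k_ref = np.array([0.5, 0.05, 0.001])      # 1/yr at the reference temperature
q10, t_ref = 2.5, 5.0                     # Q10 and reference temperature, deg C

years = np.arange(2010, 2101)
# Hypothetical thawed-soil temperature trajectory (deg C).
soil_t = np.linspace(-0.5, 4.0, years.size)

released = 0.0
for t in soil_t:
    if t <= 0.0:            # frozen C is assumed not to decompose at all
        continue
    k = k_ref * q10 ** ((t - t_ref) / 10.0)
    loss = pools * (1.0 - np.exp(-k))     # first-order loss over one year
    pools -= loss
    released += loss.sum()

print(f"cumulative C released by 2100: {released:.1f} Pg C")
```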

  11. TH-AB-201-10: Portal Dosimetry with Elekta IViewDose:Performance of the Simplified Commissioning Approach Versus Full Commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Kydonieos, M; Folgueras, A; Florescu, L; Cybulski, T; Marinos, N; Thompson, G; Sayeed, A [Elekta Limited, Crawley, West Sussex (United Kingdom); Rozendaal, R; Olaciregui-Ruiz, I [Netherlands Cancer Institute - Antoni van Leeuwenhoek, Amsterdam, Noord-Holland (Netherlands); Subiel, A; Patallo, I Silvestre [National Physical Laboratory, London (United Kingdom)

    2016-06-15

    Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). This uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using a TCM commissioning approach with that corresponding to full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A Farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan. Conclusion: iViewDose produces similar results with the simplified and full commissioning

  12. Simplified Stability Criteria for Delayed Neutral Systems

    Directory of Open Access Journals (Sweden)

    Xinghua Zhang

    2014-01-01

    Full Text Available For a class of linear time-invariant neutral systems with neutral and discrete constant delays, several existing asymptotic stability criteria in the form of linear matrix inequalities (LMIs are simplified by using matrix analysis techniques. Compared with the original stability criteria, the simplified ones include fewer LMI variables, which can obviously reduce computational complexity. Simultaneously, it is theoretically shown that the simplified stability criteria and original ones are equivalent; that is, they have the same conservativeness. Finally, a numerical example is employed to verify the theoretic results investigated in this paper.

  13. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and to confront the theoretical predictions with the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  14. SIMPLIFIED MATHEMATICAL MODEL OF SMALL SIZED UNMANNED AIRCRAFT VEHICLE LAYOUT

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available Strong reduction of the design period for new aircraft using new technologies based on artificial intelligence is a key problem mentioned in forecasts of leading aerospace industry research centers. This article covers an approach to the development of quick aerodynamic design methods based on artificial neural networks. The problem is solved for the classical scheme of a small sized unmanned aircraft vehicle (UAV). The principal parts of the method are the mathematical model of the layout, a layout generator for this type of aircraft built on artificial neural networks, an automatic selection module for cleaning the variety of layouts generated in automatic mode, a robust direct computational fluid dynamics method, and aerodynamic characteristics approximators based on artificial neural networks. Methods based on artificial neural networks occupy an intermediate position between computational fluid dynamics methods or experiments and simplified engineering approaches. The use of ANN for estimating aerodynamic characteristics puts limitations on the input data. For this task the layout must be presented as a vector with dimension not exceeding several hundred. The vector components must include all the main parameters conventionally used to describe layouts and must completely capture the most important aerodynamic and structural properties. The first stage of the work is presented in the paper. A simplified mathematical model of a small sized UAV was developed. To estimate the range of geometrical parameters of the layouts, a review of existing vehicles was carried out. The result of the work is the algorithm and computer software for generating layouts based on ANN technology. 10000 samples were generated and a dataset containing the geometrical and aerodynamic characteristics of the layouts was created.

  15. Simplified approach to dynamic process modelling. Annex 4

    International Nuclear Information System (INIS)

    Danilytchev, A.; Elistratov, D.; Stogov, V.

    2010-01-01

    This document presents the OKBM contribution to the analysis of a benchmark of the BN-600 reactor hybrid core with simultaneous loading of uranium fuel and MOX, within the framework of the international IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. In accordance with Action 12 defined during the second RCM, a simplified transient analysis was carried out on the basis of the reactivity coefficient sets presented by all CRP participants. The purpose of the present comparison is to evaluate the spread in the basic transient parameters arising from the spread in the reactivity coefficients used. The initial stage of a ULOF accident was calculated on the simplified model using the SAS4A code.

  16. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    Science.gov (United States)

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The

  17. The large break LOCA evaluation method with the simplified statistic approach

    International Nuclear Information System (INIS)

    Kamata, Shinya; Kubo, Kazuo

    2004-01-01

    In 1989 the USNRC published the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology for large break LOCA, which supported the revised rule for Emergency Core Cooling System performance. USNRC Regulatory Guide 1.157 requires that the peak cladding temperature (PCT) not exceed 2200°F with high probability (95th percentile). In recent years, overseas organizations have developed statistical methodologies and best estimate codes, based on the CSAU evaluation methodology, with models that provide a more realistic simulation of the phenomena. In order to calculate the PCT probability distribution by Monte Carlo trials, there are approaches such as the response surface technique using polynomials, the order statistics method, etc. For the purpose of performing a rational statistical analysis, Mitsubishi Heavy Industries, Ltd. (MHI) developed a statistical LOCA method using the best estimate LOCA code MCOBRA/TRAC and the simplified code HOTSPOT. HOTSPOT is a Monte Carlo heat conduction solver used to evaluate the uncertainties of the significant fuel parameters at the PCT positions of the hot rod. Direct uncertainty sensitivity studies can be performed without a response surface because the Monte Carlo simulation for the key parameters can be carried out in a short time using HOTSPOT. With regard to the parameter uncertainties, MHI established the treatment in which bounding conditions are given for the LOCA boundary and plant initial conditions, while the Monte Carlo simulation using HOTSPOT is applied to the significant fuel parameters. The paper describes the large break LOCA evaluation method with the simplified statistical approach and the results of applying the method to a representative four-loop nuclear power plant. (author)
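    As an illustration of the statistical step only (not of MCOBRA/TRAC or HOTSPOT themselves), the sketch below samples a few hypothetical fuel-parameter uncertainties, propagates them through a placeholder PCT response, and reads off the 95th percentile that would be compared against the 2200°F acceptance limit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Hypothetical uncertain fuel parameters (distributions are illustrative,
# not the MHI HOTSPOT models).
gap_conductance = rng.normal(1.0, 0.1, n_trials)      # relative to nominal
fuel_conductivity = rng.normal(1.0, 0.05, n_trials)
peaking_factor = rng.triangular(0.95, 1.0, 1.08, n_trials)

# Placeholder response: nominal PCT perturbed by the sampled parameters.
pct_nominal = 1800.0   # deg F
pct = pct_nominal * peaking_factor / (gap_conductance * fuel_conductivity) ** 0.2

pct95 = np.percentile(pct, 95.0)
print(f"95th-percentile PCT estimate: {pct95:.0f} deg F "
      f"(acceptance limit 2200 deg F)")
```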

  18. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    Science.gov (United States)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
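    A minimal sketch of the sum-and-difference-histogram idea is given below for a single pixel displacement: the sum and difference images replace the full co-occurrence matrix, and a few scalar texture features are read from their normalized histograms. The feature set and test patch are illustrative, not those of the paper.

```python
import numpy as np

def sadh_features(img, dx=1, dy=0, levels=256):
    """Sum-and-difference-histogram texture features for one displacement.
    Illustrative sketch only; not the full feature set used in the paper."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].astype(int)
    b = img[dy:, dx:].astype(int)
    s = (a + b).ravel()                      # sums in [0, 2*(levels-1)]
    d = (a - b).ravel() + (levels - 1)       # shifted differences, >= 0

    ps = np.bincount(s, minlength=2 * levels - 1).astype(float)
    pd = np.bincount(d, minlength=2 * levels - 1).astype(float)
    ps /= ps.sum()
    pd /= pd.sum()

    i = np.arange(2 * levels - 1)
    mean = 0.5 * np.sum(i * ps)
    contrast = np.sum((i - (levels - 1)) ** 2 * pd)
    entropy = -(np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
                + np.sum(pd[pd > 0] * np.log(pd[pd > 0])))
    return mean, contrast, entropy

patch = np.random.default_rng(1).integers(0, 256, size=(64, 64))
print(sadh_features(patch))
```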

  19. A simplified approach for slope stability analysis of uncontrolled waste dumps.

    Science.gov (United States)

    Turer, Dilek; Turer, Ahmet

    2011-02-01

    Slope stability analysis of municipal solid waste has always been problematic because of the heterogeneous nature of the waste materials. The requirement for large testing equipment in order to obtain representative samples has identified the need for simplified approaches to obtain the unit weight and shear strength parameters of the waste. In the present study, two of the most recently published approaches for determining the unit weight and shear strength parameters of the waste have been incorporated into a slope stability analysis using the Bishop method to prepare slope stability charts. The slope stability charts were prepared for uncontrolled waste dumps having no liner and leachate collection systems with pore pressure ratios of 0, 0.1, 0.2, 0.3, 0.4 and 0.5, considering the most critical slip surface passing through the toe of the slope. As the proposed slope stability charts were prepared by considering the change in unit weight as a function of height, they reflect field conditions better than accepting a constant unit weight approach in the stability analysis. They also streamline the selection of slope or height as a function of the desired factor of safety.
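    The charts are built on the Bishop method, which solves for the factor of safety by fixed-point iteration over the slices of a trial slip circle. A minimal sketch of that iteration follows; the slice geometry, waste strength parameters and pore pressure ratio are hypothetical, and the height-dependent unit weight used for the charts is not reproduced.

```python
import numpy as np

def bishop_fs(weights, widths, alphas, c, phi_deg, ru=0.0, tol=1e-6):
    """Factor of safety by Bishop's simplified method (fixed-point iteration).

    weights : slice weights W (kN/m); widths : slice widths b (m);
    alphas  : base inclinations (rad); c, phi_deg : strength parameters;
    ru      : pore pressure ratio u*b/W.
    """
    tan_phi = np.tan(np.radians(phi_deg))
    fs = 1.0
    for _ in range(100):
        m_alpha = np.cos(alphas) * (1.0 + np.tan(alphas) * tan_phi / fs)
        resisting = np.sum((c * widths + weights * (1.0 - ru) * tan_phi) / m_alpha)
        driving = np.sum(weights * np.sin(alphas))
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Hypothetical 8-slice slip circle passing through the toe of a waste dump.
alphas = np.radians(np.linspace(-5, 45, 8))
weights = np.array([120, 260, 380, 450, 460, 400, 280, 130.0])  # kN/m
widths = np.full(8, 2.5)                                        # m
print(f"FS = {bishop_fs(weights, widths, alphas, c=15.0, phi_deg=25.0, ru=0.3):.2f}")
```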

  20. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    Science.gov (United States)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices and harvested N output, as well as the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered a giant lysimeter system, and the N concentration at drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% of the export occurs during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. The PWNP results from the part of the NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used the PWNP as simplified input data for the modelling of N transport; NO3 losses are then mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to model water and N losses. The hydrological simulation was calibrated with observation data at different sub-catchments. We performed a hydrograph separation validated against thermal and isotopic tracer studies and general knowledge of the behaviour of the Orgeval catchment. Our results show a good agreement between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of the calibrated PWNP values with the results from a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that
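    The reported model fit is expressed as Nash-Sutcliffe efficiencies (0.75 for discharge, 0.7 for N flux). For reference, the sketch below shows how that coefficient is computed from paired observed and simulated series; the numbers are made up, not Orgeval data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Illustrative daily discharge series (not Orgeval data).
obs = np.array([1.2, 1.5, 2.8, 4.1, 3.3, 2.0, 1.6, 1.4])
sim = np.array([1.1, 1.7, 2.5, 4.4, 3.0, 2.2, 1.5, 1.3])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```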

  1. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    Science.gov (United States)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  2. A simplified approach to the PROMETHEE method for priority setting in management of mine action projects

    Directory of Open Access Journals (Sweden)

    Marko Mladineo

    2016-12-01

    Full Text Available In the last 20 years, priority setting in mine action, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the aspect of mine action, where the stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, or similar, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended with a module for developing custom priority setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify the multi-criteria analysis based on the PROMETHEE method. Therefore, a simplified PROMETHEE method based on statistical analysis for automated suggestions of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations is presented in this paper. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported using modules for providing spatial data based on a geographic information system (GIS). In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and subsequently, certain proposals are given for further research.
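    The record does not give the exact statistical rules used to suggest the preference-function parameters. As a hedged illustration of the underlying PROMETHEE II calculation, the sketch below computes net outranking flows with a linear preference function whose thresholds are set from the spread of each criterion; the projects, criteria and weights are invented.

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """Net outranking flows with a linear preference function.

    Thresholds are derived from the spread of each criterion (a crude
    stand-in for the statistically suggested thresholds in the paper):
    indifference q = 10% and preference p = 60% of the criterion range.
    """
    n_alt, n_crit = scores.shape
    phi = np.zeros(n_alt)
    for j in range(n_crit):
        col = scores[:, j] if maximize[j] else -scores[:, j]
        rng_j = col.max() - col.min()
        q, p = 0.1 * rng_j, 0.6 * rng_j
        d = col[:, None] - col[None, :]              # pairwise differences
        pref = np.clip((d - q) / (p - q), 0.0, 1.0)  # linear preference function
        phi += weights[j] * (pref.sum(axis=1) - pref.sum(axis=0)) / (n_alt - 1)
    return phi

# Hypothetical mine-action projects scored on 3 criteria.
scores = np.array([[0.8, 120, 3],     # hazard impact, cost, accessibility
                   [0.6,  80, 5],
                   [0.9, 200, 2],
                   [0.5,  60, 4.0]])
weights = np.array([0.5, 0.3, 0.2])
maximize = [True, False, True]
print(np.argsort(-promethee_ii(scores, weights, maximize)))  # priority order
```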

  3. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
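    In the spirit of the simplified model (replacing Monte Carlo sampling of contact angles by direct integration over their Gaussian distribution), the sketch below estimates the frozen fraction of a droplet population; the per-site nucleation probability is a toy stand-in for the classical-nucleation-theory rate actually used in the SBM, and all parameters are invented.

```python
import numpy as np

def frozen_fraction(temp_c, mu_deg=80.0, sigma_deg=10.0, n_sites=100):
    """Frozen droplet fraction in the spirit of the simplified Soccer ball
    model: contact angles are Gaussian-distributed over nucleation sites and
    the original Monte Carlo sampling is replaced by a direct quadrature over
    the angle distribution.  The per-site freezing probability below is a toy
    stand-in, not the classical-nucleation-theory rate of the paper."""
    theta_deg = np.linspace(1.0, 179.0, 400)
    pdf = np.exp(-0.5 * ((theta_deg - mu_deg) / sigma_deg) ** 2)
    pdf /= np.trapz(pdf, theta_deg)

    theta = np.radians(theta_deg)
    f = (2 + np.cos(theta)) * (1 - np.cos(theta)) ** 2 / 4.0   # CNT geometric factor
    # Toy per-site freezing probability: grows with supercooling, shrinks with f.
    p_frz = 1.0 - np.exp(-np.exp(-(temp_c + 30.0) - 10.0 * (f - 0.5)))

    # Average ice-free probability of one site, then combine the n_sites sites.
    p_unfrozen_site = np.trapz(pdf * (1.0 - p_frz), theta_deg)
    return 1.0 - p_unfrozen_site ** n_sites

for t in (-20.0, -24.0, -28.0):
    print(f"T = {t} C  ->  frozen fraction {frozen_fraction(t):.3f}")
```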

  4. A simplified model for calculating early offsite consequences from nuclear reactor accidents

    International Nuclear Information System (INIS)

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1988-07-01

    A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs
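    The record does not detail the analytic integrals used by SMART. As a generic illustration of a time-integrated air-concentration calculation of the same flavour, the sketch below evaluates a ground-level, centreline Gaussian-plume expression for a finite-duration release; the dispersion coefficients, release height and source term are placeholders.

```python
import numpy as np

def time_integrated_conc(x, q_rate, duration, u, h_release=30.0):
    """Ground-level, centreline time-integrated air concentration (Bq*s/m^3)
    for a finite-duration Gaussian-plume release with ground reflection.

    A generic textbook plume, not the SMART model itself; the power-law
    dispersion coefficients below are placeholders for one stability class.
    """
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    chi_per_q = np.exp(-h_release**2 / (2 * sigma_z**2)) / (np.pi * sigma_y * sigma_z * u)
    return q_rate * duration * chi_per_q

x = np.array([500.0, 1000.0, 2000.0, 5000.0])   # downwind distance, m
print(time_integrated_conc(x, q_rate=1e12, duration=3600.0, u=4.0))
```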

  5. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    Science.gov (United States)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
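    The statistical-learning branch (SLM) can be illustrated with a small surrogate-modelling sketch: a Latin hypercube design over two input parameters, a toy "simulator" response, and a Gaussian-process (Kriging-type) proxy fitted to the samples. The toy response and the plain RBF kernel are assumptions; the study's quadratic-trend Kriging and Box-Behnken designs are not reproduced.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy "simulator": dimensionless injectivity as a function of permeability
# variance and fractional-flow slope (purely illustrative, not the study's
# compositional simulations).
def simulator(x):
    perm_var, ff_slope = x[:, 0], x[:, 1]
    return 1.0 / (1.0 + 2.0 * perm_var) + 0.3 * np.exp(-ff_slope)

# Latin hypercube design over the normalized 2-D parameter space.
sampler = qmc.LatinHypercube(d=2, seed=42)
x_train = sampler.random(n=30)
y_train = simulator(x_train)

# Kriging-type proxy (here a plain GP with an RBF kernel rather than the
# quadratic-trend model mentioned in the abstract).
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                               normalize_y=True)
gpr.fit(x_train, y_train)

x_test = sampler.random(n=5)
y_pred, y_std = gpr.predict(x_test, return_std=True)
print(np.c_[y_pred, simulator(x_test), y_std])   # proxy vs. "simulator" vs. std
```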

  6. Prediction of the heat gain of external walls: An innovative approach for full-featured excitations based on the simplified method of Mackey-and-Wright

    International Nuclear Information System (INIS)

    Ruivo, C.R.; Vaz, D.C.

    2015-01-01

    Highlights: • The transient thermal behaviour of external multilayer walls of buildings is studied. • Reference results for four representative walls, obtained with a numerical model, are provided. • Shortcomings of approaches based on the Mackey-and-Wright method are identified. • Handling full-featured excitations with Fourier series decomposition improves accuracy. • A simpler, yet accurate, promising novel approach to predict heat gain is proposed. - Abstract: Nowadays, simulation tools are available for calculating the thermal loads of multiple rooms of buildings, for given inputs. However, due to inaccuracies or uncertainties in some of the input data (e.g., thermal properties, air infiltration flow rates, building occupancy), the evaluated thermal load may represent no more than just an estimate of the actual thermal load of the spaces. Accordingly, in certain practical situations, simplified methods may offer a more reasonable trade-off between effort and accuracy of results than advanced software. Hence, despite the advances in computing power over the last decades, simplified methods for the evaluation of thermal loads are still of great interest nowadays, for both the practicing engineer and the graduating student, since they can be readily implemented or developed in common computational tools, like a spreadsheet. The method of Mackey and Wright (M&W) is a simplified method that, from the values of the decrement factor and time lag of a wall (or roof), estimates the instantaneous rate of heat transfer through its indoor surface. It assumes cyclic behaviour and shows good accuracy when the excitation and response have matching shapes, but it involves a non-negligible error otherwise, for example, in the case of walls of high thermal inertia. The aim of this study is to develop a simplified procedure that considerably improves the accuracy of the M&W method, particularly for excitations that noticeably depart from the sinusoidal shape, while not
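    For orientation, the M&W relation that the paper starts from can be sketched as the steady mean heat gain plus the damped and delayed swing of the sol-air temperature. The wall properties and the sinusoidal excitation below are placeholders; handling excitations that depart from this sinusoidal shape is precisely what the proposed procedure addresses.

```python
import numpy as np

def mackey_wright_heat_gain(t_solair, u_value, decrement, lag_hours, t_indoor=24.0):
    """Hourly heat gain per unit wall area (W/m^2) from the Mackey-and-Wright
    relation: steady mean component plus the damped, delayed swing of the
    sol-air temperature.  Wall data below are placeholders."""
    t_mean = t_solair.mean()
    swing = np.roll(t_solair, int(round(lag_hours))) - t_mean   # delayed swing
    return u_value * ((t_mean - t_indoor) + decrement * swing)

hours = np.arange(24)
# Illustrative sinusoidal sol-air temperature (deg C); a full-featured,
# non-sinusoidal excitation is exactly where the paper extends the method.
t_solair = 30.0 + 12.0 * np.sin(2 * np.pi * (hours - 9) / 24.0)

q = mackey_wright_heat_gain(t_solair, u_value=1.5, decrement=0.4, lag_hours=6)
print(f"peak heat gain: {q.max():.1f} W/m^2 at hour {int(q.argmax())}")
```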

  7. The Trapping Index: How to integrate the Eulerian and the Lagrangian approach for the computation of the transport time scales of semi-enclosed basins.

    Science.gov (United States)

    Cucco, Andrea; Umgiesser, Georg

    2015-09-15

    In this work, we investigated if the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed for both simplified test cases and a realistic domain: the Venice Lagoon. The results confirmed the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both the TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful to avoid any misleading interpretation due to the evaluation of the basin renewal features just from an Eulerian only or from a Lagrangian only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Simplified methods to assess thermal fatigue due to turbulent mixing

    International Nuclear Information System (INIS)

    Hannink, M.H.C.; Timperi, A.

    2011-01-01

    Thermal fatigue is a safety relevant damage mechanism in pipework of nuclear power plants. A well-known simplified method for the assessment of thermal fatigue due to turbulent mixing is the so-called sinusoidal method. Temperature fluctuations in the fluid are described by a sinusoidally varying signal at the inner wall of the pipe. Because of limited information on the thermal loading conditions, this approach generally leads to overconservative results. In this paper, a new assessment method is presented, which has the potential of reducing the overconservatism of existing procedures. Artificial fluid temperature signals are generated by superposition of harmonic components with different amplitudes and frequencies. The amplitude-frequency spectrum of the components is modelled by a formula obtained from turbulence theory, whereas the phase differences are assumed to be randomly distributed. Lifetime predictions generated with the new simplified method are compared with lifetime predictions based on real fluid temperature signals, measured in an experimental setup of a mixing tee. Also, preliminary steady-state Computational Fluid Dynamics (CFD) calculations of the total power of the fluctuations are presented. The total power is needed as an input parameter for the spectrum formula in a real-life application. Solution of the transport equation for the total power was included in a CFD code and comparisons with experiments were made. The newly developed simplified method for generating the temperature signal is shown to be adequate for the investigated geometry and flow conditions, and demonstrates possibilities of reducing the conservatism of the sinusoidal method. CFD calculations of the total power show promising results, but further work is needed to develop the approach. (author)
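    A minimal sketch of the signal-generation step is given below: harmonics with random phases are superposed, with amplitudes drawn from a frequency spectrum normalized to a prescribed total power. The spectral shape and all parameters are placeholders for the turbulence-theory formula referred to in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def synthetic_mixing_signal(duration=600.0, dt=0.05, dT=80.0, total_power=25.0):
    """Artificial fluid temperature signal (deg C) at the mixing-tee wall:
    a superposition of harmonics with random phases whose amplitude spectrum
    decays with frequency.  The -5/3-type decay and the parameters are
    placeholders for the spectrum formula referred to in the abstract."""
    t = np.arange(0.0, duration, dt)
    freqs = np.arange(0.01, 5.0, 0.01)                 # Hz
    spectrum = freqs ** (-5.0 / 3.0)                   # relative power density
    spectrum *= total_power / np.trapz(spectrum, freqs)
    amps = np.sqrt(2.0 * spectrum * np.gradient(freqs))
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

    signal = np.zeros_like(t)
    for a, f, p in zip(amps, freqs, phases):
        signal += a * np.sin(2.0 * np.pi * f * t + p)
    return t, 0.5 * dT + signal            # fluctuations about the mean mixing temperature

t, temp = synthetic_mixing_signal()
print(f"std of temperature fluctuations: {temp.std():.2f} K")
```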

  9. On a nonlinear Kalman filter with simplified divided difference approximation

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim; Moroz, Irene M.

    2012-01-01

    We present a new ensemble-based approach that handles nonlinearity based on a simplified divided difference approximation through Stirling's interpolation formula, which is hence called the simplified divided difference filter (sDDF). The sDDF uses Stirling's interpolation formula to evaluate the statistics of the background ensemble during the prediction step, while at the filtering step the sDDF employs the formulae in an ensemble square root filter (EnSRF) to update the background to the analysis. In this sense, the sDDF is a hybrid of Stirling's interpolation formula and the EnSRF method, while the computational cost of the sDDF is less than that of the EnSRF. Numerical comparison between the sDDF and the EnSRF, with the ensemble transform Kalman filter (ETKF) as the representative, is conducted. The experiment results suggest that the sDDF outperforms the ETKF with a relatively large ensemble size, and thus is a good candidate for data assimilation in systems with moderate dimensions. © 2011 Elsevier B.V. All rights reserved.
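    Only the filtering step that the sDDF shares with the EnSRF lends itself to a compact sketch; the Stirling-interpolation prediction step is omitted here. The fragment below applies a serial (single scalar observation) ensemble square root update of the Whitaker-Hamill form to a random ensemble; the dimensions and observation error are arbitrary.

```python
import numpy as np

def ensrf_update_scalar(ens, h_index, y_obs, r_obs):
    """Serial EnSRF update for one scalar observation of state component
    h_index (Whitaker-Hamill form).  Illustrates only the filtering step
    that the sDDF shares with the EnSRF."""
    x_mean = ens.mean(axis=1, keepdims=True)
    x_pert = ens - x_mean
    n_ens = ens.shape[1]

    hx_pert = x_pert[h_index]                          # observed perturbations
    p_hh = hx_pert @ hx_pert / (n_ens - 1)
    p_xh = x_pert @ hx_pert / (n_ens - 1)

    gain = p_xh / (p_hh + r_obs)                       # Kalman gain (vector)
    alpha = 1.0 / (1.0 + np.sqrt(r_obs / (p_hh + r_obs)))

    x_mean = x_mean + gain[:, None] * (y_obs - ens[h_index].mean())
    x_pert = x_pert - alpha * gain[:, None] * hx_pert[None, :]
    return x_mean + x_pert

rng = np.random.default_rng(3)
ens = rng.normal(0.0, 1.0, size=(40, 20))              # 40-dim state, 20 members
updated = ensrf_update_scalar(ens, h_index=5, y_obs=1.2, r_obs=0.5)
print(updated[5].mean(), updated[5].var(ddof=1))
```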

  10. On a nonlinear Kalman filter with simplified divided difference approximation

    KAUST Repository

    Luo, Xiaodong

    2012-03-01

    We present a new ensemble-based approach that handles nonlinearity based on a simplified divided difference approximation through Stirling's interpolation formula, which is hence called the simplified divided difference filter (sDDF). The sDDF uses Stirling's interpolation formula to evaluate the statistics of the background ensemble during the prediction step, while at the filtering step the sDDF employs the formulae in an ensemble square root filter (EnSRF) to update the background to the analysis. In this sense, the sDDF is a hybrid of Stirling's interpolation formula and the EnSRF method, while the computational cost of the sDDF is less than that of the EnSRF. Numerical comparison between the sDDF and the EnSRF, with the ensemble transform Kalman filter (ETKF) as the representative, is conducted. The experiment results suggest that the sDDF outperforms the ETKF with a relatively large ensemble size, and thus is a good candidate for data assimilation in systems with moderate dimensions. © 2011 Elsevier B.V. All rights reserved.

  11. Simplified alternative to orthogonal field overlap when irradiating a tracheostomy stoma or the hypopharynx

    International Nuclear Information System (INIS)

    Pezner, R.D.; Findley, D.O.

    1981-01-01

    Orthogonal field arrangements are usually employed to irradiate a tumor volume which includes a tracheostomy stoma or the hypopharynx. This approach may produce a significantly greater dose than intended to a small segment of the cervical spinal cord because of field overlap at depth from divergence of the beams. Various sophisticated approaches have been proposed to compensate for this overlap. All require marked precision in reproducing the fields on a daily basis. We propose a simplified approach of initially irradiating the entire treatment volume by anterior and posterior opposed fields. Opposed lateral fields that exclude the spinal cord would then provide local boost treatment. A case example and computer-generated isodose curves are presented

  12. A simplified computational scheme for thermal analysis of LWR spent fuel dry storage and transportation casks

    International Nuclear Information System (INIS)

    Kim, Chang Hyun

    1997-02-01

    A simplified computational scheme for thermal analysis of the LWR spent fuel dry storage and transportation casks has been developed using two-step thermal analysis method incorporating effective thermal conductivity model for the homogenized spent fuel assembly. Although a lot of computer codes and analytical models have been developed for application to the fields of thermal analysis of dry storage and/or transportation casks, some difficulties in its analysis arise from the complexity of the geometry including the rod bundles of spent fuel and the heat transfer phenomena in the cavity of cask. Particularly, if the disk-type structures such as fuel baskets and aluminium heat transfer fins are included, the thermal analysis problems in the cavity are very complex. To overcome these difficulties, cylindrical coordinate system is adopted to calculate the temperature profile of a cylindrical cask body using the multiple cylinder model as the step-1 analysis of the present study. In the step-2 analysis, Cartesian coordinate system is adopted to calculate the temperature distributions of the disk-type structures such as fuel basket and aluminium heat transfer fin using three- dimensional conduction analysis model. The effective thermal conductivity for homogenized spent fuel assembly based on Manteufel and Todreas model is incorporated in step-2 analysis to predict the maximum fuel temperature. The presented two-step computational scheme has been performed using an existing HEATING 7.2 code and the effective thermal conductivity for the homogenized spent fuel assembly has been calculated by additional numerical analyses. Sample analyses of five cases are performed for NAC-STC including normal transportation condition to examine the applicability of the presented simplified computational scheme for thermal analysis of the large LWR spent fuel dry storage and transportation casks and heat transfer characteristics in the cavity of the cask with the disk-type structures

  13. Windows 8 simplified

    CERN Document Server

    McFedries, Paul

    2012-01-01

    The easiest way for visual learners to get started with Windows 8 The popular Simplified series makes visual learning easier than ever, and with more than 360,000 copies sold, previous Windows editions are among the bestselling Visual books. This guide goes straight to the point with easy-to-follow, two-page tutorials for each task. With full-color screen shots and step-by-step directions, it gets beginners up and running on the newest version of Windows right away. Learn to work with the new interface and improved Internet Explorer, manage files, share your computer, and much more. Perfect fo

  14. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.
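    The abstract does not spell out the fitting equations, so the fragment below is only a loose illustration of treating fuzzy data "computationally": each fuzzy observation is reduced to a (center, spread) pair and two conventional least-squares fits are run, one for the centers and one for the spreads. This is an assumption-laden stand-in, not the authors' formulation.

```python
import numpy as np

# Crude illustration: symmetric triangular fuzzy outputs (center, spread),
# fitted with two ordinary least-squares regressions.  Not the paper's method.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y_center = 2.0 + 0.8 * x + rng.normal(0.0, 0.3, x.size)
y_spread = 0.5 + 0.05 * x                     # vagueness grows with x (assumed)

A = np.c_[np.ones_like(x), x]
coef_center, *_ = np.linalg.lstsq(A, y_center, rcond=None)
coef_spread, *_ = np.linalg.lstsq(A, y_spread, rcond=None)

x_new = 7.5
center = coef_center @ [1.0, x_new]
spread = max(coef_spread @ [1.0, x_new], 0.0)
print(f"prediction at x={x_new}: about {center:.2f} +/- {spread:.2f}")
```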

  15. A comprehensive approach to dark matter studies: exploration of simplified top-philic models

    Energy Technology Data Exchange (ETDEWEB)

    Arina, Chiara; Backović, Mihailo [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Conte, Eric [Groupe de Recherche de Physique des Hautes Énergies (GRPHE), Université de Haute-Alsace,IUT Colmar, F-68008 Colmar Cedex (France); Fuks, Benjamin [Sorbonne Universités, UPMC University Paris 06, UMR 7589, LPTHE, F-75005, Paris (France); CNRS, UMR 7589, LPTHE, F-75005, Paris (France); Guo, Jun [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,Chinese Academy of Sciences, Beijing 100190 (China); Institut Pluridisciplinaire Hubert Curien/Département Recherches Subatomiques,Université de Strasbourg/CNRS-IN2P3, F-67037 Strasbourg (France); Heisig, Jan [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, D-52056 Aachen (Germany); Hespel, Benoît [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Krämer, Michael [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, D-52056 Aachen (Germany); Maltoni, Fabio; Martini, Antony [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Mawatari, Kentarou [Laboratoire de Physique Subatomique et de Cosmologie, Université Grenoble-Alpes,CNRS/IN2P3, 53 Avenue des Martyrs, F-38026 Grenoble (France); Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel andInternational Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Pellen, Mathieu [Universität Würzburg, Institut für Theoretische Physik und Astrophysik,Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Vryonidou, Eleni [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium)

    2016-11-21

    Studies of dark matter lie at the interface of collider physics, astrophysics and cosmology. Constraining models featuring dark matter candidates entails the capability to provide accurate predictions for large sets of observables and compare them to a wide spectrum of data. We present a framework which, starting from a model Lagrangian, allows one to consistently and systematically make predictions, as well as to confront those predictions with a multitude of experimental results. As an application, we consider a class of simplified dark matter models where a scalar mediator couples only to the top quark and a fermionic dark sector (i.e. the simplified top-philic dark matter model). We study in detail the complementarity of relic density, direct/indirect detection and collider searches in constraining the multi-dimensional model parameter space, and efficiently identify regions where individual approaches to dark matter detection provide the most stringent bounds. In the context of collider studies of dark matter, we point out the complementarity of LHC searches in probing different regions of the model parameter space with final states involving top quarks, photons, jets and/or missing energy. Our study of dark matter production at the LHC goes beyond the tree-level approximation and we show examples of how higher-order corrections to dark matter production processes can affect the interpretation of the experimental results.

  16. Communication: A simplified coupled-cluster Lagrangian for polarizable embedding.

    Science.gov (United States)

    Krause, Katharina; Klopper, Wim

    2016-01-28

    A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian.

  17. Communication: A simplified coupled-cluster Lagrangian for polarizable embedding

    International Nuclear Information System (INIS)

    Krause, Katharina; Klopper, Wim

    2016-01-01

    A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian

  18. MOBILE CLOUD COMPUTING APPLIED TO HEALTHCARE APPROACH

    OpenAIRE

    Omar AlSheikSalem

    2016-01-01

    In the past few years it has become clear that mobile cloud computing was established by integrating mobile computing and cloud computing so as to gain both storage space and processing speed. Integrating healthcare applications and services is one of the large-scale data approaches that can be adapted to mobile cloud computing. This work proposes a framework for global healthcare computing that combines mobile computing and cloud computing. This approach leads to the integration of all of ...

  19. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    Directory of Open Access Journals (Sweden)

    Dimitrov BD

    2015-04-01

    Full Text Available Borislav D Dimitrov,1,2 Nicola Motterlini,2,† Tom Fahey2 1Academic Unit of Primary Care and Population Sciences, University of Southampton, Southampton, United Kingdom; 2HRB Centre for Primary Care Research, Department of General Medicine, Division of Population Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland †Nicola Motterlini passed away on November 11, 2012 Objective: Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible or sufficient, or when no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: (a) the ABCD2 rule for prediction of 7-day stroke; and (b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from the CPR distribution patterns ("derivation model"). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of "predicted:observed" risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases with miscalibration, the under
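    As a hedged illustration of the pooling step, the fragment below combines predicted:observed ratios from several validation studies on the log scale with inverse-variance (fixed-effect) weights, treating predicted counts as fixed and observed counts as Poisson; the counts are invented and the exact weighting and intercept adjustments of the paper are not reproduced.

```python
import numpy as np

def pooled_p_to_o(predicted, observed):
    """Fixed-effect (inverse-variance) pooling of predicted:observed ratios
    across validation studies, on the log scale.  Treating predicted counts
    as fixed and observed counts as Poisson (SE of log ratio ~ 1/sqrt(O)) is
    a common simplification, not necessarily the exact weighting of the paper."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    log_rr = np.log(predicted / observed)
    var = 1.0 / observed
    w = 1.0 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    rr = np.exp(pooled)
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return rr, ci

# Hypothetical 7-day stroke counts in three validation studies.
predicted = [18.2, 40.5, 9.7]
observed = [15, 46, 8]
print(pooled_p_to_o(predicted, observed))
```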

  20. Theoretical assessment of a proposal for the simplified determination of critical loads of elastic shells

    International Nuclear Information System (INIS)

    Malmberg, T.

    1986-08-01

    Within the context of the stability analysis of the cryostat of a fusion reactor, the question was raised whether or not the rather lengthy conventional stability analysis can be circumvented by applying a simplified strategy based on common linear Finite Element computer programs. This strategy involves the static linear deformation analysis of the structure with and without imperfections. For some simple stability problems this approach has been shown to be successful. The purpose of this study is to derive a general proof of the validity of this approach for thin shells with arbitrary geometry under hydrostatic pressure or dead loading along the boundary. This general assessment involves two types of analyses: 1) A general stability analysis for thin shells; this is based on a simple nonlinear shell theory and a stability criterion in the form of the neutral (indifferent) equilibrium condition. This result is taken as the reference solution. 2) A general linear deformation analysis for thin imperfect shells and the definition of a suitable scalar parameter (β-parameter) which should represent the reciprocal of the critical load factor. It is shown that the simplified strategy (the 'β-parameter approach') generally is not capable of predicting the actual critical load factor, irrespective of whether there is hydrostatic pressure loading or dead loading along the edge of the shell. This general result is in contrast to the observations made for some simple stability problems. Nevertheless, the results of this study do not exclude the possibility that the simplified strategy will give reasonable approximate solutions at least for a restricted class of stability problems. (orig./HP)

  1. On matrix-model approach to simplified Khovanov-Rozansky calculus

    Science.gov (United States)

    Morozov, A.; Morozov, And.; Popolitov, A.

    2015-10-01

    Wilson-loop averages in Chern-Simons theory (HOMFLY polynomials) can be evaluated in different ways - the most difficult, but most interesting of them is the hypercube calculus, the only one applicable to virtual knots and used also for categorification (higher-dimensional extension) of the theory. We continue the study of quantum dimensions, associated with hypercube vertices, in the drastically simplified version of this approach to knot polynomials. At q = 1 the problem is reformulated in terms of fat (ribbon) graphs, where Seifert cycles play the role of vertices. Ward identities in associated matrix model provide a set of recursions between classical dimensions. For q ≠ 1 most of these relations are broken (i.e. deformed in a still uncontrollable way), and only few are protected by Reidemeister invariance of Chern-Simons theory. Still they are helpful for systematic evaluation of entire series of quantum dimensions, including negative ones, which are relevant for virtual link diagrams. To illustrate the effectiveness of developed formalism we derive explicit expressions for the 2-cabled HOMFLY of virtual trefoil and virtual 3.2 knot, which involve respectively 12 and 14 intersections - far beyond any dreams with alternative methods. As a more conceptual application, we describe a relation between the genus of fat graph and Turaev genus of original link diagram, which is currently the most effective tool for the search of thin knots.

  2. A convolutional approach to reflection symmetry

    DEFF Research Database (Denmark)

    Cicconet, Marcelo; Birodkar, Vighnesh; Lund, Mads

    2017-01-01

    We present a convolutional approach to reflection symmetry detection in 2D. Our model, built on the products of complex-valued wavelet convolutions, simplifies previous edge-based pairwise methods. Being parameter-centered, as opposed to feature-centered, it has certain computational advantages w...

  3. A simplified modelling approach for quantifying tillage effects on soil carbon stocks

    DEFF Research Database (Denmark)

    Chatskikh, Dmitri; Hansen, Søren; Olesen, Jørgen E.

    2009-01-01

    Soil tillage has been shown to affect long-term changes in soil organic carbon (SOC) content in a number of field experiments. This paper presents a simplified approach for including effects of tillage in models of soil C turnover in the tilled-soil layer. We used an existing soil organic matter (SOM) model (CN-SIM) with standard SOC data for a homogeneous tilled layer from four long-term field experiments with conventionally tilled (CT) and no-till (NT) treatments. The SOM model was tested on data from long-term (>10 years) field trials differing in climatic conditions, soil properties, residue management and crop rotations in Australia, Brazil, the USA and Switzerland. The C input for the treatments was estimated using data on crop rotation and residue management. The SOM model was applied for both CT and NT trials without recalibration, but incorporated a 'tillage factor' (TF) to scale

  4. A simplified CT-guided approach for greater occipital nerve infiltration in the management of occipital neuralgia.

    Science.gov (United States)

    Kastler, Adrian; Onana, Yannick; Comte, Alexandre; Attyé, Arnaud; Lajoie, Jean-Louis; Kastler, Bruno

    2015-08-01

    To evaluate the efficacy of a simplified CT-guided greater occipital nerve (GON) infiltration approach in the management of occipital neuralgia (ON). Local IRB approval was obtained and written informed consent was waived. Thirty-three patients suffering from severe refractory ON who underwent a total of 37 CT-guided GON infiltrations were included between 2012 and 2014. GON infiltration was performed at the first bend of the GON, between the inferior obliquus capitis and semispinalis capitis muscles, with local anaesthetics and cortivazol. Pain was evaluated via VAS scores. Clinical success was defined by pain relief greater than or equal to 50 % lasting for at least 3 months. The pre-procedure mean pain score was 8/10. Patients suffered from left GON neuralgia in 13 cases, right GON neuralgia in 16 cases and bilateral GON neuralgia in 4 cases. The clinical success rate was 86 %. In case of clinical success, the mean pain relief duration following the procedure was 9.16 months. Simplified CT-guided infiltration appears to be effective in managing refractory ON. With this technique, infiltration of the GON appears to be faster, technically easier and, therefore, safer compared with other previously described techniques. • Occipital neuralgia is a very painful and debilitating condition • GON infiltrations have been successful in the treatment of occipital neuralgia • This simplified technique presents a high efficacy rate with long-lasting pain relief • This infiltration technique does not require contrast media injection for pre-planning • GON infiltration at the first bend appears easier and safer.

  5. Simplified Laboratory Runoff Procedure (SLRP): Procedure and Application

    National Research Council Canada - National Science Library

    Price, Richard

    2000-01-01

    The Simplified Laboratory Runoff Procedure (SLRP) was developed to provide a faster, less expensive approach to evaluate surface runoff water quality from dredged material placed in an upland environment...

  6. Simplified model of a PWR primary circuit

    International Nuclear Information System (INIS)

    Souza, A.L.; Faya, A.J.G.

    1988-07-01

    The computer program RENUR was developed to perform a very simplified simulation of a typical PWR primary circuit. The program has mathematical models for the thermal-hydraulics of the reactor core and the pressurizer, the rest of the circuit being treated as a single volume. Heat conduction in the fuel rod is analyzed by a nodal model. Average and hot channels are treated so that the bulk response of the core and the DNBR can be evaluated. A homogeneous model is employed in the pressurizer. Results are presented for a steady-state situation as well as for a loss of load transient. Agreement with the results of more elaborate computer codes is good, with a substantial reduction in computer costs. (author)

  7. What is computation : An epistemic approach

    NARCIS (Netherlands)

    Wiedermann, Jiří; van Leeuwen, Jan

    2015-01-01

    Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too

  8. A simplified dynamic analysis for reactor piping systems under blowdown conditions

    International Nuclear Information System (INIS)

    Chen, M.M.

    1975-01-01

    In the design of pipelines in a nuclear power plant for blowdown conditions, it is customary to conduct dynamic analysis of the piping system to obtain the responses and the resulting stresses. Calculations are repeated for each design modification in piping geometry or supporting system until the design codes are met. The numerical calculations are, in general, very costly and time-consuming. Until now, there have been no simple means for calculating the dynamic responses for the design. The proposed method reduces the dynamic calculation to a quasi-static one, and can be beneficially used for the preliminary design. The method is followed by a complete dynamical analysis to improve the final results. The new formulations greatly simplify the numerical computation and provide design guides. When used to design a given piping system, the method saved approximately one order of magnitude of computer time. The approach can also be used for other types of structures.

  9. Studies and research concerning BNFP. Identification and simplified modeling of economically important radwaste variables

    International Nuclear Information System (INIS)

    Ebel, P.E.; Godfrey, W.L.; Henry, J.L.; Postles, R.L.

    1983-09-01

    An extensive computer model describing the mass balance and economic characteristics of radioactive waste disposal systems was exercised in a series of runs designed using linear statistical methods. The most economically important variables were identified, their behavior characterized, and a simplified computer model prepared which runs on desk-top minicomputers. This simplified model allows the investigation of the effects of the seven most significant variables in each of four waste areas: Liquid Waste Storage, Liquid Waste Solidification, General Process Trash Handling, and Hulls Handling. 8 references, 1 figure, 12 tables

  10. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    Science.gov (United States)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
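
    The bookkeeping at the heart of this kind of approach can be sketched in a few lines. The fragment below is a toy illustration only, not the PInc-PanTher code: it applies first-order decay to three carbon pools during thawed time steps, keeps the stocks inert while frozen, and scales the decay constants with a Q10-type temperature factor. The pool sizes, rate constants, Q10 value and warming trajectory are all invented for the example.

```python
import numpy as np

def permafrost_c_loss(pools, k_ref, q10, soil_temp, dt=1.0, t_ref=5.0):
    """Toy three-pool, first-order decomposition bookkeeping.

    pools     : initial C stocks of the fast/slow/passive pools (kg C m-2)
    k_ref     : first-order decay constants at the reference temperature (1/yr)
    q10       : temperature sensitivity of decomposition
    soil_temp : sequence of annual mean soil temperatures (deg C)
    Carbon is treated as inert while frozen (temperature <= 0 deg C).
    """
    pools = np.array(pools, dtype=float)
    released = 0.0
    for temp in soil_temp:
        if temp <= 0.0:                         # frozen: no decomposition at all
            continue
        k = k_ref * q10 ** ((temp - t_ref) / 10.0)
        loss = pools * (1.0 - np.exp(-k * dt))  # first-order decay over one step
        pools -= loss
        released += loss.sum()
    return pools, released

# Invented numbers: three pools and a slow thaw trajectory over 2010-2100
years = np.arange(2010, 2101)
temps = -2.0 + 0.05 * (years - 2010)
final_pools, c_released = permafrost_c_loss(
    pools=[2.0, 10.0, 40.0],
    k_ref=np.array([0.5, 0.05, 0.001]),
    q10=2.5,
    soil_temp=temps)
print(f"carbon released by 2100: {c_released:.2f} kg C m-2")
```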

  11. On matrix-model approach to simplified Khovanov–Rozansky calculus

    Directory of Open Access Journals (Sweden)

    A. Morozov

    2015-10-01

    Full Text Available Wilson-loop averages in Chern–Simons theory (HOMFLY polynomials) can be evaluated in different ways – the most difficult, but most interesting of them is the hypercube calculus, the only one applicable to virtual knots and used also for categorification (higher-dimensional extension of the theory). We continue the study of quantum dimensions, associated with hypercube vertices, in the drastically simplified version of this approach to knot polynomials. At q=1 the problem is reformulated in terms of fat (ribbon) graphs, where Seifert cycles play the role of vertices. Ward identities in associated matrix model provide a set of recursions between classical dimensions. For q≠1 most of these relations are broken (i.e. deformed in a still uncontrollable way), and only few are protected by Reidemeister invariance of Chern–Simons theory. Still they are helpful for systematic evaluation of entire series of quantum dimensions, including negative ones, which are relevant for virtual link diagrams. To illustrate the effectiveness of developed formalism we derive explicit expressions for the 2-cabled HOMFLY of virtual trefoil and virtual 3.2 knot, which involve respectively 12 and 14 intersections – far beyond any dreams with alternative methods. As a more conceptual application, we describe a relation between the genus of fat graph and Turaev genus of original link diagram, which is currently the most effective tool for the search of thin knots.

  12. Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography

    International Nuclear Information System (INIS)

    Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka

    1989-01-01

    Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and a final image is displayed on the film. In the angiograms performed with CR, the spatial frequency components can be enhanced for the easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity and a latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiography using CR and a mobile X-ray unit were taken after the manual injection of a small amount of a contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good, compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method is found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of a contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)

  13. The Computational Properties of a Simplified Cortical Column Model.

    Science.gov (United States)

    Cain, Nicholas; Iyer, Ramakrishnan; Koch, Christof; Mihalas, Stefan

    2016-09-01

    The mammalian neocortex has a repetitious, laminar structure and performs functions integral to higher cognitive processes, including sensory perception, memory, and coordinated motor output. What computations does this circuitry subserve that link these unique structural elements to their function? Potjans and Diesmann (2014) parameterized a four-layer, two cell type (i.e. excitatory and inhibitory) model of a cortical column with homogeneous populations and cell type dependent connection probabilities. We implement a version of their model using a displacement integro-partial differential equation (DiPDE) population density model. This approach, exact in the limit of large homogeneous populations, provides a fast numerical method to solve equations describing the full probability density distribution of neuronal membrane potentials. It lends itself to quickly analyzing the mean response properties of population-scale firing rate dynamics. We use this strategy to examine the input-output relationship of the Potjans and Diesmann cortical column model to understand its computational properties. When inputs are constrained to jointly and equally target excitatory and inhibitory neurons, we find a large linear regime where the effect of a multi-layer input signal can be reduced to a linear combination of component signals. One of these, a simple subtractive operation, can act as an error signal passed between hierarchical processing stages.

  14. Simplified Metrics Calculation for Soft Bit Detection in DVB-T2

    Directory of Open Access Journals (Sweden)

    D. Perez-Calderon

    2014-04-01

    Full Text Available The constellation rotation and cyclic quadrature component delay (RQD) technique has been adopted in the second-generation terrestrial digital video broadcasting (DVB-T2) standard. It improves the system performance under severe propagation conditions, but introduces serious complexity problems in the hardware implementation of the detection process. In this paper, we present a simplified scheme that greatly reduces the complexity of the demapper by simplifying the soft bit metric computation, with a negligible overall system performance loss.
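
    A generic soft-bit metric computation can be sketched as follows. This is not the rotated-constellation demapper of the paper, but it shows the max-log simplification that this kind of scheme builds on: the exact log-sum over constellation points is replaced by the nearest point within each bit subset. The constellation, bit labelling and noise variance are illustrative.

```python
import numpy as np

def maxlog_llrs(y, symbols, labels, noise_var):
    """Max-log soft-bit metrics (LLRs) for one received complex sample.

    symbols : constellation points (complex array)
    labels  : (n_symbols, n_bits) array of the 0/1 bit labels of each point
    Returns one LLR per bit; positive values favour bit = 0.
    """
    d2 = np.abs(y - symbols) ** 2 / noise_var       # scaled squared distances
    llrs = np.empty(labels.shape[1])
    for b in range(labels.shape[1]):
        d_bit0 = d2[labels[:, b] == 0].min()        # closest point carrying a 0
        d_bit1 = d2[labels[:, b] == 1].min()        # closest point carrying a 1
        llrs[b] = d_bit1 - d_bit0                   # max-log approximation
    return llrs

# Gray-labelled QPSK example (illustrative)
qpsk = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
bits = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(maxlog_llrs(0.9 + 0.8j, qpsk, bits, noise_var=0.2))
```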

  15. Indirect detection constraints on s- and t-channel simplified models of dark matter

    Science.gov (United States)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for singular annihilation final states, such as b b ¯ or τ+τ- . In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.
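
    The stacking step itself amounts to summing per-dwarf log-likelihood profiles evaluated on a common cross-section grid. The sketch below is a schematic illustration with invented parabolic likelihoods and a commonly used one-sided threshold; it is not the Fermi-LAT analysis chain.

```python
import numpy as np

def joint_delta_loglike(sigma_v_grid, dwarf_loglikes):
    """Stack per-dwarf log-likelihood curves into a joint profile.

    dwarf_loglikes : list of callables, each mapping <sigma v> to the
                     log-likelihood of one dwarf (already profiled over
                     its J-factor uncertainty).
    Returns the joint log-likelihood relative to its maximum.
    """
    total = np.zeros_like(sigma_v_grid)
    for loglike in dwarf_loglikes:
        total += loglike(sigma_v_grid)
    return total - total.max()

# Toy per-dwarf likelihoods: parabolic in <sigma v>, peaking at zero signal
def make_toy_dwarf(sensitivity):
    return lambda sv: -0.5 * (sv / sensitivity) ** 2

dwarfs = [make_toy_dwarf(s) for s in (1e-26, 3e-26, 5e-27)]
grid = np.linspace(0.0, 1e-25, 1000)
delta = joint_delta_loglike(grid, dwarfs)

# One-sided upper limit where the joint profile has dropped by ~1.35 (95% CL)
limit = grid[np.searchsorted(-delta, 1.35)]
print(f"toy 95% CL upper limit on <sigma v>: {limit:.2e} cm^3 s^-1")
```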

  16. Cognitive Approaches for Medicine in Cloud Computing.

    Science.gov (United States)

    Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia

    2018-03-03

    This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing are aimed to support the processes of protecting data against unauthorised takeover and they serve to enhance the data management processes. The accomplishment of the proposed tasks will be the definition of algorithms for the execution of meaning data interpretation processes in safe Cloud Computing. • We proposed a cognitive methods for data description. • Proposed a techniques for secure data in Cloud Computing. • Application of cognitive approaches for medicine was described.

  17. Office 2013 simplified

    CERN Document Server

    Marmel, Elaine

    2013-01-01

    A basic introduction to learning Office 2013 quickly, easily, and in full color. Office 2013 has new features and tools to master, and whether you're upgrading from an earlier version or using the Office applications for the first time, you'll appreciate this simplified approach. Offering a clear, visual style of learning, this book provides you with concise, step-by-step instructions and full-color screen shots that walk you through the applications in the Microsoft Office 2013 suite: Word, Excel, PowerPoint, Outlook, and Publisher. Shows you how to tackle dozens of Office 2013

  18. Feasibility of implementation of a "simplified, No-X-Ray, no-lead apron, two-catheter approach" for ablation of supraventricular arrhythmias in children and adults.

    Science.gov (United States)

    Stec, Sebastian; Śledź, Janusz; Mazij, Mariusz; Raś, Małgorzata; Ludwik, Bartosz; Chrabąszcz, Michał; Śledź, Arkadiusz; Banasik, Małgorzata; Bzymek, Magdalena; Młynarczyk, Krzysztof; Deutsch, Karol; Labus, Michał; Śpikowski, Jerzy; Szydłowski, Lesław

    2014-08-01

    Although the "near-zero-X-Ray" or "No-X-Ray" catheter ablation (CA) approach has been reported for treatment of various arrhythmias, few prospective studies have strictly used "No-X-Ray," simplified 2-catheter approaches for CA in patients with supraventricular tachycardia (SVT). We assessed the feasibility of a minimally invasive, nonfluoroscopic (MINI) CA approach in such patients. Data were obtained from a prospective multicenter CA registry of patients with regular SVTs. After femoral access, 2 catheters were used to create simple, 3D electroanatomic maps and to perform electrophysiologic studies. Medical staff did not use lead aprons after the first 10 MINI CA cases. A total of 188 patients (age, 45 ± 21 years; 17% 0.05), major complications (0% vs. 0%, P > 0.05) and acute (98% vs. 98%, P > 0.05) and long-term (93% vs. 94%, P > 0.05) success rates were similar in the "No-X-Ray" and control groups. Implementation of a strict "No-X-Ray, simplified 2-catheter" CA approach is safe and effective in majority of the patients with SVT. This modified approach for SVTs should be prospectively validated in a multicenter study. © 2014 Wiley Periodicals, Inc.

  19. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of the actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
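
    A minimal sketch of such a two-step procedure might look as follows, assuming one commonly used dislocation-based closed form for the phase flow stress (a Rodriguez-Gutierrez-type expression) and an iso-strain rule of mixtures; all material parameters are placeholders rather than the values fitted in the paper.

```python
import numpy as np

def phase_flow_stress(eps, sigma0, k, L, alpha=0.33, M=3.0, mu=80e3, b=2.5e-10):
    """Dislocation-based strain hardening (Rodriguez-Gutierrez-type closed form).

    sigma = sigma0 + alpha*M*mu*sqrt(b) * sqrt((1 - exp(-M*k*eps)) / (k*L))
    sigma0 and mu in MPa; b and L (dislocation mean free path) in metres;
    k is the recovery parameter. All values used below are placeholders.
    """
    return sigma0 + alpha * M * mu * np.sqrt(b) * np.sqrt(
        (1.0 - np.exp(-M * k * eps)) / (k * L))

def dual_phase_flow_curve(eps, f_martensite):
    """Iso-strain rule of mixtures over the ferrite and martensite flow curves."""
    sigma_ferrite = phase_flow_stress(eps, sigma0=300.0, k=1e-4, L=5.0e-6)
    sigma_martensite = phase_flow_stress(eps, sigma0=800.0, k=41.0, L=3.8e-8)
    return f_martensite * sigma_martensite + (1.0 - f_martensite) * sigma_ferrite

eps = np.linspace(1e-4, 0.15, 50)
sigma = dual_phase_flow_curve(eps, f_martensite=0.30)
print(f"flow stress at 0.01% / 15% strain: {sigma[0]:.0f} / {sigma[-1]:.0f} MPa")
```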

  20. Computer modelling for ecosystem service assessment: Chapter 4.4

    Science.gov (United States)

    Dunford, Robert; Harrison, Paula; Bagstad, Kenneth J.

    2017-01-01

    Computer models are simplified representations of the environment that allow biophysical, ecological, and/or socio-economic characteristics to be quantified and explored. Modelling approaches differ from mapping approaches (Chapter 5) as (i) they are not forcibly spatial (although many models do produce spatial outputs); (ii) they focus on understanding and quantifying the interactions between different components of social and/or environmental systems and (iii)

  1. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  2. Update and Improve Subsection NH - Alternative Simplified Creep-Fatigue Design Methods

    International Nuclear Information System (INIS)

    Asayama, Tai

    2009-01-01

    This report describes the results of the investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) the approach for pressure vessel application, (4) the hybrid method of time fraction and ductility exhaustion, and (5) the simplified model test approach. The outlines of these methods are presented first, and their ability to predict experimental results is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted experimental results fairly accurately. On the other hand, the predicted creep-fatigue lives in long-term regions showed considerable differences among the methodologies. These differences come from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that deserve careful consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer additional advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on newly proposed creep-fatigue evaluation
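
    For orientation, the damage bookkeeping shared by the time-fraction and ductility-exhaustion families can be written as simple summations. The sketch below is a generic illustration with invented material functions; it is not the evaluation procedure of any of the five candidate methods.

```python
def creep_fatigue_damage(cycles, hold_time_h, strain_range,
                         n_allow, rupture_time_h, creep_strain_rate,
                         creep_ductility):
    """Generic creep-fatigue damage sums (illustrative material data only).

    Fatigue term:              Df = n / N_allow(strain range)
    Time-fraction creep term:  Dc = n * t_hold / t_rupture
    Ductility-exhaustion term: Dc = n * (creep strain per hold) / ductility
    A design check then compares the (Df, Dc) point against the interaction
    envelope of the applicable code case.
    """
    d_fatigue = cycles / n_allow(strain_range)
    d_time_fraction = cycles * hold_time_h / rupture_time_h
    d_ductility = cycles * creep_strain_rate * hold_time_h / creep_ductility
    return d_fatigue, d_time_fraction, d_ductility

# Invented allowable-cycles curve and creep properties, for illustration only
n_allow = lambda d_eps: 5.0e3 * (0.01 / d_eps) ** 2

df, dc_tf, dc_de = creep_fatigue_damage(
    cycles=1000, hold_time_h=1.0, strain_range=0.006,
    n_allow=n_allow, rupture_time_h=2.0e4,
    creep_strain_rate=1.0e-5, creep_ductility=0.10)
print(f"Df = {df:.3f}; Dc(time fraction) = {dc_tf:.3f}; "
      f"Dc(ductility exhaustion) = {dc_de:.3f}")
```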

  3. Simplified distributed parameters BWR dynamic model for transient and stability analysis

    International Nuclear Information System (INIS)

    Espinosa-Paredes, Gilberto; Nunez-Carrera, Alejandro; Vazquez-Rodriguez, Alejandro

    2006-01-01

    This paper describes a simplified model to perform transient and linear stability analyses for a typical boiling water reactor (BWR). The simplified transient model is based on lumped- and distributed-parameter approximations and includes the vessel dome and downcomer, recirculation loops, neutronic processes, fuel pin temperature distribution, lower and upper plenums, the reactor core, and the pressure and level controls. Stability was determined by studying the linearized versions of the equations representing the BWR system in the frequency domain. Numerical examples are used to illustrate the wide application of the simplified BWR model. We conclude that this simplified model properly describes the dynamics of a BWR and can be used for safety analysis or as a first approach in the design of an advanced BWR.
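
    The linear stability part of such an analysis amounts to linearizing the state equations about an operating point and checking the poles (eigenvalues) of the resulting system. The sketch below applies this recipe to a deliberately crude two-state power/feedback surrogate, not to the BWR model of the paper; all coefficients are invented.

```python
import numpy as np

def jacobian(f, x0, eps=1e-7):
    """Forward-difference Jacobian of f about the operating point x0."""
    f0 = f(x0)
    J = np.zeros((len(f0), len(x0)))
    for j in range(len(x0)):
        dx = np.zeros(len(x0))
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f0) / eps
    return J

def toy_plant(x, p0=1.0, a=0.5, c=2.0, fb=0.1):
    """Crude two-state power/feedback surrogate (not a BWR model): power
    responds to net reactivity, and a lagged feedback term opposes power
    excursions."""
    power, rho = x
    return np.array([(rho - fb) * power,
                     -a * rho - c * (power - p0)])

x_eq = np.array([1.0 - 0.5 * 0.1 / 2.0, 0.1])   # steady state of the toy model
poles = np.linalg.eigvals(jacobian(toy_plant, x_eq))
stable = np.all(poles.real < 0)
print("poles:", np.round(poles, 3), "->", "stable" if stable else "unstable")
```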

  4. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    Science.gov (United States)

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some previous experimental work to get a suitable description of the pKa change with the mobile phase composition. In the present study this previous experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on their functional group. Forced by this new approach, other simplifications regarding the retention of the totally neutral and totally ionized species also had to be performed. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave pretty good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
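
    The building block behind such predictions is the retention of a monoprotic acid at a given pH and mobile phase composition, commonly written as the mole-fraction weighted average of the retention factors of its neutral and ionized forms, with the pKa shifted with organic modifier content. A minimal sketch under that assumption is shown below; the functional-group coefficient and retention values are placeholders.

```python
def pka_in_mobile_phase(pka_water, phi_mecn, slope):
    """Approximate linear pKa shift with acetonitrile volume fraction.
    The slope is a functional-group-dependent coefficient (placeholder here)."""
    return pka_water + slope * phi_mecn

def retention_factor(ph, phi_mecn, k_neutral, k_ionized, pka_water, slope):
    """Retention factor of a monoprotic acid as the mole-fraction weighted
    average of its neutral (HA) and ionized (A-) forms."""
    pka = pka_in_mobile_phase(pka_water, phi_mecn, slope)
    ratio = 10.0 ** (ph - pka)                      # [A-] / [HA]
    return (k_neutral + k_ionized * ratio) / (1.0 + ratio)

# Illustrative benzoic-acid-like analyte at pH 4.0 and 30% acetonitrile
print(retention_factor(ph=4.0, phi_mecn=0.30, k_neutral=8.0, k_ionized=0.5,
                       pka_water=4.2, slope=3.0))
```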

  5. Radioiodine treatment of hyperthyroidism with a simplified dosimetric approach. Clinical results

    International Nuclear Information System (INIS)

    Giovanella, L.; De Palma, D.; Ceriani, L.; Garancini, S.; Vanoli, P.; Tordiglione, M.; Tarolo, G. L.

    2000-01-01

    This article evaluates the clinical effectiveness of a simplified dosimetric approach to the iodine-131 treatment of hyperthyroidism due to Graves' disease or uninodular and multinodular toxic goiter. 189 patients with biochemically confirmed hyperthyroidism were enrolled; thyroid ultrasonography and scintigraphy yielded a diagnosis of Graves' disease in 43 patients, uninodular toxic goiter in 57 patients and multinodular toxic goiter in 89 patients. Cold thyroid nodules were found in 28 patients, and fine-needle aspiration showed negative cytology for thyroid malignancy in all cases. Antithyroid drugs were stopped 5 days before radioiodine administration and, if necessary, resumed 15 days after the treatment. A radioiodine uptake test was performed in all patients and the therapeutic activity was calculated to obtain a minimal activity of 185 MBq in the thyroid 24 hours after administration. The minimal activity was adjusted based on clinical, biochemical and imaging data, up to a maximal activity of 370 MBq in the thyroid after 24 hours. Biochemical and clinical tests were scheduled at 3 and 12 months post-treatment and thyroxine treatment was started when hypothyroidism occurred. In Graves' disease patients a mean activity of 370 MBq (range 259-555 MBq) was administered. Three months after treatment, and at least 15 days after methimazole discontinuation, 32 of 43 (74%) patients were hypothyroid, 5 of 43 (11%) euthyroid and 6 of 43 (15%) hyperthyroid. Three of the latter were immediately submitted to a new radioiodine administration, while the 32 hypothyroid patients received thyroxine treatment. One year after the radioiodine treatment no patient had hyperthyroidism; 38 of 43 (89%) were on replacement treatment while 5 (11%) remained euthyroid. In uni- and multinodular toxic goiter a mean activity of 444 MBq (range 259-555 MBq) was administered. Three months post-treatment 134 of 146 (92%) patients were euthyroid and
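
    The activity prescription described above (at least 185 MBq retained in the thyroid at 24 h, at most 370 MBq) can be sketched as a simple calculation from the measured 24-hour uptake. The clipping to the administered range reported in the study and the decision rules below are an illustrative simplification, not the authors' clinical protocol.

```python
def administered_activity(uptake_24h, target_retained=185.0, cap_retained=370.0,
                          min_admin=259.0, max_admin=555.0):
    """Administered I-131 activity (MBq) from the fractional 24-h uptake.

    Aims for at least `target_retained` MBq in the thyroid at 24 h without
    exceeding `cap_retained` MBq retained, then clips the result to the
    administered range reported in the study. The adjustment on clinical,
    biochemical and imaging data is not modelled here.
    """
    if not 0.0 < uptake_24h <= 1.0:
        raise ValueError("uptake_24h must be a fraction in (0, 1]")
    activity = target_retained / uptake_24h                # meet the 185 MBq target
    activity = min(activity, cap_retained / uptake_24h)    # respect the 370 MBq cap
    return min(max(activity, min_admin), max_admin)

print(administered_activity(uptake_24h=0.45))              # about 411 MBq administered
```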

  6. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  7. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  8. Simplified proceeding as a civil procedure model

    Directory of Open Access Journals (Sweden)

    Олексій Юрійович Зуб

    2016-01-01

    shall mean a specific, additional form of consideration and resolution of civil cases that is based on voluntary recourse to it, is characterized by a reduced set of procedural rules and ends with the rendering of a particular kind of judicial decision. Moreover, the most common features of summary proceedings are highlighted. Simplified proceedings, as a specific form of consideration of civil-law disputes and as a special way to optimize legal proceedings, possess a set of distinctive features that set them apart from other proceedings. The analyzed features are treated as basic, that is, as features peculiar to this kind of proceeding throughout its development and its direct application in civil procedural law.

  9. Computer programs simplify optical system analysis

    Science.gov (United States)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  10. A simplified quantum gravitational model of inflation

    International Nuclear Information System (INIS)

    Tsamis, N C; Woodard, R P

    2009-01-01

    Inflationary quantum gravity simplifies drastically in the leading logarithm approximation. We show that the only counterterm which contributes in this limit is the 1-loop renormalization of the cosmological constant. We go further to make a simplifying assumption about the operator dynamics at leading logarithm order. This assumption is explicitly implemented at 1- and 2-loop orders, and we describe how it can be implemented nonperturbatively. We also compute the expectation value of an invariant observable designed to quantify the quantum gravitational back-reaction on inflation. Although our dynamical assumption may not prove to be completely correct, it does have the right time dependence, it can naturally produce primordial perturbations of the right strength, and it illustrates how a rigorous application of the leading logarithm approximation might work in quantum gravity. It also serves as a partial test of the 'null hypothesis' that there are no significant effects from infrared gravitons.

  11. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  12. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order both to reduce the over-conservatism of the deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and to simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of the maximum value of the mean RIP times the corresponding RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power-history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
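
    The system-moment step can be illustrated with first-order variance propagation: the RIP variance is approximated by the square sum of (sensitivity x uncertainty) terms, each taken at its bounding value. The sensitivities, uncertainties and confidence factor in the sketch below are invented for the example, not values from the paper.

```python
import numpy as np

def rip_upper_bound(mean_rip, sensitivities, rel_sigmas, z=1.645):
    """Bounding rod internal pressure from first-order (system moment) propagation.

    sensitivities : bounding relative sensitivity of RIP to each input variable
                    (fractional change in RIP per fractional change in input)
    rel_sigmas    : one-standard-deviation relative uncertainty of each input
    The RIP variance is taken as the square sum of the individual
    (sensitivity x uncertainty x mean RIP) terms; z sets the one-sided
    confidence level. All numbers below are invented for the illustration.
    """
    terms = np.asarray(sensitivities) * np.asarray(rel_sigmas) * mean_rip
    sigma_rip = np.sqrt(np.sum(terms ** 2))
    return mean_rip + z * sigma_rip

# Toy inputs: fission gas release, fill pressure, free volume, power history
print(rip_upper_bound(mean_rip=12.0,                     # MPa
                      sensitivities=[1.6, 0.4, 0.9, 0.7],
                      rel_sigmas=[0.10, 0.02, 0.05, 0.04]))
```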

  13. Simplified model of a PWR primary coolant circuit

    International Nuclear Information System (INIS)

    Souza, A.L. de; Faya, A.J.G.

    1988-01-01

    The computer program RENUR was developed to perform a very simplified simulation of a typical PWR primary circuit. The program has mathematical models for the thermal-hydraulics of the reactor core and the pressurizer, the rest of the circuit being treated as a single volume. Heat conduction in the fuel rod is analysed by a nodal model. Average and hot channels are treated so that the bulk response of the core and DNBR can be evaluated. A homogeneous model is employed in the pressurizer. Results are presented for a steady-state situation as well as for a loss-of-load transient. Agreement with the results of more elaborate computer codes is good, with a substantial reduction in computer costs. (author) [pt

  14. Computer architecture a quantitative approach

    CERN Document Server

    Hennessy, John L

    2019-01-01

    Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.

  15. Simplified P_n transport core calculations in the Apollo3 system

    International Nuclear Information System (INIS)

    Baudron, Anne-Marie; Lautard, Jean-Jacques

    2011-01-01

    This paper describes the development of two different neutronics core solvers based on the Simplified P_N transport (SP_N) approximation developed in the context of a new generation nuclear reactor computational system, APOLLO3. Two different approaches have been used. The first one solves the standard SPN system. In the second approach, the SP_N equations are solved as diffusion equations by treating the SP_N flux harmonics like pseudo energy groups, obtained by a change of variable. These two methods have been implemented for Cartesian and hexagonal geometries in the kinetics solver MINOS. The numerical approximation is based on the mixed dual finite formulation and the discretization uses the Raviart-Thomas-Nedelec finite elements. For the unstructured geometries, the SP_N equations are treated by the SN transport solver MINARET by considering the second SP_N approach. The MINARET solver is based on discontinuous Galerkin finite element approximation on cylindrical unstructured meshes composed of a set of conforming triangles for the radial direction. Numerical applications are presented for both solvers in different core configurations (the Jules Horowitz research reactor (JHR) and the Generation IV fast reactor project ESFR). (author)

  16. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  17. Chronic Meningitis: Simplifying a Diagnostic Challenge.

    Science.gov (United States)

    Baldwin, Kelly; Whiting, Chris

    2016-03-01

    Chronic meningitis can be a diagnostic dilemma for even the most experienced clinician. Many times, the differential diagnosis is broad and encompasses autoimmune, neoplastic, and infectious etiologies. This review will focus on a general approach to chronic meningitis to simplify the diagnostic challenges many clinicians face. The article will also review the most common etiologies of chronic meningitis in some detail including clinical presentation, diagnostic testing, treatment, and outcomes. By using a case-based approach, we will focus on the key elements of clinical presentation and laboratory analysis that will yield the most rapid and accurate diagnosis in these complicated cases.

  18. Seismic analysis of long tunnels: A review of simplified and unified methods

    Directory of Open Access Journals (Sweden)

    Haitao Yu

    2017-06-01

    Full Text Available Seismic analysis of long tunnels is important for safety evaluation of the tunnel structure during earthquakes. Simplified models of long tunnels are commonly adopted in seismic design by practitioners, in which the tunnel is usually assumed as a beam supported by the ground. These models can be conveniently used to obtain the overall response of the tunnel structure subjected to seismic loading. However, simplified methods are limited due to the assumptions that need to be made to reach the solution, e.g. shield tunnels are assembled with segments and bolts to form a lining ring and such structural details may not be included in the simplified model. In most cases, the design will require a numerical method that does not have the shortcomings of the analytical solutions, as it can consider the structural details, non-linear behavior, etc. Furthermore, long tunnels have significant length and pass through different strata. All of these would require large-scale seismic analysis of long tunnels with three-dimensional models, which is difficult due to the lack of available computing power. This paper introduces two types of methods for seismic analysis of long tunnels, namely simplified and unified methods. Several models, including the mass-spring-beam model, and the beam-spring model and its analytical solution are presented as examples of the simplified method. The unified method is based on a multiscale framework for long tunnels, with coarse and refined finite element meshes, or with the discrete element method and the finite difference method to compute the overall seismic response of the tunnel while including detailed dynamic response at positions of potential damage or of interest. A bridging scale term is introduced in the framework so that compatibility of dynamic behavior between the macro- and meso-scale subdomains is enforced. Examples are presented to demonstrate the applicability of the simplified and the unified methods.
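
    As an example of the simplified (beam-on-elastic-foundation) class of methods, the classical closed-form factors relating the axial strain and curvature felt by the tunnel to the free-field ground deformation for a sinusoidal travelling wave can be coded in a few lines. The lining stiffnesses and soil spring constants below are placeholders, not design values.

```python
import math

def tunnel_ground_interaction_factors(wavelength, ea, ei, k_axial, k_transverse):
    """Strain/curvature transmitted to a long tunnel modelled as a beam on an
    elastic foundation under a sinusoidal travelling wave.

    wavelength   : apparent wavelength of the seismic wave along the tunnel (m)
    ea, ei       : axial and bending stiffness of the lining (N, N*m^2)
    k_axial      : longitudinal soil spring constant per unit length (N/m^2)
    k_transverse : transverse soil spring constant per unit length (N/m^2)
    A factor of 1.0 means the tunnel simply follows the free-field ground
    deformation; smaller values mean the lining resists the ground.
    """
    w = 2.0 * math.pi / wavelength
    r_axial = 1.0 / (1.0 + (w ** 2) * ea / k_axial)
    r_bending = 1.0 / (1.0 + (w ** 4) * ei / k_transverse)
    return r_axial, r_bending

# Placeholder numbers for a shield tunnel in soft soil
print(tunnel_ground_interaction_factors(wavelength=500.0, ea=2.0e11, ei=5.0e11,
                                         k_axial=2.0e7, k_transverse=4.0e7))
```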

  19. Induced simplified neutrosophic correlated aggregation operators for multi-criteria group decision-making

    Science.gov (United States)

    Şahin, Rıdvan; Zhang, Hong-yu

    2018-03-01

    The induced Choquet integral is a powerful tool for dealing with imprecise or uncertain information. This study proposes a combination of the induced Choquet integral and neutrosophic information. We first give the operational properties of simplified neutrosophic numbers (SNNs). Then, we develop some new information aggregation operators, including an induced simplified neutrosophic correlated averaging (I-SNCA) operator and an induced simplified neutrosophic correlated geometric (I-SNCG) operator. These operators not only consider the importance of elements or their ordered positions, but also take into account the interaction phenomena among decision criteria or their ordered positions under multiple decision-makers. Moreover, we present a detailed analysis of the I-SNCA and I-SNCG operators, including the properties of idempotency, commutativity and monotonicity, and study the relationships among the proposed operators and existing simplified neutrosophic aggregation operators. In order to handle multi-criteria group decision-making (MCGDM) situations where the weights of criteria and decision-makers are usually correlated and the criterion values are given as SNNs, an approach is established based on the I-SNCA operator. Finally, a numerical example is presented to demonstrate the proposed approach and to verify its effectiveness and practicality.
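
    The plain discrete Choquet integral underlying these operators can be sketched directly: criterion values are sorted, and each value is weighted by the increment of the fuzzy measure of the set of criteria scoring at least as well. The fuzzy measure below is a made-up example, and the ordering variable of the "induced" variant is taken to be the values themselves; neutrosophic numbers are not modelled here.

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of `values` (dict: criterion -> score)
    with respect to `measure` (dict: frozenset of criteria -> weight),
    where the measure is monotone and the measure of the full set is 1."""
    ordered = sorted(values, key=values.get, reverse=True)    # best criterion first
    total, prev_mu = 0.0, 0.0
    subset = set()
    for criterion in ordered:
        subset.add(criterion)
        mu = measure[frozenset(subset)]
        total += values[criterion] * (mu - prev_mu)           # weight by the increment
        prev_mu = mu
    return total

# Made-up non-additive measure over three criteria (a and b partly redundant)
measure = {
    frozenset(): 0.0,
    frozenset({'a'}): 0.4, frozenset({'b'}): 0.3, frozenset({'c'}): 0.2,
    frozenset({'a', 'b'}): 0.6, frozenset({'a', 'c'}): 0.7,
    frozenset({'b', 'c'}): 0.5, frozenset({'a', 'b', 'c'}): 1.0,
}
print(choquet_integral({'a': 0.8, 'b': 0.5, 'c': 0.9}, measure))   # 0.73
```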

  20. Computer aided analysis of additional chromosome aberrations in Philadelphia chromosome positive acute lymphoblastic leukaemia using a simplified computer readable cytogenetic notation

    Directory of Open Access Journals (Sweden)

    Mohr Brigitte

    2003-01-01

    Full Text Available Abstract Background The analysis of complex cytogenetic databases of distinct leukaemia entities may help to detect rare recurring chromosome aberrations, minimal common regions of gains and losses, and also hot spots of genomic rearrangements. The patterns of the karyotype alterations may provide insights into the genetic pathways of disease progression. Results We developed a simplified computer readable cytogenetic notation (SCCN) by which chromosome findings are normalised at a resolution of 400 bands. Lost or gained chromosomes or chromosome segments are specified in detail, and ranges of chromosome breakpoint assignments are recorded. Software modules were written to summarise the recorded chromosome changes with regard to the respective chromosome involvement. To assess the degree of karyotype alterations, the ploidy levels and the numbers of numerical and structural changes were recorded separately and summarised in a complex karyotype aberration score (CKAS). The SCCN and CKAS were used to analyse the extent and the spectrum of additional chromosome aberrations in 94 patients with Philadelphia chromosome positive (Ph-positive) acute lymphoblastic leukemia (ALL) and secondary chromosome anomalies. Dosage changes of chromosomal material represented 92.1% of all additional events. Recurring regions of chromosome losses were identified. Structural rearrangements affecting (peri)centromeric chromosome regions were recorded in 24.6% of the cases. Conclusions SCCN and CKAS provide unifying elements between karyotypes and computer-processable data formats. They proved to be useful in the investigation of additional chromosome aberrations in Ph-positive ALL, and may represent a step towards full automation of the analysis of large and complex karyotype databases.

  1. Simple design of slanted grating with simplified modal method.

    Science.gov (United States)

    Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun

    2014-02-15

    A simplified modal method (SMM) is presented that offers a clear physical image for subwavelength slanted grating. The diffraction characteristic of the slanted grating under Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on the equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for design of the slanted grating, e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.

  2. Cerebrospinal fluid dynamics in a simplified model of the human ventricular system

    International Nuclear Information System (INIS)

    Ammourah, S.; Aroussi, A.; Vloeberghs, M.

    2003-01-01

    This study investigates the flow of the Cerebrospinal Fluid (CSF) inside a simplified model of the human ventricular system. Both computational and experimental results are explored. Due to the complexity of the real geometry, a simplified three-dimensional (3-D) model of the ventricular system was constructed with the same volume as the real geometry. The numerical study was conducted using the commercial computational fluid dynamics (CFD) package FLUENT-6. Different CFD cases were solved for flow rates ranging between 100 and 500 ml/day. A physical model, scaled up 4:1 and with the same geometry as the computational model, was built. A diluted dye was injected into the physical model and visualized. From the CFD studies it was found that the flow pattern of the CSF is structured and has a 3-D motion. Recirculating motion takes place in the lateral ventricles in the form of small eddies at each plane. Experimentally, the observed reverse motion of the dye confirms the CFD findings about the presence of a recirculating motion. (author)

  3. Enhancing Membrane Protein Identification Using a Simplified Centrifugation and Detergent-Based Membrane Extraction Approach.

    Science.gov (United States)

    Zhou, Yanting; Gao, Jing; Zhu, Hongwen; Xu, Jingjing; He, Han; Gu, Lei; Wang, Hui; Chen, Jie; Ma, Danjun; Zhou, Hu; Zheng, Jing

    2018-02-20

    Membrane proteins may act as transporters, receptors, enzymes, and adhesion-anchors, accounting for nearly 70% of pharmaceutical drug targets. Difficulties in efficient enrichment, extraction, and solubilization still exist because of their relatively low abundance and poor solubility. A simplified membrane protein extraction approach with advantages of user-friendly sample processing procedures, good repeatability and significant effectiveness was developed in the current research for enhancing enrichment and identification of membrane proteins. This approach combining centrifugation and detergent along with LC-MS/MS successfully identified higher proportion of membrane proteins, integral proteins and transmembrane proteins in membrane fraction (76.6%, 48.1%, and 40.6%) than in total cell lysate (41.6%, 16.4%, and 13.5%), respectively. Moreover, our method tended to capture membrane proteins with high degree of hydrophobicity and number of transmembrane domains as 486 out of 2106 (23.0%) had GRAVY > 0 in membrane fraction, 488 out of 2106 (23.1%) had TMs ≥ 2. It also provided for improved identification of membrane proteins as more than 60.6% of the commonly identified membrane proteins in two cell samples were better identified in membrane fraction with higher sequence coverage. Data are available via ProteomeXchange with identifier PXD008456.

  4. The computer-aided design of a servo system as a multiple-criteria decision problem

    NARCIS (Netherlands)

    Udink ten Cate, A.J.

    1986-01-01

    This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies

  5. A new type of simplified fuzzy rule-based system

    Science.gov (United States)

    Angelov, Plamen; Yager, Ronald

    2012-02-01

    Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems dominated, namely Mamdani and Takagi-Sugeno type. They use the same type of scalar fuzzy sets defined per input variable in their antecedent part which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative to define the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB systems goes further in the conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers alternative non-parametric form of the rules antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations and scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud by the data density distribution of the data associated with that Cloud. Contrast this to the clustering which is parametric data space decomposition/partitioning where the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype ignoring all the data that form that cluster or approximating their distribution. The proposed new approach takes into account fully and exactly the spatial distribution and similarity of all the real data by proposing an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples aiming to illustrate the concept.
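
    A minimal sketch of the density-based membership idea, assuming the Cauchy-type local density often used in this family of methods: the membership of a sample to a Cloud is high when the sample is close, on average, to all samples already associated with that Cloud, so no per-variable membership functions or prototypes are imposed. The data in the example are synthetic.

```python
import numpy as np

def cloud_membership(x, cloud_samples):
    """Density-based membership of sample x to a data Cloud.

    Uses a Cauchy-type local density: 1 / (1 + mean squared distance from x
    to every sample already associated with the Cloud). No prototypes and no
    per-variable membership functions are needed.
    """
    cloud = np.atleast_2d(cloud_samples)
    mean_sq_dist = np.mean(np.sum((cloud - x) ** 2, axis=1))
    return 1.0 / (1.0 + mean_sq_dist)

def winning_cloud(x, clouds):
    """Activation (normalised density) of each Cloud and the winning index."""
    densities = np.array([cloud_membership(x, c) for c in clouds])
    return int(np.argmax(densities)), densities / densities.sum()

rng = np.random.default_rng(1)
clouds = [rng.normal([0.0, 0.0], 0.7, size=(50, 2)),    # Cloud 0 near the origin
          rng.normal([4.0, 4.0], 0.7, size=(50, 2))]    # Cloud 1 near (4, 4)
idx, activation = winning_cloud(np.array([3.5, 4.2]), clouds)
print(idx, np.round(activation, 3))
```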

  6. User's guide for simplified computer models for the estimation of long-term performance of cement-based materials

    International Nuclear Information System (INIS)

    Plansky, L.E.; Seitz, R.R.

    1994-02-01

    This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These are intended to be used when the subroutines are used as subroutines in a larger computer code

  7. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  8. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    Science.gov (United States)

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

    Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) of low-intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, which are approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, a software called NovoTalTM uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.

  9. Experimental and Computational Study of the Flow past a Simplified Geometry of an Engine/Pylon/Wing Installation at low velocity/moderate incidence flight conditions

    Science.gov (United States)

    Bury, Yannick; Lucas, Matthieu; Bonnaud, Cyril; Joly, Laurent; ISAE Team; Airbus Team

    2014-11-01

    We study numerically and experimentally the vortices that develop past a model geometry of a wing equipped with pylon-mounted engine at low speed/moderate incidence flight conditions. For such configuration, the presence of the powerplant installation under the wing initiates a complex, unsteady vortical flow field at the nacelle/pylon/wing junctions. Its interaction with the upper wing boundary layer causes a drop of aircraft performances. In order to decipher the underlying physics, this study is initially conducted on a simplified geometry at a Reynolds number of 200000, based on the chord wing and on the freestream velocity. Two configurations of angle of attack and side-slip angle are investigated. This work relies on unsteady Reynolds Averaged Navier Stokes computations, oil flow visualizations and stereoscopic Particle Image Velocimetry measurements. The vortex dynamics thus produced is described in terms of vortex core position, intensity, size and turbulent intensity thanks to a vortex tracking approach. In addition, the analysis of the velocity flow fields obtained from PIV highlights the influence of the longitudinal vortex initiated at the pylon/wing junction on the separation process of the boundary layer near the upper wing leading-edge.

  10. Conceptual design of pipe whip restraints using interactive computer analysis

    International Nuclear Information System (INIS)

    Rigamonti, G.; Dainora, J.

    1975-01-01

    Protection against pipe break effects necessitates a complex interaction between failure mode analysis, piping layout, and structural design. Many iterations are required to finalize structural designs and equipment arrangements. The magnitude of the pipe break loads transmitted by the pipe whip restraints to structural embedments precludes the application of conservative design margins. A simplified analytical formulation of the nonlinear dynamic problems associated with pipe whip has been developed and applied using interactive computer analysis techniques. In the dynamic analysis, the restraint and the associated portion of the piping system, are modeled using the finite element lumped mass approach to properly reflect the dynamic characteristics of the piping/restraint system. The analysis is performed as a series of piecewise linear increments. Each of these linear increments is terminated by either formation of plastic conditions or closing/opening of gaps. The stiffness matrix is modified to reflect the changed stiffness characteristics of the system and re-started using the previous boundary conditions. The formation of yield hinges are related to the plastic moment of the section and unloading paths are automatically considered. The conceptual design of the piping/restraint system is performed using interactive computer analysis. The application of the simplified analytical approach with interactive computer analysis results in an order of magnitude reduction in engineering time and computer cost. (Auth.)

  11. Quantum Computing: a Quantum Group Approach

    OpenAIRE

    Wang, Zhenghan

    2013-01-01

    There is compelling theoretical evidence that quantum physics will change the face of information science. Exciting progress has been made during the last two decades towards the building of a large scale quantum computer. A quantum group approach stands out as a promising route to this holy grail, and provides hope that we may have quantum computers in our future.

  12. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  13. Simplified P{sub n} transport core calculations in the Apollo3 system

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie; Lautard, Jean-Jacques, E-mail: anne-marie.baudron@cea.fr, E-mail: jean-jacques.lautard@cea.fr [Commissariat a l' Energie Atomique et aux Energies Alternatives, CEA Saclay, Gif-sur-Yvette (France)

    2011-07-01

    This paper describes the development of two different neutronics core solvers based on the Simplified P{sub N} transport (SP{sub N}) approximation, developed in the context of a new-generation nuclear reactor computational system, APOLLO3. Two different approaches have been used. The first one solves the standard SP{sub N} system. In the second approach, the SP{sub N} equations are solved as diffusion equations by treating the SP{sub N} flux harmonics like pseudo energy groups, obtained by a change of variable. These two methods have been implemented for Cartesian and hexagonal geometries in the kinetics solver MINOS. The numerical approximation is based on the mixed dual finite element formulation and the discretization uses the Raviart-Thomas-Nedelec finite elements. For unstructured geometries, the SP{sub N} equations are treated by the S{sub N} transport solver MINARET by considering the second SP{sub N} approach. The MINARET solver is based on a discontinuous Galerkin finite element approximation on cylindrical unstructured meshes composed of a set of conforming triangles in the radial direction. Numerical applications are presented for both solvers in different core configurations (the Jules Horowitz research reactor (JHR) and the Generation IV fast reactor project ESFR). (author)

  14. Simplified polynomial digital predistortion for multimode software defined radios

    DEFF Research Database (Denmark)

    Kardaras, Georgios; Soler, José; Dittmann, Lars

    2010-01-01

    a simplified approach using polynomial digital predistortion in the intermediate frequency (IF) domain. It is fully implementable in software and no hardware changes are required on the digital or analog platform. The adaptation algorithm selected was Least Mean Squares because of its relative simplicity...

  15. Towards the next generation of simplified Dark Matter models

    CERN Document Server

    Albert, Andreas

    This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. Based on two concrete examples, we show how existing simplified DM models (SDMM) can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. This discovery scenario uses the recently observed excess in the high-mass diphoton searches of...

  16. Computational fluid dynamics a practical approach

    CERN Document Server

    Tu, Jiyuan; Liu, Chaoqun

    2018-01-01

    Computational Fluid Dynamics: A Practical Approach, Third Edition, is an introduction to CFD fundamentals and commercial CFD software to solve engineering problems. The book is designed for a wide variety of engineering students new to CFD, and for practicing engineers learning CFD for the first time. Combining an appropriate level of mathematical background, worked examples, computer screen shots, and step-by-step processes, this book walks the reader through modeling and computing, as well as interpreting CFD results. This new edition has been updated throughout, with new content and improved figures, examples and problems.

  17. Cask crush pad analysis using detailed and simplified analysis methods

    International Nuclear Information System (INIS)

    Uldrich, E.D.; Hawkes, B.D.

    1997-01-01

    A crush pad has been designed and analyzed to absorb the kinetic energy of a hypothetically dropped spent nuclear fuel shipping cask into a 44-ft-deep cask unloading pool at the Fluorinel and Storage Facility (FAST). This facility, located at the Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering and Environmental Laboratory (INEEL), is a US Department of Energy site. The basis for this study is an analysis by Uldrich and Hawkes. The purpose of this analysis was to evaluate various hypothetical cask drop orientations to ensure that the crush pad design was adequate and the cask deceleration at impact was less than 100 g. It is demonstrated herein that a large spent fuel shipping cask, when dropped onto a foam crush pad, can be analyzed either by hand methods or by sophisticated dynamic finite element analysis using computer codes such as ABAQUS. Results from the two methods are compared to evaluate the accuracy of the simplified hand analysis approach

  18. Investigation on the optimal simplified model of BIW structure using FEM

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Shojaeefard

    Full Text Available Abstract At conceptual phases of designing a vehicle, engineers need simplified models to examine the structural and functional characteristics and apply custom modifications for achieving the best vehicle design. Using detailed finite-element (FE model of the vehicle at early steps can be very conducive; however, the drawbacks of being excessively time-consuming and expensive are encountered. This leads engineers to utilize trade-off simplified models of body-in-white (BIW, composed of only the most decisive structural elements that do not employ extensive prior knowledge of the vehicle dimensions and constitutive materials. However, the extent and type of simplification remain ambiguous. In fact during the procedure of simplification, one will be in the quandary over which kind of approach and what body elements should be regarded for simplification to optimize costs and time, while providing acceptable accuracy. Although different approaches for optimization of timeframe and achieving optimal designs of the BIW are proposed in the literature, a comparison between different simplification methods and accordingly introducing the best models, which is the main focus of this research, have not yet been done. In this paper, an industrial sedan vehicle has been simplified through four different simplified FE models, each of which examines the validity of the extent of simplification from different points of views. Bending and torsional stiffness are obtained for all models considering boundary conditions similar to experimental tests. The acquired values are then compared to that of target values from experimental tests for validation of the FE-modeling. Finally, the results are examined and taking efficacy and accuracy into account, the best trade-off simplified model is presented.

  19. Simplified method of computation for fatigue crack growth

    International Nuclear Information System (INIS)

    Stahlberg, R.

    1978-01-01

    A procedure is described for drastically reducing the computation time in calculating crack growth for variable-amplitude fatigue loading when the loading sequence is periodic. In the proposed procedure, the crack growth, r, per loading period is approximated as a smooth function and its reciprocal is integrated, rather than summing crack growth cycle by cycle. The savings in computation time result because only a few pointwise values of r must be computed to generate an accurate interpolation function for numerical integration. Further time savings can be achieved by selecting the stress intensity coefficient (stress intensity divided by load) as the argument of r. Once r has been obtained as a function of the stress intensity coefficient for a given material, environment, and loading sequence, it applies to any configuration of cracked structure. (orig.) [de
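    As an illustration of the idea, the sketch below interpolates a handful of pointwise crack-growth values r(a) per loading period and integrates 1/r to estimate the number of loading periods needed to grow the crack, instead of summing cycle by cycle; the sample values and crack-length interval are hypothetical, not data from the report.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad

# Hypothetical pointwise values of r(a): crack growth per loading period,
# computed "exactly" at only a few crack lengths (values are illustrative).
a_samples = np.array([5.0, 10.0, 20.0, 40.0, 60.0])          # crack length [mm]
r_samples = np.array([1e-4, 3e-4, 1.1e-3, 4.5e-3, 1.2e-2])   # growth per period [mm]

# Smooth interpolation of r(a) from the few pointwise values.
r_of_a = interp1d(a_samples, r_samples, kind="cubic")

# Number of loading periods to grow the crack from a0 to af:
#   N = integral_{a0}^{af} da / r(a)
# which replaces the cycle-by-cycle summation of the full calculation.
a0, af = 5.0, 60.0
n_periods, _ = quad(lambda a: 1.0 / float(r_of_a(a)), a0, af)
print(f"estimated number of loading periods: {n_periods:.0f}")
```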

  20. A simplified multi-particle model for lithium ion batteries via a predictor-corrector strategy and quasi-linearization

    International Nuclear Information System (INIS)

    Li, Xiaoyu; Fan, Guodong; Rizzoni, Giorgio; Canova, Marcello; Zhu, Chunbo; Wei, Guo

    2016-01-01

    The design of a simplified yet accurate physics-based battery model enables researchers to accelerate the processes of battery design, aging analysis and remaining-useful-life prediction. In order to reduce the computational complexity of the Pseudo Two-Dimensional mathematical model without sacrificing accuracy, this paper proposes a simplified multi-particle model via a predictor-corrector strategy and quasi-linearization. In this model, a predictor-corrector strategy is used for updating two internal states, in particular for solving the electrolyte concentration approximation, to reduce the computational complexity while preserving a high accuracy of the approximation. Quasi-linearization is applied to the approximations of the Butler-Volmer kinetics equation and the pore wall flux distribution to predict the non-uniform electrochemical reaction effects without using any nonlinear iterative solver. Simulation and experimental results show that both the isothermal model and the model coupled with thermal behavior greatly improve the computational efficiency with almost no loss of accuracy. - Highlights: • A simplified multi-particle model with high accuracy and computation efficiency is proposed. • The electrolyte concentration is solved based on a predictor-corrector strategy. • The non-uniform electrochemical reaction is solved based on quasi-linearization. • The model is verified by simulations and experiments at various operating conditions.

  1. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  2. A highly simplified 3D BWR benchmark problem

    International Nuclear Information System (INIS)

    Douglass, Steven; Rahnema, Farzad

    2010-01-01

    The resurgent interest in reactor development associated with the nuclear renaissance has paralleled significant advancements in computer technology, and allowed for unprecedented computational power to be applied to the numerical solution of neutron transport problems. The current generation of core-level solvers relies on a variety of approximate methods (e.g. nodal diffusion theory, spatial homogenization) to efficiently solve reactor problems with limited computer power; however, in recent years, the increased availability of high-performance computer systems has created an interest in the development of new methods and codes (deterministic and Monte Carlo) to directly solve whole-core reactor problems with full heterogeneity (lattice and core level). This paper presents the development of a highly simplified heterogeneous 3D benchmark problem with physics characteristic of boiling water reactors. The aim of this work is to provide a problem for developers to use to validate new whole-core methods and codes which take advantage of the advanced computational capabilities that are now available. Additionally, eigenvalues and an overview of the pin fission density distribution are provided for the benefit of the reader. (author)

  3. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
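    A minimal sketch of the flavour of this approach, with all names and numbers assumed for illustration (not the authors' actual model): a one-parameter trade schedule is evaluated over Monte Carlo price paths, the objective mixes expected cost with a CVaR-style term, a constraint is handled by a penalty, and the coefficient is found by static optimization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical setup: liquidate X shares over T periods; arithmetic price paths.
X, T, n_paths = 1e5, 10, 2000
p0, sigma, eta = 100.0, 0.5, 1e-6            # initial price, per-period vol, temporary impact
paths = p0 + sigma * rng.standard_normal((n_paths, T)).cumsum(axis=1)

def schedule(theta):
    """One-parameter family of trade schedules: q_t proportional to exp(theta * t)."""
    w = np.exp(theta * np.arange(T))
    return X * w / w.sum()

def objective(params, penalty=1e-3):
    q = schedule(params[0])
    # Implementation-shortfall-like cost per path: price drift plus temporary impact.
    cost = ((p0 - paths) * q + eta * q**2).sum(axis=1)
    cvar = np.mean(np.sort(cost)[-int(0.05 * n_paths):])      # mean of the worst 5% of paths
    max_trade_violation = max(0.0, q.max() - 0.4 * X)         # example constraint via penalty
    return cost.mean() + 0.5 * cvar + penalty * max_trade_violation**2

best = minimize(objective, x0=[0.0], method="Nelder-Mead")
print("optimal schedule:", np.round(schedule(best.x[0]), 1))
```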

  4. Simplifying the parallelization of scientific codes by a function-centric approach in Python

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Cai Xing; Langtangen, Hans Petter; Hoeyland, Bjoern

    2010-01-01

    The purpose of this paper is to show how existing scientific software can be parallelized using a separate thin layer of Python code where all parallelization-specific tasks are implemented. We provide specific examples of such a Python code layer, which can act as templates for parallelizing a wide set of serial scientific codes. The use of Python for parallelization is motivated by the fact that the language is well suited for reusing existing serial codes programmed in other languages. The extreme flexibility of Python with regard to handling functions makes it very easy to wrap up decomposed computational tasks of a serial scientific application as Python functions. Many parallelization-specific components can be implemented as generic Python functions, which may take as input those wrapped functions that perform concrete computational tasks. The overall programming effort needed by this parallelization approach is limited, and the resulting parallel Python scripts have a compact and clean structure. The usefulness of the parallelization approach is exemplified by three different classes of application in natural and social sciences.
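    A minimal sketch of the function-centric idea described above, using the standard-library multiprocessing module in place of whatever message-passing layer the authors used; the task function and the domain decomposition are illustrative stand-ins for a wrapped serial kernel.

```python
import multiprocessing as mp
import numpy as np

def serial_task(chunk):
    """Stand-in for a wrapped serial kernel (e.g. a Fortran/C routine exposed to
    Python): integrate sin(x) over one sub-interval with the trapezoid rule."""
    a, b, n = chunk
    x = np.linspace(a, b, n)
    return np.trapz(np.sin(x), x)

def parallel_driver(task, chunks, n_procs=4):
    """Generic parallelization layer: takes any task function plus a list of
    decomposed work chunks and maps them over a pool of worker processes."""
    with mp.Pool(n_procs) as pool:
        partial_results = pool.map(task, chunks)
    return sum(partial_results)

if __name__ == "__main__":
    edges = np.linspace(0.0, np.pi, 9)                       # decompose [0, pi] into 8 pieces
    chunks = [(edges[i], edges[i + 1], 10_000) for i in range(8)]
    print(parallel_driver(serial_task, chunks))              # ~2.0 = integral of sin on [0, pi]
```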

  5. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water/bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  6. Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.

    Science.gov (United States)

    Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal

    2016-12-01

    In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to the popularity of soft computing approaches, these approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results in comparison to traditional approaches. Soft computing approaches have the ability to adapt themselves to the problem domain. Another aspect is a good balance between the exploration and exploitation processes. These aspects make soft computing approaches more powerful, reliable and efficient. The above-mentioned characteristics make soft computing approaches more suitable and competent for healthcare data. The first objective of this review paper is to identify the various soft computing approaches that are used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, it is found that a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data. Some of these are particle swarm optimization, genetic algorithms, artificial neural networks, support vector machines etc. A detailed discussion of these approaches is presented in the literature section. This work summarizes the various soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized into five different categories based on methodology: classification-model-based systems, expert systems, fuzzy and neuro-fuzzy systems, rule-based systems and case-based systems. Many techniques are discussed in the above-mentioned categories, and all discussed techniques are also summarized in the form of tables. This work also focuses on the accuracy rate of soft computing techniques, and tabular information is provided for

  7. A simplified approach to WWER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    Muehlbauer, P.

    2010-01-01

    The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step in validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of results provided by other participants and results of measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to this simulation as illustrated by the case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organizers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (HP Z800 workstation with Intel Xeon quad-core CPU 3.2 GHz, 32 GB of RAM) and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. Both parts of the geometry were connected via a boundary profile file generated at a cross section where the effect of the grid spacers is still felt but the effect of the outflow boundary condition used in the computations of the inlet part of the geometry is negligible. Computation proceeded in several steps: start with the basic mesh, the standard k-ε model of turbulence with standard wall functions and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10^-3) and, when needed, local adaptation of the near-wall meshes, the realizable k-ε model of turbulence was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperatures at the thermocouple positions and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of spacers was neglected. After results of measurements are available, standard validation

  8. Simplified analysis for liquid pathway studies

    International Nuclear Information System (INIS)

    Codell, R.B.

    1984-08-01

    The analysis of the potential contamination of surface water via groundwater contamination from severe nuclear accidents is routinely performed during licensing reviews. This analysis is facilitated by the methods described in this report, which are codified into a BASIC-language computer program, SCREENLP. This program performs simplified calculations of groundwater and surface water transport and calculates population doses to potential users of the contaminated water, irrespective of possible mitigation methods. The results are then compared to similar analyses performed using data for the generic sites in NUREG-0440, Liquid Pathway Generic Study, to determine whether the site being investigated would pose any unusual liquid pathway hazards

  9. Simplified scheme for radioactive plume calculations

    International Nuclear Information System (INIS)

    Gibson, T.A.; Montan, D.N.

    1976-01-01

    A simplified mathematical scheme to estimate external whole-body γ radiation exposure rates from gaseous radioactive plumes was developed for the Rio Blanco Gas Field Nuclear Stimulation Experiment. The method enables one to calculate swiftly, in the field, downwind exposure rates knowing the meteorological conditions and γ radiation exposure rates measured by detectors positioned near the plume source. The method is straightforward and easy to use under field conditions without the help of mini-computers. It is applicable to a wide range of radioactive plume situations. It should be noted that the Rio Blanco experiment was detonated on May 17, 1973, and no seep or release of radioactive material occurred
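    The report itself is not reproduced here, but the general flavour of such a field scheme can be sketched as below: a measured exposure rate near the source is scaled to downwind distances by the ratio of Gaussian-plume dilution factors. The dispersion coefficients, release height, wind speed and the proportionality between exposure rate and local concentration are all assumptions of this sketch, not the report's actual parameterisation.

```python
import numpy as np

def sigma_yz(x_m):
    """Rough power-law dispersion coefficients for neutral stability
    (illustrative values; the original scheme's parameterisation may differ)."""
    return 0.08 * x_m**0.90, 0.06 * x_m**0.85

def centerline_dilution(x_m, u_wind=5.0, h_release=10.0):
    """Ground-level centerline chi/Q of a Gaussian plume, used only as a
    relative dilution factor between two downwind distances."""
    sy, sz = sigma_yz(x_m)
    return np.exp(-h_release**2 / (2.0 * sz**2)) / (np.pi * u_wind * sy * sz)

# Scale a measured near-source exposure rate to downwind locations, assuming the
# exposure rate is proportional to the local plume concentration.
x_ref, rate_ref = 200.0, 50.0           # detector at 200 m reads 50 (arbitrary units)
for x in (500.0, 1000.0, 2000.0):
    rate = rate_ref * centerline_dilution(x) / centerline_dilution(x_ref)
    print(f"x = {x:6.0f} m : estimated exposure rate ~ {rate:.2f}")
```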

  10. Toward exascale computing through neuromorphic approaches.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  11. Time-dependent simplified PN approximation to the equations of radiative transfer

    International Nuclear Information System (INIS)

    Frank, Martin; Klar, Axel; Larsen, Edward W.; Yasuda, Shugo

    2007-01-01

    The steady-state simplified PN approximation to the radiative transport equation has been successfully applied to many problems involving radiation. This paper presents the derivation of time-dependent simplified PN (SPN) equations (up to N = 3) via two different approaches. First, we use an asymptotic analysis, similar to the asymptotic derivation of the steady-state SPN equations. Second, we use an approach similar to the original derivation of the steady-state SPN equations and we show that both approaches lead to similar results. Special focus is put on the well-posedness of the equations and the question whether it can be guaranteed that the solution satisfies the correct physical bounds. Several numerical test cases are shown, including an analytical benchmark due to Su and Olson [B. Su, G.L. Olson, An analytical benchmark for non-equilibrium radiative transfer in an isotropically scattering medium, Ann. Nucl. Energy 24 (1997) 1035-1055].

  12. Updated thermal model using simplified short-wave radiosity calculations

    International Nuclear Information System (INIS)

    Smith, J.A.; Goltz, S.M.

    1994-01-01

    An extension to a forest canopy thermal radiance model is described that computes the short-wave energy flux absorbed within the canopy by solving simplified radiosity equations describing flux transfers between canopy ensemble classes partitioned by vegetation layer and leaf slope. Integrated short-wave reflectance and transmittance factors obtained from measured leaf optical properties were found to be nearly equal for the canopy studied. Short-wave view factor matrices were approximated by combining the average leaf scattering coefficient with the long-wave view factor matrices already incorporated in the model. Both the updated and original models were evaluated for a dense spruce fir forest study site in Central Maine. Canopy short-wave absorption coefficients estimated from detailed Monte Carlo ray tracing calculations were 0.60, 0.04, and 0.03 for the top, middle, and lower canopy layers corresponding to leaf area indices of 4.0, 1.05, and 0.25. The simplified radiosity technique yielded analogous absorption values of 0.55, 0.03, and 0.01. The resulting root mean square error in modeled versus measured canopy temperatures for all layers was less than 1°C with either technique. Maximum error in predicted temperature using the simplified radiosity technique was approximately 2°C during peak solar heating. (author)

  13. Updated thermal model using simplified short-wave radiosity calculations

    Energy Technology Data Exchange (ETDEWEB)

    Smith, J. A.; Goltz, S. M.

    1994-02-15

    An extension to a forest canopy thermal radiance model is described that computes the short-wave energy flux absorbed within the canopy by solving simplified radiosity equations describing flux transfers between canopy ensemble classes partitioned by vegetation layer and leaf slope. Integrated short-wave reflectance and transmittance factors obtained from measured leaf optical properties were found to be nearly equal for the canopy studied. Short-wave view factor matrices were approximated by combining the average leaf scattering coefficient with the long-wave view factor matrices already incorporated in the model. Both the updated and original models were evaluated for a dense spruce fir forest study site in Central Maine. Canopy short-wave absorption coefficients estimated from detailed Monte Carlo ray tracing calculations were 0.60, 0.04, and 0.03 for the top, middle, and lower canopy layers corresponding to leaf area indices of 4.0, 1.05, and 0.25. The simplified radiosity technique yielded analogous absorption values of 0.55, 0.03, and 0.01. The resulting root mean square error in modeled versus measured canopy temperatures for all layers was less than 1°C with either technique. Maximum error in predicted temperature using the simplified radiosity technique was approximately 2°C during peak solar heating. (author)
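    A minimal sketch of the kind of simplified radiosity solve described in these two records: the short-wave exchange matrix is approximated as the long-wave view-factor matrix scaled by a single average leaf scattering coefficient, and the resulting linear system is solved directly. The three-layer view factors, fluxes and scattering coefficient below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# F[i, j]: fraction of flux scattered by canopy layer j that reaches layer i
# (standing in for the long-wave view-factor matrix already in the model).
F = np.array([[0.10, 0.50, 0.20],
              [0.40, 0.10, 0.30],
              [0.20, 0.40, 0.10]])
rho = 0.15                                # average leaf scattering coefficient (assumed)
E = np.array([300.0, 40.0, 10.0])         # direct short-wave flux reaching each layer [W/m^2]

# B[i]: total short-wave flux incident on layer i = direct term + scattered terms,
#   B = E + rho * F @ B   =>   (I - rho * F) B = E
B = np.linalg.solve(np.eye(3) - rho * F, E)

absorbed = (1.0 - rho) * B                # each layer absorbs what it does not scatter
print("incident flux per layer :", B.round(1))
print("absorbed flux per layer :", absorbed.round(1))
```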

  14. Simplified Model of Safety Determination Process for a Country with its First Operating Nuclear Power Plants

    International Nuclear Information System (INIS)

    Saud, Bin Khadim; Chung, Dae Wook

    2013-01-01

    The two inputs are evaluated and given a color designation based on their safety significance. The performance indicators (PIs) in the ROP program were developed from a very large statistical basis, given operating experience from 100 reactors over a long period of time. The inspection findings are evaluated in terms of changes in core damage frequency using simplified PRA models and, in some cases, more complex models. The aim of this paper is to develop a simplified risk assessment approach for inspection findings which does not use PRA directly, but may use a direct calculation approach. It would thus be helpful for inspectors in determining the safety significance of inspection findings. The objective of this study was to develop a simplified risk assessment approach for inspection findings using a direct risk calculation model to determine the safety significance. A risk and categorization scheme is developed to put each inspection finding into the corresponding ΔCDF category

  15. Simplified Model of Safety Determination Process for a Country with its First Operating Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Saud, Bin Khadim [Korea Advance Institute of Science and Technology, Daejeon (Korea, Republic of); Chung, Dae Wook [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-10-15

    The two inputs are evaluated and given a color designation based on their safety significance. The performance indicators (PIs) in the ROP program were developed from a very large statistical basis, given operating experience from 100 reactors over a long period of time. The inspection findings are evaluated in terms of changes in core damage frequency using simplified PRA models and, in some cases, more complex models. The aim of this paper is to develop a simplified risk assessment approach for inspection findings which does not use PRA directly, but may use a direct calculation approach. It would thus be helpful for inspectors in determining the safety significance of inspection findings. The objective of this study was to develop a simplified risk assessment approach for inspection findings using a direct risk calculation model to determine the safety significance. A risk and categorization scheme is developed to put each inspection finding into the corresponding ΔCDF category.

  16. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    Science.gov (United States)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they make it possible to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first-quantized description of a Dirac fermion. As an application we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  17. Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes

    Directory of Open Access Journals (Sweden)

    Dib M.

    2018-01-01

    Full Text Available Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate compared to the conventional methods. This accuracy stems from the ability of this method to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers prefer to use simplified analytical techniques to design slope-stabilizing piles, due to the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define the adequate pile configuration that offers the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model for the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The validation of the simplified model is presented through a comparative analysis with the fully coupled finite element model.
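    Step (1) of the hybrid method, the analytical estimate of the required resisting force, can be illustrated with a very simple limit-equilibrium sketch for a planar slip surface; the force-balance form and all soil parameters below are assumptions for illustration, not the paper's actual formulation, and step (2) would then size the pile row by finite elements.

```python
import math

def required_pile_force(weight_kN, slope_deg, phi_deg, cohesion_kPa, slip_length_m,
                        fs_target=1.3):
    """Per-metre-of-width force demand on a pile row for a planar slip surface
    (illustrative limit-equilibrium sketch)."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    driving = weight_kN * math.sin(beta)                                      # [kN/m]
    resisting = cohesion_kPa * slip_length_m + weight_kN * math.cos(beta) * math.tan(phi)
    fs0 = resisting / driving
    # Force balance: (resisting + P) / driving = fs_target
    p_req = max(0.0, fs_target * driving - resisting)
    return fs0, p_req

fs0, p_req = required_pile_force(weight_kN=850.0, slope_deg=28.0, phi_deg=22.0,
                                 cohesion_kPa=8.0, slip_length_m=14.0)
print(f"FS without piles = {fs0:.2f}, required pile force ~ {p_req:.0f} kN per metre")
# Choosing the pile diameter, spacing and length that actually deliver this force
# is what the finite-element part of the hybrid method does.
```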

  18. Non-simplified SUSY. {tau}-coannihilation at LHC and ILC

    Energy Technology Data Exchange (ETDEWEB)

    Berggren, M.; Cakir, A.; Krueger, D.; List, J.; Lobanov, A.; Melzer-Pellmann, I.A.

    2013-07-15

    Simplified models have become a widely used and important tool to cover the more diverse phenomenology beyond constrained SUSY models. However, they come with a substantial number of caveats themselves, and great care needs to be taken when drawing conclusions from limits based on the simplified approach. To illustrate this issue with a concrete example, we examine the applicability of simplified model results to a series of full SUSY model points which all feature a small {tau}-LSP mass difference, and are compatible with electroweak and flavor precision observables as well as current LHC results. Various channels have been studied using the Snowmass Combined LHC detector implementation in the Delphes simulation package, as well as the Letter of Intent or Technical Design Report simulations of the ILD detector concept at the ILC. We investigated both the LHC and ILC capabilities for discovery, separation and identification of all parts of the spectrum. While parts of the spectrum would be discovered at the LHC, there is substantial room for further discoveries and property determination at the ILC.

  19. Study of the long-term values and prices of plutonium; a simplified parametrized model

    International Nuclear Information System (INIS)

    Gaussens, J.; Paillot, H.

    1965-01-01

    The authors define the notions of use values and price of plutonium. They give a 'simplified parametrized model' simulating the equilibrium of supply and demand over time for plutonium, and the price deriving from the relative scarcity of this metal, taking into account the technical and economic operating parameters of the various reactors involved. This model is simple enough to allow direct computations and to establish clear relations between the various parameters. The use of the linear programming method moreover allows a wide extension of the model. This report includes three main parts: I - General description of the study (without detailed calculations) II - Mathematical development of the simplified parametrized model and application (the basic data and the results of the calculations are given) III - Appendices (giving the detailed computations of part II). (authors) [fr

  20. Cosmological helium production simplified

    International Nuclear Information System (INIS)

    Bernstein, J.; Brown, L.S.; Feinberg, G.

    1988-01-01

    We present a simplified model of helium synthesis in the early universe. The purpose of the model is to explain clearly the physical ideas relevant to cosmological helium synthesis, in a manner that does not overlay these ideas with complex computer calculations. The model closely follows the standard calculation, except that it neglects the small effect of Fermi-Dirac statistics for the leptons. We also neglect the temperature difference between photons and neutrinos during the period in which neutrons and protons interconvert. These approximations allow us to express the neutron-proton conversion rates in a closed form, which agrees to 10% accuracy or better with the exact rates. Using these analytic expressions for the rates, we reduce the calculation of the neutron-proton ratio as a function of temperature to a simple numerical integral. We also estimate the effect of neutron decay on the helium abundance. Our result for this quantity agrees well with precise computer calculations. We use our semi-analytic formulas to determine how the predicted helium abundance varies with parameters such as the neutron lifetime, the baryon-to-photon ratio, the number of neutrino species, and a possible electron-neutrino chemical potential. 19 refs., 1 fig., 1 tab
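    To give a feel for the reduction described above, the sketch below integrates the neutron fraction X_n through dX_n/dt = -λ_np X_n + λ_pn (1 - X_n) using a closed-form rate of the type derived in the paper; the numerical coefficient of the rate, the time-temperature relation and the crude decay correction are schematic assumptions of this sketch, not values checked against the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

TAU_N = 880.0     # free-neutron lifetime [s]
Q = 1.293         # neutron-proton mass difference [MeV]

def lam_np(T):
    """n -> p weak conversion rate [1/s]; schematic closed form (coefficient assumed)."""
    z = Q / T
    return (255.0 / TAU_N) * (12.0 + 6.0 * z + z**2) / z**5

def lam_pn(T):
    """p -> n rate from detailed balance in the same approximation."""
    return lam_np(T) * np.exp(-Q / T)

C_T = 1.0                                   # assumed t * T^2 constant [s MeV^2]
T_of_t = lambda t: np.sqrt(C_T / t)         # radiation-dominated time-temperature relation

def dXn_dt(t, Xn):
    T = T_of_t(t)
    return -lam_np(T) * Xn + lam_pn(T) * (1.0 - Xn)

T_start, T_end = 3.0, 0.1                   # integrate from equilibrium down to ~0.1 MeV
t0, t1 = C_T / T_start**2, C_T / T_end**2
Xn0 = 1.0 / (1.0 + np.exp(Q / T_start))     # equilibrium starting value
sol = solve_ivp(dXn_dt, (t0, t1), [Xn0], method="LSODA", rtol=1e-8)
Xn = sol.y[0, -1] * np.exp(-t1 / TAU_N)     # crude free-neutron-decay correction
print(f"n/(n+p) near 0.1 MeV (with decay) ~ {Xn:.3f};  rough Y_He ~ {2 * Xn:.3f}")
```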

  1. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  2. A quick, simplified approach to the evaluation of combustion rate from an internal combustion engine indicator diagram

    Directory of Open Access Journals (Sweden)

    Tomić Miroljub V.

    2008-01-01

    Full Text Available In this paper a simplified procedure for the analysis of an internal combustion engine in-cylinder pressure record is presented. The method is very easy to program and provides quick evaluation of the gas temperature and the rate of combustion. It is based on the considerations proposed by Hohenberg and Killman, but enhances the approach by including the rate of heat transferred to the walls, which was omitted in the original approach. It enables the evaluation of the complete rate of heat released by combustion (often designated as the “gross heat release rate” or “fuel chemical energy release rate”), not only the rate of heat transferred to the gas (which is often designated as the “net heat release rate”). The accuracy of the method has also been analyzed, and it is shown that the errors caused by the simplifications in the model are very small, particularly if the crank angle step is also small. Several practical applications to recorded pressure diagrams taken from both spark-ignition and compression-ignition engines are presented as well.
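    The sketch below shows the standard single-zone, first-law form of such an analysis (a generic illustration, not necessarily the exact simplification of the paper): the gross heat-release rate is the net rate from the pressure and volume traces plus the wall heat-transfer rate. The synthetic pressure/volume traces, the value of γ and the wall-loss term are all illustrative assumptions.

```python
import numpy as np

def gross_heat_release_rate(theta, p, V, dQw_dtheta, gamma=1.33):
    """dQ_gross/dtheta = gamma/(gamma-1)*p*dV/dtheta + 1/(gamma-1)*V*dp/dtheta + dQ_wall/dtheta.
    The wall term turns the 'net' rate into the 'gross' (fuel chemical energy) rate."""
    dp = np.gradient(p, theta)
    dV = np.gradient(V, theta)
    dQ_net = gamma / (gamma - 1.0) * p * dV + 1.0 / (gamma - 1.0) * V * dp
    return dQ_net + dQw_dtheta

# Tiny synthetic example (toy volume curve and made-up pressure trace).
theta = np.linspace(-60.0, 60.0, 241)                                    # crank angle [deg]
V = 5e-5 * (1.0 + 4.5 * (1.0 - np.cos(np.radians(theta))))               # toy cylinder volume [m^3]
p = 2e6 * (V[0] / V) ** 1.33 + 3e6 * np.exp(-((theta - 10.0) / 12.0) ** 2)   # pressure [Pa]
dQw = np.full_like(theta, 2.0)                                           # assumed wall loss [J/deg]

dQ = gross_heat_release_rate(theta, p, V, dQw)
i = dQ.argmax()
print(f"peak gross heat-release rate ~ {dQ[i]:.1f} J/deg at {theta[i]:.1f} deg")
```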

  3. Simplified methodology for Angra 1 containment analysis

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1991-08-01

    A simplified methodology of analysis was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The obtained data were compared with the Angra 1 Final Safety Analysis Report and with those calculated by a detailed model. The results obtained by this new methodology, together with the small computational time of the simulation, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author)

  4. Simplified likelihood for the re-interpretation of public CMS results

    CERN Document Server

    The CMS Collaboration

    2017-01-01

    In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.

  5. A simplified approach for evaluating secondary stresses in elevated temperature design

    International Nuclear Information System (INIS)

    Becht, C.

    1983-01-01

    Control of secondary stresses is important for the long-term reliability of components, particularly at elevated temperatures where substantial creep damage can occur and result in cracking. When secondary stresses are considered in the design of elevated-temperature components, they are often addressed by the criteria contained in Nuclear Code Case N-47 for use with elastic or inelastic analysis. The elastic rules are very conservative as they bound a large range of complex phenomena; because of this conservatism, only components in relatively mild service can be designed in accordance with these rules. The inelastic rules, although more accurate, require complex and costly nonlinear analysis. Elevated-temperature shakedown is a recognized phenomenon that has been considered in developing Code rules and simplified methods. This paper develops and examines the implications of using a criterion which specifically limits stresses to the shakedown regime. Creep, fatigue, and strain accumulation are considered. The effect of elastic follow-up on the conservatism of the criterion is quantified by means of a simplified method. The level of conservatism is found to fall between the elastic and inelastic rules of N-47 and, in fact, the incentives for performing complex inelastic analyses appear to be low except in the low-cycle regime. The criterion has immediate applicability to non-code components such as vessel internals in the chemical, petroleum, and synfuels industries. It is suggested that such a criterion be considered in future code rule development

  6. 6-tips diet: a simplified dietary approach in patients with chronic renal disease. A clinical randomized trial.

    Science.gov (United States)

    Pisani, Antonio; Riccio, Eleonora; Bellizzi, Vincenzo; Caputo, Donatella Luciana; Mozzillo, Giusi; Amato, Marco; Andreucci, Michele; Cianciaruso, Bruno; Sabbatini, Massimo

    2016-06-01

    The beneficial effects of dietary restriction of proteins in chronic kidney disease are widely recognized; however, poor compliance with prescribed low-protein diets (LPD) may limit their effectiveness. To help patients adhere to the dietary prescriptions, interventions such as education programmes and dietary counselling are critical, but it is also important to develop simple and attractive approaches to the LPD, especially when dietitians are not available. Therefore, we elaborated a simplified and easy-to-manage dietary approach consisting of 6 tips (6-tip diet, 6-TD) which could replace the standard, non-individualized LPD in Nephrology Units where dietary counselling is not available; hence, our working hypothesis was to evaluate the effects of such a diet vs a standard moderately protein-restricted diet on metabolic parameters and patients' adherence. In this randomized trial, 57 CKD patients stage 3b-5 were randomly assigned (1:1) to receive the 6-TD (Group 6-TD) or an LPD containing 0.8 g/kg/day of proteins (Group LPD) for 6 months. The primary endpoint was to evaluate the effects of the two different diets on the main "metabolic" parameters and on patients' adherence (registration number NCT01865526). Both dietary regimens were associated with a progressive reduction in protein intake and urinary urea excretion compared to baseline, although the decrease was more pronounced in Group 6-TD. Effects on serum levels of urea nitrogen and urinary phosphate excretion were greater in Group 6-TD. Plasma levels of phosphate, bicarbonate and PTH, and urinary NaCl excretion remained stable in both groups throughout the study. 44 % of LPD patients were adherent to the dietary prescription vs 70 % of Group 6-TD patients. A simplified diet, consisting of 6 clear points easily managed by CKD patients, produced beneficial effects both on the metabolic profile of renal disease and on patients' adherence to the dietary plan, when compared to a standard LPD.

  7. Study of the long-term values and prices of plutonium; a simplified parametrized model; Etude des valeurs et des prix du plutonium a long terme; un modele parametre simplifie

    Energy Technology Data Exchange (ETDEWEB)

    Gaussens, J; Paillot, H [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    The authors define the notions of use values and price of plutonium. They give a 'simplified parametrized model' simulating the equilibrium of supply and demand over time for plutonium, and the price deriving from the relative scarcity of this metal, taking into account the technical and economic operating parameters of the various reactors involved. This model is simple enough to allow direct computations and to establish clear relations between the various parameters. The use of the linear programming method moreover allows a wide extension of the model. This report includes three main parts: I - General description of the study (without detailed calculations) II - Mathematical development of the simplified parametrized model and application (the basic data and the results of the calculations are given) III - Appendices (giving the detailed computations of part II). (authors) [French original, translated: The authors define the notions of use value and price of plutonium. They give a 'simplified parametrized model' simulating the equilibrium of supply and demand over time for plutonium and the price resulting from the relative scarcity of this metal, taking into account the technical and economic operating parameters of the various reactors involved. This model is simple enough to allow manual calculations and to establish clear relations between the various parameters. The use of the linear programming technique moreover allows a considerable extension of the model. This note comprises three parts: I - General presentation of the study (without the detail of the calculations) II - Mathematical development of the simplified parametrized model and application (the basic data and the results of the calculations are specified) III - Appendices (giving the details of the calculations of part II). (authors)]

  8. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    Science.gov (United States)

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required for example in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which it has an asymptotically equivalent convergence point of the estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider a feature space which limits the length of the data, and derive the necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Differential forms on singular varieties De Rham and Hodge theory simplified

    CERN Document Server

    Ancona, Vincenzo

    2005-01-01

    Differential Forms on Singular Varieties: De Rham and Hodge Theory Simplified uses complexes of differential forms to give a complete treatment of the Deligne theory of mixed Hodge structures on the cohomology of singular spaces. This book features an approach that employs recursive arguments on dimension and does not introduce spaces of higher dimension than the initial space. It simplifies the theory through easily identifiable and well-defined weight filtrations. It also avoids discussion of cohomological descent theory to maintain accessibility. Topics include classical Hodge theory, differential forms on complex spaces, and mixed Hodge structures on noncompact spaces.

  10. Convergence Analysis of a Class of Computational Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, it is difficult to analyze their convergence. In this paper, a computational model is built for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Then, two quantification indices, the variation rate and the progress rate, are defined to indicate the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  11. Simplified modeling photo-ionisation of uranium in Silva project

    International Nuclear Information System (INIS)

    Bodin, B.; Pourre-Brichot, P.; Valadier, L.

    2000-01-01

    SILVA is a process which targets 235U by photo-ionization. It is therefore important to compute the proportion of ionized atoms depending on the properties of the lasers. The interaction between atoms and lasers occurs via the coupling between the Maxwell and Schroedinger equations. This kind of approach is only feasible for a few simple cases, e.g. a plane wave or a simple laser profile. When the characteristics of SILVA are introduced, the computation time increases substantially (several hundred days per kilogram of vapor). To circumvent this problem, we wrote a program (Jackpot) that treats photo-ionization with a simplified model: kinetic equations. In addition, various optical components were introduced, with an absorption factor per wavelength, to account for the effects of the optics systems on the trajectory. Instead of seeking the complex wavefunctions that solve the Maxwell-Schroedinger equations, we solve a system in which the unknowns are a set of populations. The size of the set depends only on the number of hold points in the process. Recent work shows that we can converge towards the same results as the Maxwell-Schroedinger system if we fit the cross-sections of the kinetic system correctly. As for the optical aspect, Jackpot can handle diffraction; in this case, it solves the propagation equation of an electric field by a double Fourier transform method. For interactions with mirrors, the new direction of a ray is calculated with Descartes' law, applying a numerical phase mask to the electric field. We account for diaphragm (aperturing) effects as well as the absorption law of each mirror, by a real factor per wavelength. Jackpot is simple to use and can be used to predict experimental results. Jackpot is now a library called by a script written in Python. Changes are being made to bring it closer to reality (real laser, new photo-ionization model).
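    A minimal sketch of the kinetic (rate-equation) viewpoint described above: populations of a ground state, two intermediate levels and the ion are coupled by laser-driven rates σΦ during a square pulse. The three-step ladder, cross-sections, photon fluxes and pulse length are illustrative assumptions, not the Jackpot model or its data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-step ladder: ground -> e1 -> e2 -> ion.
sigma = np.array([1e-13, 1e-13, 5e-17])   # cross-sections [cm^2] (assumed)
flux = np.array([1e22, 1e22, 1e22])       # photon fluxes [1/cm^2/s] (assumed)
rates = sigma * flux                       # stimulated rate of each step [1/s]

def pulse(t, t_off=50e-9):
    """Square temporal profile shared by the three (co-propagating) lasers."""
    return 1.0 if 0.0 <= t <= t_off else 0.0

def kinetics(t, n):
    g, e1, e2, ion = n                     # ground, two excited levels, ion fraction
    r = rates * pulse(t)
    return [-r[0] * g,
            r[0] * g - r[1] * e1,
            r[1] * e1 - r[2] * e2,
            r[2] * e2]

sol = solve_ivp(kinetics, (0.0, 100e-9), [1.0, 0.0, 0.0, 0.0],
                method="LSODA", max_step=1e-9)
print(f"ionised fraction after the pulse ~ {sol.y[3, -1]:.3f}")
```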

  12. A Novel Interference Detection Method of STAP Based on Simplified TT Transform

    Directory of Open Access Journals (Sweden)

    Qiang Wang

    2017-01-01

    Full Text Available Contamination of training samples by target-like signals is one of the major causes of an inhomogeneous clutter environment. In such an environment, the clutter covariance matrix in STAP (space-time adaptive processing) is estimated inaccurately, which ultimately degrades detection performance. To address this problem, a STAP interference detection method based on a simplified TT (time-time) transform is proposed in this letter. Considering the sparse physical property of clutter in the space-time plane, the data of each range cell are first converted into a discrete slow-time series. Then, the expression of the simplified TT transform of the sample data is derived step by step. Thirdly, the energy of each training sample is focused and extracted by the simplified TT transform, exploiting the difference in energy between the uncontaminated and contaminated stages, and the physical significance of discarding the contaminated samples is analyzed. Lastly, the contaminated samples are picked out on the basis of the simplified TT transform-spectrum difference. Monte Carlo simulation results indicate that when training samples are contaminated by high-power target-like signals, the proposed method is more effective in removing the contaminated samples, reduces the computational complexity significantly, and improves the target detection performance compared with the GIP (generalized inner product) method.

  13. Cultural Distance-Aware Service Recommendation Approach in Mobile Edge Computing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2018-01-01

    Full Text Available In the era of big data, traditional computing systems and paradigms are inefficient and even difficult to use. For high-performance big-data processing, mobile edge computing is emerging as a complementary framework to cloud computing. In this new computing architecture, services are provided within close proximity of mobile users by servers at the edge of the network. The traditional collaborative filtering recommendation approach only focuses on the similarity extracted from the rating data, which may lead to an inaccurate expression of user preference. In this paper, we propose a cultural distance-aware service recommendation approach which focuses not only on the similarity but also on the local characteristics and preferences of users. Our approach employs the cultural distance to express the user preference and combines it with similarity to predict the user ratings and recommend the services with higher ratings. In addition, considering the extreme sparsity of the rating data, missing-rating prediction based on collaborative filtering is introduced in our approach. The experimental results based on real-world datasets show that our approach outperforms the traditional recommendation approaches in terms of the reliability of recommendation.
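    A minimal sketch of similarity plus cultural-distance weighting for rating prediction: neighbours' ratings are combined with weights that blend collaborative-filtering similarity and cultural closeness. The combination rule, the toy rating matrix and the cultural-distance values are assumptions for illustration, not the formula or data of the paper.

```python
import numpy as np

ratings = np.array([[5, 3, 0, 1],        # 0 means "not rated"
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)
cultural_distance = np.array([0.0, 0.1, 0.8, 0.7])   # distance of each user from user 0 (assumed)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)             # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item, alpha=0.5):
    """Predicted rating: neighbours weighted by rating similarity blended with cultural closeness."""
    num = den = 0.0
    for v in range(ratings.shape[0]):
        if v == user or ratings[v, item] == 0:
            continue
        w = alpha * cosine_sim(ratings[user], ratings[v]) \
            + (1 - alpha) * (1.0 - cultural_distance[v])
        num += w * ratings[v, item]
        den += w
    return num / den if den else 0.0

print(f"predicted rating of item 2 for user 0: {predict(0, 2):.2f}")
```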

  14. A comparative approach to closed-loop computation.

    Science.gov (United States)

    Roth, E; Sponberg, S; Cowan, N J

    2014-04-01

    Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology-that is, how information can flow between subsystems-provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. [Misinterpretation of the anteversion in computer-assisted acetabular cup navigation as a result of a simplified palpation method of the frontal pelvic plane].

    Science.gov (United States)

    Richolt, J A; Rittmeister, M E

    2006-01-01

    Computer-assisted navigation of the acetabular cup in THR requires reliable digitalisation of bony landmarks defining the frontal pelvic plane by user-driven palpation. According to the system recommendations, the subcutaneous fat should be held aside during epicutaneous digitalisation. To improve intraoperative practicability this is often neglected in the symphysis area; in these cases the fat is merely compressed rather than pushed aside. In this study, soft-tissue thickness was assessed by ultrasound and pelvic geometry was measured in 72 patients to quantify the potential misinterpretation of cup anteversion caused by the simplified palpation. As reference we used data from the same patients acquired by the recommended palpation. Anteversion misinterpretation averaged 8.2 degrees, with extremes from 2 to 24 degrees. There were no correlations between soft-tissue thickness or misinterpretation and body weight, height, or pelvic size. Anteversion misinterpretation was highly significantly worse compared with the reference data. In 31% of the patients the anteversion reported by a navigation system would have been wrong by over 10 degrees, and in 81% by over 5 degrees. Therefore, the simplified palpation should not be utilized; for epicutaneous digitalisation of the bony landmarks it is mandatory to push the subcutaneous fat aside.

  16. Comparison of Detailed and Simplified Models of Human Atrial Myocytes to Recapitulate Patient Specific Properties.

    Directory of Open Access Journals (Sweden)

    Daniel M Lombardo

    2016-08-01

    Full Text Available Computer studies are often used to study mechanisms of cardiac arrhythmias, including atrial fibrillation (AF). A crucial component in these studies is the electrophysiological model that describes the membrane potential of myocytes. The models vary from detailed, describing numerous ion channels, to simplified, grouping ionic channels into a minimal set of variables. The parameters of these models, however, are determined across different experiments in varied species. Furthermore, a single set of parameters may not describe variations across patients, and models have rarely been shown to recapitulate critical features of AF in a given patient. In this study we develop physiologically accurate computational human atrial models by fitting parameters of a detailed and of a simplified model to clinical data for five patients undergoing ablation therapy. Parameters were simultaneously fitted to action potential (AP) morphology, action potential duration (APD) restitution, and conduction velocity (CV) restitution curves in these patients. For both models, our fitting procedure generated parameter sets that accurately reproduced clinical data, but differed markedly from published sets and between patients, emphasizing the need for patient-specific adjustment. Both models produced two-dimensional spiral wave dynamics that were similar for each patient. These results show that simplified, computationally efficient models are an attractive choice for simulations of human atrial electrophysiology in spatially extended domains. This study motivates the development and validation of patient-specific model-based mechanistic studies to target therapy.
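
    A minimal sketch of this kind of parameter fitting, reduced to a single APD restitution curve and a toy two-parameter restitution function; the model form and the data points are assumptions and stand in for neither the detailed nor the simplified atrial model.

      # Sketch: least-squares fit of model parameters to a clinical APD
      # restitution curve. The exponential restitution form and the data are
      # illustrative assumptions, not the detailed/simplified atrial models.
      import numpy as np
      from scipy.optimize import least_squares

      def apd_restitution(di, apd_max, tau):
          """Toy restitution: APD (ms) as a function of diastolic interval (ms)."""
          return apd_max * (1.0 - np.exp(-di / tau))

      di_meas = np.array([50., 100., 150., 200., 300., 400.])
      apd_meas = np.array([140., 190., 215., 230., 245., 250.])

      def residuals(p):
          return apd_restitution(di_meas, *p) - apd_meas

      fit = least_squares(residuals, x0=[250.0, 100.0], bounds=([50, 10], [500, 500]))
      print("fitted APD_max, tau:", fit.x)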

  17. Simplified scaling model for the THETA-pinch

    Science.gov (United States)

    Ewing, K. J.; Thomson, D. B.

    1982-02-01

    A simple 1D scaling model for the fast Theta pinch was developed and written as a code that would be flexible, inexpensive in computer time, and readily available for use with the Los Alamos explosive-driven high-magnetic-field program. The simplified model uses three successive separate stages: (1) a snowplow-like radial implosion, (2) an idealized resistive annihilation of the reverse bias field, and (3) an adiabatic compression stage of a Beta = 1 plasma for which ideal pressure balance is assumed to hold. The code uses one adjustable fitting constant whose value was first determined by comparison with results from the Los Alamos Scylla III, Scyllacita, and Scylla IA Theta pinches.

  18. Computational Approaches to Nucleic Acid Origami.

    Science.gov (United States)

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami, with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. Despite these advantages, the development of RNA origami lags behind that of DNA origami by a large gap. Furthermore, although computational methods have proven effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  19. Representing solute transport through the multi-barrier disposal system by simplified concepts

    International Nuclear Information System (INIS)

    Poteri, A.; Nordman, H.; Pulkkanen, V-M.; Kekaelaeinen, P.; Hautojaervi, A.

    2012-02-01

    The repository system chosen in Finland for spent nuclear fuel is composed of multiple successive transport barriers. If a waste canister is leaking, this multi-barrier system retards and limits the release rates of radionuclides into the biosphere. Analysis of radionuclide migration in the previous performance assessments has largely been based on numerical modelling of the repository system. The simplified analytical approach introduced here provides a tool to analyse the performance of the whole system using simplified representations of the individual transport barriers. This approach is based on the main characteristics of the individual barriers and on the generic nature of the coupling between successive barriers. In the case of an underground repository, the mass transfer between successive transport barriers is strongly restricted by the interfaces between barriers, leading to well-mixed conditions in these barriers. The approach here simplifies the barrier system so that it can be described with a very simple compartment model, where each barrier is represented by a single compartment or, in the case of the buffer, by not more than two compartments. This system of compartments can be solved in analogy with a radioactive decay chain. The model of well-mixed compartments lends itself to a very descriptive way of representing and analysing the barrier system, because the relative efficiency of the different barriers in hindering transport of solutes can be parameterised by the solutes' half-times in the corresponding compartments. In a real repository system there will also be a delay between the start of the inflow and the start of the outflow from a barrier. This delay can be important for the release rates of short-lived and sorbing radionuclides, and it was also included in the simplified representation of the barrier system. In a geological multi-barrier system, spreading of the outflowing release pulse is often governed by the typical behaviour of one transport barrier.

  20. Representing solute transport through the multi-barrier disposal system by simplified concepts

    Energy Technology Data Exchange (ETDEWEB)

    Poteri, A.; Nordman, H.; Pulkkanen, V-M. [VTT Technical Research Centre of Finland, Espoo (Finland)]; Kekaelaeinen, P. [Jyvaeskylae Univ. (Finland). Dept. of Physics]; Hautojaervi, A.

    2012-02-15

    The repository system chosen in Finland for spent nuclear fuel is composed of multiple successive transport barriers. If a waste canister is leaking, this multi-barrier system retards and limits the release rates of radionuclides into the biosphere. Analysis of radionuclide migration in the previous performance assessments has largely been based on numerical modelling of the repository system. The simplified analytical approach introduced here provides a tool to analyse the performance of the whole system using simplified representations of the individual transport barriers. This approach is based on the main characteristics of the individual barriers and on the generic nature of the coupling between successive barriers. In the case of an underground repository, the mass transfer between successive transport barriers is strongly restricted by the interfaces between barriers, leading to well-mixed conditions in these barriers. The approach here simplifies the barrier system so that it can be described with a very simple compartment model, where each barrier is represented by a single compartment or, in the case of the buffer, by not more than two compartments. This system of compartments can be solved in analogy with a radioactive decay chain. The model of well-mixed compartments lends itself to a very descriptive way of representing and analysing the barrier system, because the relative efficiency of the different barriers in hindering transport of solutes can be parameterised by the solutes' half-times in the corresponding compartments. In a real repository system there will also be a delay between the start of the inflow and the start of the outflow from a barrier. This delay can be important for the release rates of short-lived and sorbing radionuclides, and it was also included in the simplified representation of the barrier system. In a geological multi-barrier system, spreading of the outflowing release pulse is often governed by the typical behaviour of one transport barrier.
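
    The compartment-chain analogy with a radioactive decay chain can be made concrete in a few lines; the number of compartments and the transfer half-times below are invented, and radioactive decay, sorption, and the inflow/outflow delay discussed above are omitted.

      # Sketch of the well-mixed compartment chain: each barrier i passes solute
      # to the next with rate k_i = ln(2)/T_half_i, in analogy with a decay chain.
      # Half-times are assumed; decay, sorption, and delays are neglected here.
      import numpy as np
      from scipy.integrate import solve_ivp

      half_times_yr = [1.0e2, 5.0e3, 1.0e4]            # assumed barrier half-times
      k = np.log(2.0) / np.array(half_times_yr)

      def chain(t, m):
          dm = np.zeros_like(m)
          dm[0] = -k[0] * m[0]
          for i in range(1, len(m)):
              dm[i] = k[i - 1] * m[i - 1] - k[i] * m[i]
          return dm

      sol = solve_ivp(chain, (0.0, 1.0e5), [1.0, 0.0, 0.0], max_step=50.0)
      release_rate = k[-1] * sol.y[-1]                  # outflow from the last barrier
      print("peak normalized release rate:", release_rate.max())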

  1. Anatomical recommendations for safe botulinum toxin injection into temporalis muscle: a simplified reproducible approach.

    Science.gov (United States)

    Lee, Won-Kang; Bae, Jung-Hee; Hu, Kyung-Seok; Kato, Takafumi; Kim, Seong-Taek

    2017-03-01

    The objective of this study was to simplify the anatomically safe and reproducible approach for BoNT injection and to generate a detailed topographic map of the important anatomical structures of the temporal region by dividing the temporalis into nine equally sized compartments. Nineteen sides of temporalis muscle were used. The topographies of the superficial temporal artery, middle temporal vein, temporalis tendon, and temporalis muscle were evaluated. Also evaluated were the positional relations among the foregoing anatomical structures in the temporalis muscle, referred to the grid of nine compartments. The temporalis above the zygomatic arch exhibited an oblique quadrangular shape with rounded upper right and left corners. The distance between the anterior and posterior margins of the temporalis muscle was equal to the width of the temporalis rectangle, and the distance between the reference line and the superior temporalis margin was equal to its height. The mean ratio of width to height was 5:4. We recommend compartments Am, Mu, and Pm (coordinates of the rectangular outline) as areas of the temporal region for BoNT injection, because using these sites will avoid large blood vessels and tendons, thus improving the safety and reproducibility of the injection.

  2. Simplified phenomenology for colored dark sectors

    Energy Technology Data Exchange (ETDEWEB)

    Hedri, Sonia El; Kaminska, Anna; Vries, Maikel de [PRISMA Cluster of Excellence & Mainz Institute for Theoretical Physics,Johannes Gutenberg University,55099 Mainz (Germany); Zurita, Jose [Institute for Nuclear Physics (IKP), Karlsruhe Institute of Technology,Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Institute for Theoretical Particle Physics (TTP), Karlsruhe Institute of Technology,Engesserstraße 7, D-76128 Karlsruhe (Germany)

    2017-04-20

    We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.

  3. Study of structural attachments of a pool type LMFBR vessel through seismic analysis of a simplified three dimensional finite element model

    International Nuclear Information System (INIS)

    Ahmed, H.; Ma, D.

    1979-01-01

    A simplified three-dimensional finite element model of a pool-type LMFBR, in conjunction with the computer program ANSYS, is developed and scoping results of seismic analysis are produced. Through this study, various structural attachments of a pool-type LMFBR, such as the reactor vessel skirt support, the pump support, and the reactor shell-support structure interfaces, are studied. The study also provides some useful results on the equivalent viscous damping approach, and some improvements to the treatment of equivalent viscous damping are recommended. It also sets forth pertinent guidelines for detailed three-dimensional finite element seismic analysis of pool-type LMFBRs.

  4. Computational neuropharmacology: dynamical approaches in drug discovery.

    Science.gov (United States)

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  5. Simplified probabilistic risk assessment in fuel reprocessing

    International Nuclear Information System (INIS)

    Solbrig, C.W.

    1993-01-01

    An evaluation was made to determine whether a backup mass-tracking computer would significantly reduce the probability of criticality in the fuel reprocessing of the Integral Fast Reactor. Tradeoff studies such as this must often be made, and they would greatly benefit from a Probabilistic Risk Assessment (PRA). The major benefits of a complete PRA can often be accrued with a Simplified Probabilistic Risk Assessment (SPRA). An SPRA was performed by selecting a representative fuel reprocessing operation (moving a piece of fuel) for analysis. It showed that the benefit of adding parallel computers was small compared to the benefit which could be obtained by adding parallelism to two computer input steps and two of the weighing operations. The probability of an incorrect material move with the basic process is estimated to be 4 out of 100 moves; the actual probability values are considered accurate to within an order of magnitude. The most useful result of developing the fault trees accrues from the ability to determine where significant improvements in the process can be made. By including the above-mentioned parallelism, the error move rate can be reduced to 1 out of 1000.
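
    The kind of tradeoff examined can be illustrated with elementary fault-tree arithmetic: independent error probabilities combine (OR) over the series of steps in a move, and making a step redundant requires both checks to fail (AND). The step names and probabilities below are invented for illustration only.

      # Elementary fault-tree arithmetic: probability that at least one step fails
      # (OR of independent basic events), and the effect of making two input steps
      # and two weighing steps redundant (AND of two checks). Numbers are illustrative.
      def p_or(ps):
          q = 1.0
          for p in ps:
              q *= (1.0 - p)
          return 1.0 - q

      p_steps = {"input entry 1": 1e-2, "input entry 2": 1e-2,
                 "weighing 1": 1e-2, "weighing 2": 1e-2}

      baseline = p_or(p_steps.values())
      improved = p_or([p * p for p in p_steps.values()])   # each step double-checked

      print(f"baseline error-move probability : {baseline:.4f}")
      print(f"with redundant checks           : {improved:.6f}")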

  6. An Innovative Approach to Balancing Chemical-Reaction Equations: A Simplified Matrix-Inversion Technique for Determining The Matrix Null Space

    OpenAIRE

    Thorne, Lawrence R.

    2011-01-01

    I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
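
    A compact version of the null-space idea described above: build the element-by-species composition matrix (with product columns negated), compute its null space, and clear fractions to obtain integer coefficients. The example reaction and the use of sympy are illustrative choices, not the author's spreadsheet procedure.

      # Balance a chemical equation via the null space of the composition matrix.
      # Example: CH4 + O2 -> CO2 + H2O (product columns entered with negative sign).
      from sympy import Matrix, lcm

      # rows: C, H, O ; columns: CH4, O2, CO2, H2O
      A = Matrix([[1, 0, -1,  0],    # carbon balance
                  [4, 0,  0, -2],    # hydrogen balance
                  [0, 2, -2, -1]])   # oxygen balance

      v = A.nullspace()[0]                        # one-dimensional null space here
      v = v * lcm([term.q for term in v])         # clear fractions -> integer coefficients
      print(v.T)                                  # Matrix([[1, 2, 1, 2]])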

  7. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  8. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well-connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, rather than a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  9. SHIVGAMI : Simplifying tHe titanIc blastx process using aVailable GAthering of coMputational unIts

    Directory of Open Access Journals (Sweden)

    Naman Mangukia

    2017-10-01

    Full Text Available Assembling novel genomes from scratch is a never-ending process unless and until Homo sapiens has covered all living organisms! On top of that, this de novo approach is employed by RNA-Seq and metagenomics analyses. Functional identification of the scaffolds or transcripts from such draft assemblies is a substantial step that routinely employs the well-known BlastX program, which lets a user search a DNA query against the NCBI protein (NR, ~120 GB) database. In spite of having a multicore-processing option, BlastX is a lengthy process for bulk, lengthy query inputs. Tremendous efforts are constantly being applied to solve this problem by increasing computational power through GPU-based computing, cloud computing, and Hadoop-based approaches, which ultimately require gigantic costs in terms of money and processing. To address this issue, we have come up with SHIVGAMI, which automates the entire process using Perl and shell scripts that divide, distribute, and process the input FASTA sequences among the individual computational units according to their CPU-core availability. A Linux operating system, the NR database, and a BlastX installation are prerequisites for each system. The beauty of this stand-alone automation program SHIVGAMI is that it requires the LAN connection exactly twice: during 'query distribution' and at 'process completion'. In the initial phase, it divides the FASTA sequences according to each computer's core capability. It then distributes the data, along with small automation scripts that run the BlastX process, to the respective computational units, which send their result files back to the master computer. The master computer finally combines and compiles the files into a single result. This simple automation converts a computer lab into a grid without investment in any additional software, hardware, or manpower. In short, SHIVGAMI is a time- and cost-saving tool for all users, starting from commercial firms
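
    The core splitting step, dividing the input FASTA records in proportion to each machine's core count, can be sketched in a few lines of Python; host names and core counts are assumptions, and the actual SHIVGAMI automation is written in Perl and shell scripts.

      # Sketch of the splitting step: divide FASTA records among hosts in
      # proportion to their core counts. Host names and core counts are assumed.
      from itertools import accumulate

      def read_fasta(path):
          """Parse a FASTA file into (header, sequence) tuples."""
          records, header, seq = [], None, []
          with open(path) as fh:
              for line in fh:
                  if line.startswith(">"):
                      if header is not None:
                          records.append((header, "".join(seq)))
                      header, seq = line.rstrip(), []
                  else:
                      seq.append(line.strip())
          if header is not None:
              records.append((header, "".join(seq)))
          return records

      def split_by_cores(records, hosts):
          """Split records proportionally to each host's core count."""
          total = sum(hosts.values())
          counts = [round(len(records) * c / total) for c in hosts.values()]
          counts[-1] = len(records) - sum(counts[:-1])      # absorb rounding error
          bounds = [0] + list(accumulate(counts))
          return {h: records[bounds[i]:bounds[i + 1]] for i, h in enumerate(hosts)}

      hosts = {"node01": 8, "node02": 4, "node03": 16}      # assumed core counts
      records = [(f">seq{i}", "ACGT" * 25) for i in range(280)]  # stand-in for read_fasta(...)
      for host, recs in split_by_cores(records, hosts).items():
          print(host, len(recs), "records")                 # 80, 40, 160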

  10. On the balanced blending of formally structured and simplified approaches for utilizing judgments of experts in the assessment of uncertain issues

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Yang, Joon Eon; Ha, Jae Joo

    2003-01-01

    Expert judgment is frequently employed in the search for solutions to various engineering and decision-making problems where relevant data are not sufficient or where there is little consensus as to the correct models to apply. When expert judgments are required to solve the underlying problem, our main concern is how to formally derive the experts' technical expertise and their personal degree of familiarity with the related questions. Formal methods for gathering judgments from experts and assessing the effects of the judgments on the results of the analysis have been developed in a variety of ways. The most important interest of such methods is to establish the robustness of an expert's knowledge, upon which the elicitation of judgments is made, and as effective a trace of the elicitation process as possible. While the resultant expert judgments can remain to a large extent substantiated with formal elicitation methods, their applicability is often limited by restrictions on available resources (e.g., time, budget, and number of qualified experts) as well as the scope of the analysis. For this reason, many engineering and decision-making problems have not always been treated with a formal, structured pattern, but rather have relied on a pertinent transition from the formal process to a simplified approach. The purpose of this paper is (a) to address some insights into the balanced use of formally structured and simplified approaches for the explicit use of expert judgments under resource constraints and (b) to discuss related decision-theoretic issues.

  11. Simplified methodology for analysis of Angra-1 containment

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1988-01-01

    A simplified methodology of analysis was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The obtained data were compared with the Angra 1 Final Safety Analysis Report and with those calculated by a detailed model. The results obtained with this new methodology, together with the short computational simulation time, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author) [pt

  12. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 1. lithium concentration estimation

    Science.gov (United States)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    To guarantee the safety, high efficiency and long lifetime of a lithium-ion battery, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is briefly computable and preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds and De) and the particle radius. Simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with 0.8% and 0.24% modeling error respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.
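
    The Padé step in isolation can be reproduced with standard tools: take the Taylor coefficients of a transcendental transfer function and form a low-order rational approximation. The tanh-type finite-diffusion function below is a stand-in, not the paper's derived electrolyte-phase solution.

      # Sketch of the model-order reduction step: Padé approximation of a
      # transcendental (diffusion-type) transfer function. The tanh-form function
      # is a stand-in for the paper's derived transcendental impedance.
      import sympy as sp
      from scipy.interpolate import pade

      s = sp.symbols('s')
      G = sp.tanh(sp.sqrt(s)) / sp.sqrt(s)          # transcendental "impedance"
      order = 6
      taylor = sp.series(G, s, 0, order + 1).removeO()
      coeffs = [float(taylor.coeff(s, k)) for k in range(order + 1)]

      p, q = pade(coeffs, 3)                        # [3/3] rational approximation
      print("numerator coeffs  :", p.coeffs)
      print("denominator coeffs:", q.coeffs)

      s0 = 0.5                                      # compare at a test point
      print("exact:", float(G.subs(s, s0)), " Pade:", p(s0) / q(s0))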

  13. Simplified stock markets described by number operators

    Science.gov (United States)

    Bagarello, F.

    2009-06-01

    In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economical context and we discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolio of the various traders of the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed point like approximation.

  14. An OOP Approach to Simplify MDI Application Development

    Directory of Open Access Journals (Sweden)

    Donato Hernández Fusilier

    2012-02-01

    Full Text Available The Multiple Document Interface (MDI) is a Microsoft Windows specification that allows managing multiple documents using a single graphic interface application. An MDI application allows opening several documents simultaneously, with only one document active at a particular time. MDI applications can be deployed using Win32 or the Microsoft Foundation Classes (MFC). Programs developed using Win32 are faster than those using MFC; however, Win32 applications are difficult to implement and prone to errors. It should be mentioned that learning how to properly use MFC to deploy MDI applications is not simple, and performance is typically worse than that of Win32 applications. A method to simplify the development of MDI applications using Object-Oriented Programming (OOP) is proposed. It is then shown that this method generates compact code that is easier to read and maintain than other methods (i.e., MFC). Finally, it is demonstrated that the proposed method allows the rapid development of MDI applications without sacrificing application performance.

  15. Office of River Protection: Simplifying Project management tools

    International Nuclear Information System (INIS)

    TAYLOR, D.G.

    2000-01-01

    The primary approach to the effort was to form a multi-organizational team composed of federal and contractor staff to develop and implement the necessary tools and systems to manage the project. In late 1999 the DOE Manager of the Office of River Protection formed the Project Integration Office to achieve the objective of managing the efforts as a single project. The first major task, and the foundation upon which to base the development of all other tools, was the establishment of a single baseline of activities. However, defining a single scope, schedule, and cost was a difficult matter indeed. Work scopes were available throughout the project, but the level of detail and the integration of the activities existed primarily between working groups and individuals and not on a project-wide basis. This creates a situation where technical needs, logic flaws, resource balancing, and other similar integration needs are not elevated for management attention and resolution. It should be noted that probably 90% of the interface issues were known and being addressed; the key is simplifying the process and providing tangible assurance that the other 10% does not contain issues that can delay the project. Fortunately, all of the contractors employed a common scheduling tool, which served as the basis for first communicating and then integrating baseline activities. Utilizing a powerful computer-based scheduling tool, it was soon possible to integrate the various schedules after the following was accomplished: establishment of a scheduling specification (standardized input, coding, and approach to logic); and clearly defined project assumptions.

  16. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  17. Simplified Models for Dark Matter and Missing Energy Searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Abdallah, Jalal [Academia Sinica, Taipei (Taiwan). Inst. of Physics; Ashkenazi, Adi [Tel Aviv Univ. (Israel). Dept. of Physics; Boveia, Antonio [Univ. of Chicago, IL (United States). Enrico Fermi Inst.; Busoni, Giorgio [International School for Advanced Studies (SISSA), Trieste (Italy); National Inst. for Nuclear Physics (INFN), Trieste (Italy); De Simone, Andrea [International School for Advanced Studies (SISSA), Trieste (Italy); National Inst. for Nuclear Physics (INFN), Trieste (Italy); Doglioni, Caterina [Univ. of Geneva (Switzerland). Physics Dept.; Efrati, Aielet [Weizmann Inst. of Science, Rehovot (Israel). Dept. of Particle Physics and Astrophysics; Etzion, Erez [Tel Aviv Univ. (Israel). Dept. of Physics; Gramling, Johanna [Univ. of Geneva (Switzerland). Physics Dept.; Jacques, Thomas [Univ. of Geneva (Switzerland). Physics Dept.; Lin, Tongyan [Univ. of Chicago, IL (United States). Kavli Inst. for Cosmological Physics. Enrico Fermi Inst.; Morgante, Enrico [Univ. of Geneva (Switzerland). Physics Dept.; Papucci, Michele [Univ. of California, Berkeley, CA (United States). Berkeley Center for Theoretical Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Theoretical Physics Group; Penning, Bjoern [Univ. of Chicago, IL (United States). Enrico Fermi Inst.; Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Riotto, Antonio Walter [Univ. of Geneva (Switzerland). Physics Dept.; Rizzo, Thomas [SLAC National Accelerator Lab., Menlo Park, CA (United States); Salek, David [National Inst. for Subatomic Physics (NIKHEF), Amsterdam (Netherlands); Gravitation and AstroParticle Physics in Amsterdam (GRAPPA), Amsterdam (Netherlands); Schramm, Steven [Univ. of Toronto, ON (Canada). Dept. of Physics; Slone, Oren [Tel Aviv Univ. (Israel). Dept. of Physics; Soreq, Yotam [Weizmann Inst. of Science, Rehovot (Israel). Dept. of Particle Physics and Astrophysics; Vichi, Alessandro [Univ. of California, Berkeley, CA (United States). Berkeley Center for Theoretical Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Theoretical Physics Group; Volansky, Tomer [Tel Aviv Univ. (Israel). Dept. of Physics; Yavin, Itay [Perimeter Inst. for Theoretical Physics, Waterloo, ON (Canada); McMaster Univ., Hamilton, ON (Canada). Dept. of Physics; Zhou, Ning [Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy; Zurek, Kathryn [Univ. of California, Berkeley, CA (United States). Berkeley Center for Theoretical Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Theoretical Physics Group

    2014-10-01

    The study of collision events with missing energy as searches for the dark matter (DM) component of the Universe is an essential part of the extensive program looking for new physics at the LHC. Given the unknown nature of DM, the interpretation of such searches should be made broad and inclusive. This report reviews the usage of simplified models in the interpretation of missing energy searches. We begin with a brief discussion of the utility and limitations of the effective field theory approach to this problem. The bulk of the report is then devoted to several different simplified models and their signatures, including s-channel and t-channel processes. A common feature of simplified models for DM is the presence of additional particles that mediate the interactions between the Standard Model and the particle that makes up DM. We consider these in detail and emphasize the importance of their inclusion as final states in any coherent interpretation. We also review some of the experimental progress in the field, new signatures, and other aspects of the searches themselves. We conclude with comments and recommendations regarding the use of simplified models in Run-II of the LHC.

  18. Simplified Models for Dark Matter and Missing Energy Searches at the LHC

    International Nuclear Information System (INIS)

    Abdallah, Jalal; De Simone, Andrea; Doglioni, Caterina; Riotto, Antonio Walter; Salek, David; Schramm, Steven; Slone, Oren; Soreq, Yotam; Vichi, Alessandro; Lawrence Berkeley National Lab.; Volansky, Tomer; Yavin, Itay; McMaster Univ., Hamilton, ON; Zhou, Ning; Zurek, Kathryn; Lawrence Berkeley National Lab.

    2014-01-01

    The study of collision events with missing energy as searches for the dark matter (DM) component of the Universe is an essential part of the extensive program looking for new physics at the LHC. Given the unknown nature of DM, the interpretation of such searches should be made broad and inclusive. This report reviews the usage of simplified models in the interpretation of missing energy searches. We begin with a brief discussion of the utility and limitations of the effective field theory approach to this problem. The bulk of the report is then devoted to several different simplified models and their signatures, including s-channel and t-channel processes. A common feature of simplified models for DM is the presence of additional particles that mediate the interactions between the Standard Model and the particle that makes up DM. We consider these in detail and emphasize the importance of their inclusion as final states in any coherent interpretation. We also review some of the experimental progress in the field, new signatures, and other aspects of the searches themselves. We conclude with comments and recommendations regarding the use of simplified models in Run-II of the LHC.

  19. Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues

    International Nuclear Information System (INIS)

    Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Liang, Jimin; Zhang, Qian

    2015-01-01

    Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and diffusion equation (DE) in describing the light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and then the organs are divided into high scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues respectively, which are finally coupled using the established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE, and abandons their disadvantages, so that it can provide a perfect balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with both regular geometries and digital mouse model based simulations. Corresponding results reveal that a comparable accuracy and much less computation time are achieved compared with the SPN model as well as a much better accuracy compared with the DE one. (paper)

  20. Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.

    Science.gov (United States)

    Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin

    2015-08-21

    Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and diffusion equation (DE) in describing the light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and then the organs are divided into high scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues respectively, which are finally coupled using the established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE, and abandons their disadvantages, so that it can provide a perfect balance between accuracy and computation time. Using the finite element method, the HSDE is solved for light flux density map on body surface. The accuracy and efficiency of the HSDE are validated with both regular geometries and digital mouse model based simulations. Corresponding results reveal that a comparable accuracy and much less computation time are achieved compared with the SPN model as well as a much better accuracy compared with the DE one.

  1. [Simplified laparoscopic gastric bypass. Initial experience].

    Science.gov (United States)

    Hernández-Miguelena, Luis; Maldonado-Vázquez, Angélica; Cortes-Romano, Pablo; Ríos-Cruz, Daniel; Marín-Domínguez, Raúl; Castillo-González, Armando

    2014-01-01

    Obesity surgery includes various gastrointestinal procedures. Roux-en-Y gastric bypass is the prototype of mixed procedures and the most practiced worldwide. A similar and novel technique, called the "simplified bypass", has been adopted by Dr. Almino Cardoso Ramos and Dr. Manoel Galvao and has been accepted due to its greater ease and results very similar to those of the conventional technique. The aim of this study is to describe the results of the simplified gastric bypass for the treatment of morbid obesity in our institution. We performed a descriptive, retrospective study of all patients undergoing simplified gastric bypass from January 2008 to July 2012 in the obesity clinic of a private hospital in Mexico City. A total of 90 patients diagnosed with morbid obesity underwent simplified gastric bypass. Complications occurred in 10% of patients, the most frequent being bleeding and internal hernia. Mortality in the study period was 0%. The average weight loss at 12 months was 72.7%. Simplified gastric bypass surgery is safe, with good mid-term results and adequate weight loss in 71% of cases.

  2. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    Science.gov (United States)

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.

  3. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  4. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first-class data objects. Compared with imperative programs, functional programs can be parallelized better and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data structures and objects common in computer vision. In particular, we address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach greatly simplifies the implementation and integration of vision systems. Examples in C++ and SML are given.
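
    The "representing images as functions" idea can be illustrated in a functional style: an image is a function from coordinates to intensity, and transformations are pure higher-order functions composed without mutation. The Python sketch below is a conceptual illustration only (the paper's own examples are in C++ and SML).

      # Conceptual sketch of "images as functions": an image maps (x, y) to a value,
      # and transforms are pure higher-order functions (no in-place mutation).
      import math

      def disk(cx, cy, r):
          """An 'image': 1.0 inside a disk, 0.0 outside."""
          return lambda x, y: 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0

      def translate(img, dx, dy):
          return lambda x, y: img(x - dx, y - dy)

      def rotate(img, theta):
          c, s = math.cos(theta), math.sin(theta)
          return lambda x, y: img(c * x + s * y, -s * x + c * y)

      def blend(a, b, w=0.5):
          return lambda x, y: w * a(x, y) + (1 - w) * b(x, y)

      scene = blend(translate(disk(0, 0, 1), 2, 0), rotate(disk(3, 0, 1), math.pi / 2))
      # Sampling ("rasterizing") happens only at the end, when values are needed.
      print(scene(2.0, 0.0), scene(0.0, 3.0))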

  5. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.

  6. Simulation of the space debris environment in LEO using a simplified approach

    Science.gov (United States)

    Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico

    2017-01-01

    Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach. So in many different scenarios different objects are fragmented and contribute to a different version of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
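
    A drastically reduced version of the source-sink bookkeeping, one altitude shell with yearly launches as the source and natural decay plus post-mission disposal as sinks, integrated with the explicit Euler method, is sketched below; all rates are invented, and the collision/explosion source terms and the eccentricity/diameter bins of the actual model are omitted.

      # Toy source-sink model for one LEO altitude shell, integrated with the
      # explicit Euler method. All rates are invented; collision/explosion terms
      # and the eccentricity/diameter binning of the real model are omitted.
      launch_rate = 80.0     # intact objects added per year (assumed)
      decay_rate = 0.02      # fraction of objects decaying naturally per year
      pmd_success = 0.6      # fraction of new objects removed by post-mission disposal

      n = 3000.0             # initial object count in the shell (assumed)
      dt = 1.0               # years
      for year in range(200):
          dn_dt = launch_rate * (1.0 - pmd_success) - decay_rate * n
          n += dt * dn_dt    # explicit Euler step

      equilibrium = launch_rate * (1.0 - pmd_success) / decay_rate
      print(f"objects after 200 years: {n:.0f} (equilibrium ~ {equilibrium:.0f})")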

  7. Simplifying the circuit of Josephson parametric converters

    Science.gov (United States)

    Abdo, Baleegh; Brink, Markus; Chavez-Garcia, Jose; Keefe, George

    Josephson parametric converters (JPCs) are quantum-limited three-wave mixing devices that can play various important roles in quantum information processing in the microwave domain, including amplification of quantum signals, transduction of quantum information, remote entanglement of qubits, nonreciprocal amplification, and circulation of signals. However, the input-output and biasing circuit of a state-of-the-art JPC consists of bulky components, i.e. two commercial off-chip broadband 180-degree hybrids, four phase-matched short coax cables, and one superconducting magnetic coil. Such bulky hardware significantly hinders the integration of JPCs in scalable quantum computing architectures. In my talk, I will present ideas on how to simplify the JPC circuit and show preliminary experimental results

  8. Turn-based evolution in a simplified model of artistic creative process

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2015-01-01

    Evolutionary computation has often been presented as a possible model for creativity in computers. In this paper, evolution is discussed in the light of a theoretical model of the human artistic process, recently presented by the author. Artistic creativity is here modeled as an iterated turn-based process, alternating between a conceptual representation and a material representation of the work-to-be. Some crucial differences between human artistic creativity and natural evolution are observed and discussed, also in the light of other creative processes occurring in nature. As a tractable way to overcome these limitations, a new kind of evolutionary implementation of creativity is proposed, based on a simplified version of the previously presented model, and the results of initial experiments are presented and discussed. Evolutionary computation is proposed as a heuristic solution to the principal ...

  9. A Computer-Aided FPS-Oriented Approach for Construction Briefing

    Institute of Scientific and Technical Information of China (English)

    Xiaochun Luo; Qiping Shen

    2008-01-01

    Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful for product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic, efficient identification, clarification, and representation of client requirements in trial running. The limitations of the approach and future research work are also discussed at the end of the paper.

  10. Simplified tritium permeation model

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1993-01-01

    In this model I seek to provide a simplified approach to solving permeation problems addressed by TMAP4. I will assume that there are m one-dimensional segments with thickness L_i, i = 1, 2, ..., m, joined in series, with an implantation flux, J_i, implanting at the single depth, δ, in the first segment. From material properties and heat transfer considerations, I calculate temperatures at each face of each segment, and from those temperatures I find local diffusivities and solubilities. I assume recombination coefficients K_r1 and K_r2 are known at the upstream and downstream faces, respectively, but the model will generate Baskes recombination coefficient values on demand. Here I first develop the steady-state concentration equations and then show how trapping considerations can lead to good estimates of permeation transient times.
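
    A diffusion-limited steady-state estimate in this spirit can be written down directly: with fast recombination (near-zero concentration) at both outer faces, the implanted flux splits in inverse proportion to the diffusion resistances on either side of the implantation depth. Solubility partitioning at segment interfaces and trapping are ignored, and all values are illustrative rather than TMAP4 input.

      # Sketch: diffusion-limited steady-state permeation through segments in series
      # with implantation at depth delta in segment 1 and c ~ 0 at both outer faces.
      # Solubility partitioning and trapping are ignored; values are illustrative.
      L = [5e-4, 2e-3]            # segment thicknesses (m)
      D = [1e-9, 5e-11]           # diffusivities (m^2/s)
      delta = 1e-8                # implantation depth in segment 1 (m)
      J_in = 1e19                 # implantation flux (atoms m^-2 s^-1)

      R_up = delta / D[0]                                    # resistance back to front face
      R_down = (L[0] - delta) / D[0] + sum(l / d for l, d in zip(L[1:], D[1:]))
      J_perm = J_in * R_up / (R_up + R_down)                 # downstream permeation flux

      print(f"steady-state permeation flux ~ {J_perm:.2e} atoms m^-2 s^-1")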

  11. Introducing Computational Approaches in Intermediate Mechanics

    Science.gov (United States)

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)

  12. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach are highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.

  13. 3.6 simplified methods for design

    International Nuclear Information System (INIS)

    Nickell, R.E.; Yahr, G.T.

    1981-01-01

    Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed

  14. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2017-01-15

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  15. A Simplified Approach to Risk Assessment Based on System Dynamics: An Industrial Case Study.

    Science.gov (United States)

    Garbolino, Emmanuel; Chery, Jean-Pierre; Guarnieri, Franck

    2016-01-01

    Seveso plants are complex sociotechnical systems, which makes it appropriate to support any risk assessment with a model of the system. However, more often than not, this step is only partially addressed, simplified, or avoided in safety reports. At the same time, investigations have shown that the complexity of industrial systems is frequently a factor in accidents, due to interactions between their technical, human, and organizational dimensions. In order to handle both this complexity and changes in the system over time, this article proposes an original and simplified qualitative risk evaluation method based on the system dynamics theory developed by Forrester in the early 1960s. The methodology supports the development of a dynamic risk assessment framework dedicated to industrial activities. It consists of 10 complementary steps grouped into two main activities: system dynamics modeling of the sociotechnical system and risk analysis. This system dynamics risk analysis is applied to a case study of a chemical plant and provides a way to assess the technological and organizational components of safety. © 2016 Society for Risk Analysis.

  16. Artificial cell mimics as simplified models for the study of cell biology.

    Science.gov (United States)

    Salehi-Reyhani, Ali; Ces, Oscar; Elani, Yuval

    2017-07-01

    Living cells are hugely complex chemical systems composed of a milieu of distinct chemical species (including DNA, proteins, lipids, and metabolites) interconnected with one another through a vast web of interactions: this complexity renders the study of cell biology in a quantitative and systematic manner a difficult task. There has been an increasing drive towards the utilization of artificial cells as cell mimics to alleviate this, a development that has been aided by recent advances in artificial cell construction. Cell mimics are simplified cell-like structures, composed from the bottom-up with precisely defined and tunable compositions. They allow specific facets of cell biology to be studied in isolation, in a simplified environment where control of variables can be achieved without interference from a living and responsive cell. This mini-review outlines the core principles of this approach and surveys recent key investigations that use cell mimics to address a wide range of biological questions. It will also place the field in the context of emerging trends, discuss the associated limitations, and outline future directions of the field. Impact statement: Recent years have seen an increasing drive to construct cell mimics and use them as simplified experimental models to replicate and understand biological phenomena in a well-defined and controlled system. By summarizing the advances in this burgeoning field, and using case studies as a basis for discussion on the limitations and future directions of this approach, it is hoped that this mini-review will spur others in the experimental biology community to use artificial cells as simplified models with which to probe biological systems.

  17. Computer-oriented approach to fault-tree construction

    International Nuclear Information System (INIS)

    Salem, S.L.; Apostolakis, G.E.; Okrent, D.

    1976-11-01

    A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the Computer Automated Tree (CAT) program, to several systems. A means of representing component behavior by decision tables is presented. The method developed allows the modeling of components with various combinations of electrical, fluid and mechanical inputs and outputs. Each component can have multiple internal failure mechanisms which combine with the states of the inputs to produce the appropriate output states. The generality of this approach allows not only the modeling of hardware, but human actions and interactions as well. A procedure for constructing and editing fault trees, either manually or by computer, is described. The techniques employed result in a complete fault tree, in standard form, suitable for analysis by current computer codes. Methods of describing the system, defining boundary conditions and specifying complex TOP events are developed in order to set up the initial configuration for which the fault tree is to be constructed. The approach used allows rapid modifications of the decision tables and systems to facilitate the analysis and comparison of various refinements and changes in the system configuration and component modeling

  18. Ball Bearing Stiffnesses- A New Approach Offering Analytical Expressions

    Science.gov (United States)

    Guay, Pascal; Frikha, Ahmed

    2015-09-01

    Space mechanisms use preloaded ball bearings in order to withstand the severe vibrations during launch. The launch strength requires the calculation of the bearing stiffness, but this calculation is complex. Nowadays, there is no analytical expression that gives the stiffness of a bearing. Stiffness is computed using an iterative algorithm such as Newton-Raphson to solve the nonlinear system of equations. This paper aims at offering a simplified analytical approach, based on the assumption that the contact angle is constant. This approach gives analytical formulas for the stiffness of preloaded ball bearings.

  19. Development of a simplified fuel-cladding gap conductance model for nuclear feedback calculation in 16x16 FA

    International Nuclear Information System (INIS)

    Yoo, Jong Sung; Park, Chan Oh; Park, Yong Soo

    1995-01-01

    The accurate determination of the fuel-cladding gap conductance as a function of rod burnup and power level may be a key to the design and safety analysis of a reactor. The incorporation of a sophisticated gap conductance model into a nuclear design code for computing the thermal-hydraulic feedback effect has not been implemented, mainly because of the computational inefficiency caused by the complicated behavior of the gap conductance. To avoid a time-consuming iteration scheme, the current design model for the gap conductance is simplified. The simplified model considers only the heat conduction contribution to the gap conductance. The simplification is made possible by directly considering the gap conductivity, which depends on the composition of the constituent gases in the gap and on the fuel-cladding gap size, obtained from computer simulation of representative power histories. The simplified gap conductance model is applied to various fuel power histories and the predicted gap conductances are found to agree well with the results of the design model.
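
    The conduction-only simplification described above can be illustrated by a crude sketch of the form h_gap ≈ k_gas/d_gap; the mixture rule, temperature dependence and correlations of the actual design model are not reproduced here, and all values are order-of-magnitude placeholders.

      # Illustrative conduction-only gap conductance, in the spirit of the
      # simplification described above (not the design model's correlations).
      def gas_mixture_conductivity(k_components, mole_fractions):
          # crude mole-fraction weighting; fuel-performance codes use more
          # elaborate gas-mixture rules
          return sum(k * x for k, x in zip(k_components, mole_fractions))

      def gap_conductance(k_gas, gap_width):
          """Conduction-only gap conductance [W m^-2 K^-1]."""
          return k_gas / gap_width

      # Hypothetical helium/fission-gas fill (values are only order-of-magnitude)
      k_mix = gas_mixture_conductivity([0.25, 0.006], [0.9, 0.1])   # W/m/K
      print(gap_conductance(k_mix, 30e-6))                          # ~30 micron gap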

  20. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches and to utilize them in real-world clinical applications. The book is divided into four parts, Part-I: Clinical Applications of Medical Imaging, Part-II: Classification and clustering, Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies and Part-IV: Bio-inspiring based Computer Aided diagnosis techniques.

  1. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when we write it all out in one place. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive an analytical formulation with no human intervention. Particularly, it is to be noted that, as compared to previous results, the elements are extremely simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally most efficient. Due to such advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
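
    The paper's symbolic, recursive derivation is not reproduced here; the sketch below only shows the standard numerical construction of the geometric Jacobian for a serial arm with revolute joints, column i being [z_i × (p_e − p_i); z_i], as a point of comparison. The arm geometry in the example is hypothetical.

      # Standard numerical geometric Jacobian for revolute joints (illustrative only).
      import numpy as np

      def jacobian(joint_axes, joint_origins, end_effector):
          """joint_axes: unit z_i vectors (world frame); joint_origins: joint
          positions p_i; end_effector: position p_e. Returns the 6 x n Jacobian."""
          cols = []
          for z, p in zip(joint_axes, joint_origins):
              z = np.asarray(z, float)
              lin = np.cross(z, np.asarray(end_effector, float) - np.asarray(p, float))
              cols.append(np.hstack((lin, z)))
          return np.column_stack(cols)

      # Hypothetical planar 2-link arm, both joint axes along world z
      J = jacobian([[0, 0, 1], [0, 0, 1]],
                   [[0, 0, 0], [1, 0, 0]],
                   [2, 0, 0])
      print(J)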

  2. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  3. Analysis of heat balance on innovative-simplified nuclear power plant using multi-stage steam injectors

    International Nuclear Information System (INIS)

    Goto, Shoji; Ohmori, Shuichi; Mori, Michitsugu

    2006-01-01

    The total space and weight of the feedwater heaters in a nuclear power plant (NPP) can be reduced by replacing low-pressure feedwater heaters with high-efficiency steam injectors (SIs). The SI works as a direct heat exchanger between feedwater from the condensers and steam extracted from the turbines. It can attain pressures higher than the supplied steam pressure. The maintenance cost is lower than that of the current feedwater heater because of its simplified system without movable parts. In this paper, we explain the experimentally observed mechanisms of the SI and the computational fluid dynamics (CFD) analysis. We then describe mainly the analysis of the heat balance and plant efficiency of the innovative-simplified NPP, which adapts the high-efficiency SI to a boiling water reactor (BWR). The plant efficiencies of this innovative-simplified BWR with SI are compared with those of a 1,100 MWe-class BWR. The SI model is adopted in the heat balance simulator as a simplified model. The results show that the plant efficiencies of the innovative-simplified BWR with SI are almost equal to those of the original BWR. They show that the plant efficiency would be slightly higher if the low-pressure steam, which is extracted from the low-pressure turbine, is used, because the first stage of the SI uses very low pressure. (author)

  4. Towards the next generation of simplified Dark Matter models

    Science.gov (United States)

    Albert, Andreas; Bauer, Martin; Brooke, Jim; Buchmueller, Oliver; Cerdeño, David G.; Citron, Matthew; Davies, Gavin; de Cosa, Annapaola; De Roeck, Albert; De Simone, Andrea; Du Pree, Tristan; Flaecher, Henning; Fairbairn, Malcolm; Ellis, John; Grohsjean, Alexander; Hahn, Kristian; Haisch, Ulrich; Harris, Philip C.; Khoze, Valentin V.; Landsberg, Greg; McCabe, Christopher; Penning, Bjoern; Sanz, Veronica; Schwanenberger, Christian; Scott, Pat; Wardle, Nicholas

    2017-06-01

    This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. It is not intended as a comprehensive review of the discussed subjects, but instead summarises ideas and concepts arising from a brainstorming workshop that can be useful when defining the next generation of simplified DM models (SDMM). In this spirit, based on two concrete examples, we show how existing SDMM can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. In the next part of the White Paper we outline other interesting options for SDMM that could be studied in more detail in the future. Finally, we review important aspects of supersymmetric models for DM and use them to propose how to develop more complete SDMMs. This White Paper is a summary of the brainstorming meeting "Next generation of simplified Dark Matter models" that took place at Imperial College, London on May 6, 2016, and corresponding follow-up studies on selected subjects.

  5. The time-dependent simplified P2 equations: Asymptotic analyses and numerical experiments

    International Nuclear Information System (INIS)

    Shin, U.; Miller, W.F. Jr.

    1998-01-01

    Using an asymptotic expansion, the authors found that the modified time-dependent simplified P2 (SP2) equations are robust, high-order, asymptotic approximations to the time-dependent transport equation in a physical regime in which the conventional time-dependent diffusion equation is the leading-order approximation. Using diffusion limit analysis, they also asymptotically compared three competitive time-dependent equations (the telegrapher's equation, the time-dependent SP2 equations, and the time-dependent simplified even-parity equation). As a result, they found that the time-dependent SP2 equations contain higher-order asymptotic approximations to the time-dependent transport equation than the other competitive equations. The numerical results confirm that, in the vast majority of cases, the time-dependent SP2 solutions are significantly more accurate than the time-dependent diffusion and telegrapher's solutions. They have also shown that the time-dependent SP2 equations have excellent characteristics such as rotational invariance (which means no ray effect), good diffusion limit behavior, guaranteed positivity in diffusive regimes, and significant accuracy, even in deep-penetration problems. Through computer-running-time tests, they have shown that the time-dependent SP2 equations can be solved with significantly less computational effort than the conventionally used time-dependent SN equations (for N > 2) and almost as fast as the time-dependent diffusion equation. From all these results, they conclude that the time-dependent SP2 equations should be considered an important competitor for an improved approximate transport equation solver. Such computationally efficient time-dependent transport models are important for problems requiring enhanced computational efficiency, such as neutronics/fluid-dynamics coupled problems that arise in the analyses of hypothetical nuclear reactor accidents.

  6. Commercializing the next generation: the AP600 advanced simplified nuclear power plant

    International Nuclear Information System (INIS)

    Bruschi, H.J.

    1994-01-01

    Today, government and industry are working together on advanced nuclear power plant designs that take advantage of valuable lessons learned from the experience to date and promise to reconcile the demands of economic expansion with the laws of environmental protection. In the U.S., the Department of Energy (DOE) and the Electric Power Research Institute (EPRI) initiated a design certification program in 1989 to develop and commercialize advanced light water reactors (ALWRs) for the next round of power plant construction. Advanced, simplified technology is one approach under development to end the industry's search for a simpler, more forgiving, and less costly reactor. As part of this program, Westinghouse is developing the AP600, a new standard 600 MWe advanced, simplified plant. The design strikes a balance between the use of proven technology and new approaches. The result is a greatly streamlined plant that can meet safety regulations and reliability requirements, be economically competitive, and promote broader public confidence in nuclear energy. 1 fig

  7. Washington's marine oil spill compensation schedule - simplified resource damage assessment

    International Nuclear Information System (INIS)

    Geselbracht, L.; Logan, R.

    1993-01-01

    The Washington State Preassessment Screening and Oil Spill Compensation Schedule Rule (Chapter 173-183 Washington Administrative Code), which simplifies natural resource damage assessment for many oil spill cases, became effective in May 1992. The approach described in the rule incorporates a number of preconstructed rankings that rate environmental sensitivity and the propensity of spilled oil to cause environmental harm. The rule also provides guidance regarding how damages calculated under the schedule should be reduced to take into account actions taken by the responsible party that reduce environmental injury. To apply the compensation schedule to marine estuarine spills, the resource trustees need only collect a limited amount of information such as type of product spilled, number of gallons spilled, compensation schedule subregions the spill entered, season of greatest spill impact, percent coverage of habitats affected by the spill, and actions taken by the responsible party. The result of adding a simplified tool to the existing assortment of damage assessment approaches is that resource trustees will now be able to assess damages for most oil spill cases and shift more effort than was possible in the past to resource restoration

  8. Simplified hearing protector ratings—an international comparison

    Science.gov (United States)

    Waugh, R.

    1984-03-01

    A computer was programmed to model the distributions of dB(A) levels reaching the ears of an imaginary workforce wearing hearing protectors selected on the basis of either octave band attenuation values or various simplified ratings in use in Australia, Germany, Poland, Spain or the U.S.A. Both multi-valued and single-valued versions of dB(A) reduction and sound level conversion ratings were considered. Ratings were compared in terms of precision and protection rate and the comparisons were replicated for different samples of noise spectra (N = 400) and hearing protectors (N = 70) to establish the generality of the conclusions. Different countries adopt different approaches to the measurement of octave band attenuation values and the consequences of these differences were investigated. All rating systems have built-in correction factors to account for hearing protector performance variability and the merits of these were determined in the light of their ultimate effects on the distribution of dB(A) levels reaching wearers' ears. It was concluded that the optimum rating is one that enables the dB(A) level reaching wearers to be estimated by subtracting a single rating value from the dB(C) level of the noise environment, the rating value to be determined for a pink noise spectrum from mean minus one standard deviation octave band attenuation values with further protection rate adjustments being achieved by the use of a constant correction factor.
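
    A generic sketch of the kind of "subtract-from-dB(C)" rating recommended above is given below: a pink reference spectrum, mean-minus-one-standard-deviation attenuation per octave band, and a rating equal to the reference dB(C) level minus the protected dB(A) level. The weighting constants are standard values; the specific national procedures compared in the paper are not reproduced, and the protector data are hypothetical.

      # Sketch of a single-number "subtract from dB(C)" hearing protector rating.
      import math

      BANDS = [63, 125, 250, 500, 1000, 2000, 4000, 8000]          # Hz
      A_WT  = [-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]      # A-weighting, dB
      C_WT  = [-0.8, -0.2, 0.0, 0.0, 0.0, -0.2, -0.8, -3.0]        # C-weighting, dB

      def sum_db(levels):
          return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels))

      def single_number_rating(mean_att, sd_att, band_level=80.0):
          assumed = [m - s for m, s in zip(mean_att, sd_att)]       # mean minus one SD
          c_total = sum_db([band_level + c for c in C_WT])          # dB(C) of pink noise
          a_protected = sum_db([band_level + a - att
                                for a, att in zip(A_WT, assumed)])  # protected dB(A)
          return c_total - a_protected

      # Hypothetical protector attenuation data (dB)
      mean_att = [12, 14, 18, 24, 30, 34, 38, 36]
      sd_att   = [4, 4, 4, 4, 4, 4, 4, 4]
      print(round(single_number_rating(mean_att, sd_att), 1))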

  9. Using the gauge condition to simplify the elastodynamic analysis of guided wave propagation

    Directory of Open Access Journals (Sweden)

    Md Yeasin BHUIYAN

    2016-09-01

    In this article, the gauge condition in elastodynamics is explored further to revive its potential for simplifying wave propagation problems in elastic media. The gauge condition in elastodynamics arises from the Navier-Lamé equations upon application of the Helmholtz theorem. In order to solve elastic wave problems by the potential function approach, the gauge condition provides the necessary conditions for the potential functions. The gauge condition may be considered as the superposition of the separate gauge conditions of Lamb waves and shear horizontal (SH) guided waves, respectively, and thus it may be resolved into the corresponding gauges of Lamb waves and SH waves. The manipulation and proper choice of the gauge condition do not violate the classical solutions of elastic waves in plates; rather, they simplify the problems. The gauge condition allows one to obtain analytical solutions of complicated problems in a simplified manner.
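
    For reference, the textbook Helmholtz decomposition underlying this approach, with the usual gauge choice, can be written as follows (a generic statement only; the article's specific resolution into Lamb-wave and SH-wave gauges is not reproduced here):

      \[
      \mathbf{u} = \nabla\phi + \nabla\times\mathbf{H}, \qquad
      \nabla\cdot\mathbf{H} = 0 \quad \text{(gauge condition)},
      \]
      \[
      \nabla^{2}\phi = \frac{1}{c_{p}^{2}}\,\frac{\partial^{2}\phi}{\partial t^{2}}, \qquad
      \nabla^{2}\mathbf{H} = \frac{1}{c_{s}^{2}}\,\frac{\partial^{2}\mathbf{H}}{\partial t^{2}},
      \]

    where the gauge condition removes the otherwise arbitrary part of H, and substitution of the decomposition into the Navier-Lamé equations (with body forces neglected) is satisfied when φ and H obey these uncoupled wave equations, with c_p and c_s the pressure- and shear-wave speeds.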

  10. International piping benchmarks: Use of simplified code PACE 2

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, J; Spence, J [University of Strathclyde (United Kingdom); Blundell, C [Risley Nuclear Power Development Establishment, Central Technical Services, Risley, Warrington (United Kingdom)

    1979-06-01

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost. (author)

  11. International piping benchmarks: Use of simplified code PACE 2

    International Nuclear Information System (INIS)

    Boyle, J.; Spence, J.; Blundell, C.

    1979-01-01

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost. (author)

  12. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  13. Simplified method for measuring the response time of scram release electromagnet in a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Patri, Sudheer, E-mail: patri@igcar.gov.in; Mohana, M.; Kameswari, K.; Kumar, S. Suresh; Narmadha, S.; Vijayshree, R.; Meikandamurthy, C.; Venkatesan, A.; Palanisami, K.; Murthy, D. Thirugnana; Babu, B.; Prakash, V.; Rajan, K.K.

    2015-04-15

    Highlights: • An alternative method for estimating the electromagnet clutch release time. • A systematic approach to develop a computer based measuring system. • Prototype tests on the measurement system. • Accuracy of the method is ±6% and repeatability error is within 2%. - Abstract: The delay time in electromagnet clutch release during a reactor trip (scram action) is an important safety parameter, having a bearing on plant safety during various design basis events. Generally, it is measured using the current decay characteristics of the electromagnet coil and its energising circuit. A simplified method of measuring the same in sodium-cooled fast reactors (SFRs) is proposed in this paper. The method utilises the position data of the control rod to estimate the delay time in electromagnet clutch release. A computer based real time measurement system for measuring the electromagnet clutch delay time is developed and qualified for retrofitting in the prototype fast breeder reactor. Various stages involved in the development of the system are principle demonstration, experimental verification of hardware capabilities and prototype system testing. Tests on the prototype system have demonstrated the satisfactory performance of the system with the intended accuracy and repeatability.

  14. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  15. A dynamical-systems approach for computing ice-affected streamflow

    Science.gov (United States)

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  16. GeoDataspaces: Simplifying Data Management Tasks with Globus

    Science.gov (United States)

    Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.

    2014-12-01

    Data and its management are central to modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source are, however, lacking. Consequently, geoscientists are left to conduct low-level and time-consuming data management tasks, individually, and repeatedly for discovering each data source, often resulting in errors in handling. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace, bootstrapped with "geounits", which are self-contained metadata packages that provide complete description of all data elements associated with a model run, including input/output and parameter files, model executable and any associated libraries. Geounits link raw and derived data as well as associating provenance information describing how data was derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting and associating ad hoc and unstructured scientific metadata hidden in binary formats with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts related towards creating a scalable metadata catalog that helps to maintain, search and discover geounits within the Globus network of accessible endpoints. This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism, which can be used to

  17. Use of simplified methods for predicting natural resource damages

    International Nuclear Information System (INIS)

    Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.

    1995-01-01

    To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME) for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed

  18. Application of a simplified definition of diastolic function in severe sepsis and septic shock.

    Science.gov (United States)

    Lanspa, Michael J; Gutsche, Andrea R; Wilson, Emily L; Olsen, Troy D; Hirshberg, Eliotte L; Knox, Daniel B; Brown, Samuel M; Grissom, Colin K

    2016-08-04

    Left ventricular diastolic dysfunction is common in patients with severe sepsis or septic shock, but the best approach to categorization is unknown. We assessed the association of common measures of diastolic function with clinical outcomes and tested the utility of a simplified definition of diastolic dysfunction against the American Society of Echocardiography (ASE) 2009 definition. In this prospective observational study, patients with severe sepsis or septic shock underwent transthoracic echocardiography within 24 h of onset of sepsis (median 4.3 h). We measured echocardiographic parameters of diastolic function and used random forest analysis to assess their association with clinical outcomes (28-day mortality and ICU-free days to day 28) and thereby suggest a simplified definition. We then compared patients categorized by the ASE 2009 definition and our simplified definition. We studied 167 patients. The ASE 2009 definition categorized only 35 % of patients. Random forest analysis demonstrated that the left atrial volume index and deceleration time, central to the ASE 2009 definition, were not associated with clinical outcomes. Our simplified definition used only e' and E/e', omitting the other measurements. The simplified definition categorized 87 % of patients. Patients categorized by either ASE 2009 or our novel definition had similar clinical outcomes. In both definitions, worsened diastolic function was associated with increased prevalence of ischemic heart disease, diabetes, and hypertension. A novel, simplified definition of diastolic dysfunction categorized more patients with sepsis than ASE 2009 definition. Patients categorized according to the simplified definition did not differ from patients categorized according to the ASE 2009 definition in respect to clinical outcome or comorbidities.

  19. Simplified quantification of nicotinic receptors with 2[18F]F-A-85380 PET

    International Nuclear Information System (INIS)

    Mitkovski, Sascha; Villemagne, Victor L.; Novakovic, Kathy E.; O'Keefe, Graeme; Tochon-Danguy, Henri; Mulligan, Rachel S.; Dickinson, Kerryn L.; Saunder, Tim; Gregoire, Marie-Claude; Bottlaender, Michel; Dolle, Frederic; Rowe, Christopher C.

    2005-01-01

    Introduction: Neuronal nicotinic acetylcholine receptors (nAChRs), widely distributed in the human brain, are implicated in various neurophysiological processes as well as being particularly affected in neurodegenerative conditions such as Alzheimer's disease. We sought to evaluate a minimally invasive method for quantification of nAChR distribution in the normal human brain, suitable for routine clinical application, using 2[18F]F-A-85380 and positron emission tomography (PET). Methods: Ten normal volunteers (four females and six males, aged 63.40±9.22 years) underwent a dynamic 120-min PET scan after injection of 226 MBq 2[18F]F-A-85380 along with arterial blood sampling. Regional binding was assessed through standardized uptake value (SUV) and distribution volumes (DV) obtained using both compartmental (DV_2CM) and graphical analysis (DV_Logan). A simplified approach to the estimation of DV (DV_simplified), defined as the region-to-plasma ratio at apparent steady state (90-120 min post injection), was compared with the other quantification approaches. Results: DV_Logan values were higher than DV_2CM. A strong correlation was observed between DV_simplified, DV_Logan (r=.94) and DV_2CM (r=.90) in cortical regions, with lower correlations in thalamus (r=.71 and .82, respectively). Standardized uptake value showed low correlation against DV_Logan and DV_2CM. Conclusion: DV_simplified determined by the ratio of tissue to metabolite-corrected plasma using a single 90- to 120-min PET acquisition appears acceptable for quantification of cortical nAChR binding with 2[18F]F-A-85380 and suitable for clinical application.
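
    The two quantification routes compared above can be sketched as follows: DV_simplified as the mean tissue-to-plasma ratio over 90-120 min, and DV_Logan as the slope of the Logan plot over its linear portion. This is an illustrative outline only; the framing times, metabolite correction and compartmental analysis of the study are not reproduced, and the input arrays are assumed time-activity curves.

      # Illustrative DV estimates from assumed time-activity curves (not the study code).
      import numpy as np

      def dv_simplified(t, c_tissue, c_plasma, t_start=90.0, t_end=120.0):
          mask = (t >= t_start) & (t <= t_end)
          return np.mean(c_tissue[mask] / c_plasma[mask])

      def dv_logan(t, c_tissue, c_plasma, t_star=40.0):
          # Logan plot: int(C_t)dt / C_t  versus  int(C_p)dt / C_t ; slope = DV
          int_t = np.array([np.trapz(c_tissue[: i + 1], t[: i + 1]) for i in range(len(t))])
          int_p = np.array([np.trapz(c_plasma[: i + 1], t[: i + 1]) for i in range(len(t))])
          x, y = int_p / c_tissue, int_t / c_tissue
          mask = t >= t_star                      # linear portion only
          slope, _ = np.polyfit(x[mask], y[mask], 1)
          return slope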

  20. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.

  1. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET), and compares those to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  2. Development of simplified 1D and 2D models for studying a PWR lower head failure under severe accident conditions

    International Nuclear Information System (INIS)

    Koundy, V.; Dupas, J.; Bonneville, H.; Cormeau, I.

    2005-01-01

    In the study of severe accidents of nuclear pressurized water reactors, the scenarios that describe the relocation of significant quantities of liquid corium at the bottom of the lower head are investigated from the mechanical point of view. In these scenarios, the risk of a breach and the possibility of a large quantity of corium being released from the lower head exist. This may lead to direct heating of the containment or outer vessel steam explosion. These issues are important due to their early containment failure potential. Since the TMI-2 accident, many theoretical and experimental investigations, relating to lower head mechanical behaviour under severe thermo-mechanical loading in the event of a core meltdown accident have been performed. IRSN participated actively in the one-fifth scale USNRC/SNL LHF and OECD LHF (OLHF) programs. Within the framework of these programs, two simplified models were developed by IRSN: the first is a simplified 1D approach based on the theory of pressurized spherical shells and the second is a simplified 2D model based on the theory of shells of revolution under symmetric loading. The mathematical formulation of both models and the creep constitutive equations used are presented in detail in this paper. The corresponding models were used to interpret some of the OLHF program experiments and the calculation results were quite consistent with the experimental data. The two simplified models have been used to simulate the thermo-mechanical behaviour of a 900 MWe pressurized water reactor lower head under severe accident conditions leading to failure. The average transient heat flux produced by the corium relocated at the bottom of the lower head has been determined using the IRSN HARAR code. Two different methods, both taking into account the ablation of the internal surface, are used to determine the temperature profiles across the lower head wall and their effect on the time to failure is discussed. Using these simplified models
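
    The 1D spherical-shell idealization mentioned above can be illustrated, very roughly, by the membrane stress of a thin pressurized sphere fed into a Norton-type creep law. The sketch below is only an illustration of that idea; the parameter values are hypothetical and the paper's creep constitutive equations, thermal gradients and ablation treatment are not reproduced.

      # Rough illustration: membrane stress sigma = p*R/(2*t) of a thin pressurized
      # spherical shell, with a Norton-type creep rate eps_dot = A*sigma**n*exp(-Q/(R*T)).
      # All parameter values are hypothetical placeholders.
      import math

      def membrane_stress(pressure, radius, thickness):
          return pressure * radius / (2.0 * thickness)        # Pa

      def norton_creep_rate(sigma, T, A=1.0e-30, n=5.0, Q=3.0e5, R_gas=8.314):
          return A * sigma ** n * math.exp(-Q / (R_gas * T))  # 1/s

      sigma = membrane_stress(2.0e6, 2.0, 0.1)                # 2 MPa, R = 2 m, t = 0.1 m
      print(sigma, norton_creep_rate(sigma, 1200.0))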

  3. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  4. Simplified approaches for the numerical simulation of welding processes with filler material

    Energy Technology Data Exchange (ETDEWEB)

    Carmignani, B.; Toselli, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy)

    2001-07-01

    Due to the very high computation times required by the methodologies developed during the studies carried out at ENEA-Bologna concerning the numerical simulation of welds with filler material in steel pieces of high thickness (studies also presented at the 12th and 13th International ABAQUS Users' Conferences), new simplified methodologies have been proposed and applied to an experimental model of significant dimensions. (These studies are of interest in the nuclear field for the construction of the toroidal field coil case, TFCC, for the international thermonuclear experimental reactor, the ITER machine.) In this paper these new methodologies are presented together with the results obtained, which have been compared, successfully, with those obtained using the previous numerical methodologies and also with the corresponding experimental measurements. These new calculation techniques are currently being applied to the simulation of welds in pieces constituting a real component of the ITER TF coil case. [Italian original] Owing to the very high computation times required by the methodologies identified and developed during the studies performed at ENEA-Bologna on the numerical simulation of welds, with filler material, of thick steel pieces (studies also presented at the previous ABAQUS Users' Conferences, the 12th and 13th), new simplified methodologies have been sought and proposed, and then applied to an experimental model of significant dimensions. (These studies are of interest in the nuclear field for the construction of the cases containing the coils that will generate the magnetic field of the ITER machine, the international experimental thermonuclear reactor.) The work presented here describes these new methodologies and reports the results obtained from their application, together with the (fairly satisfactory) comparisons with the results

  5. Computational biomechanics for medicine new approaches and new applications

    CERN Document Server

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  6. Simplifying modeling of nanoparticle aggregation-sedimentation behavior in environmental systems: A theoretical analysis

    NARCIS (Netherlands)

    Quik, J.T.K.; Meent, van de D.; Koelmans, A.A.

    2014-01-01

    Parameters and simplified model approaches for describing the fate of engineered nanoparticles (ENPs) are crucial to advance the risk assessment of these materials. Sedimentation behavior of ENPs in natural waters has been shown to follow apparent first order behavior, a ‘black box’ phenomenon that

  7. The Harmonic Oscillator–A Simplified Approach

    Directory of Open Access Journals (Sweden)

    L. R. Ganesan

    2008-01-01

    Among the early problems in quantum chemistry, the one-dimensional harmonic oscillator problem is an important one, providing a valuable exercise in the study of quantum mechanical methods. There are several approaches to this problem: the time-honoured infinite series method, the ladder operator method, etc. A method which is much shorter and mathematically simpler is presented here.
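
    For context, the standard ladder-operator treatment referred to above can be summarized as follows (the textbook result only; the article's shortened derivation is not reproduced here):

      \[
      \hat{H} = \hbar\omega\left(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}\right), \qquad
      \hat{a} = \sqrt{\frac{m\omega}{2\hbar}}\left(\hat{x} + \frac{i\hat{p}}{m\omega}\right),
      \]
      \[
      \hat{a}\,|n\rangle = \sqrt{n}\,|n-1\rangle, \qquad
      \hat{a}^{\dagger}|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad
      E_{n} = \left(n + \tfrac{1}{2}\right)\hbar\omega .
      \]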

  8. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.

    Science.gov (United States)

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-06-21

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between the two calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.

  9. Computational split-field finite-difference time-domain evaluation of simplified tilt-angle models for parallel-aligned liquid-crystal devices

    Science.gov (United States)

    Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Álvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2018-03-01

    Simplified analytical models with predictive capability enable simpler and faster optimization of the performance in applications of complex photonic devices. We recently demonstrated the most simplified analytical model still showing predictive capability for parallel-aligned liquid crystal on silicon (PA-LCoS) devices, which provides the voltage-dependent retardance for a very wide range of incidence angles and any wavelength in the visible. We further show that the proposed model is not only phenomenological but also physically meaningful, since two of its parameters provide the correct values for important internal properties of these devices related to the birefringence, cell gap, and director profile. Therefore, the proposed model can be used as a means to inspect internal physical properties of the cell. As an innovation, we also show the applicability of the split-field finite-difference time-domain (SF-FDTD) technique for phase-shift and retardance evaluation of PA-LCoS devices under oblique incidence. As a simplified model for PA-LCoS devices, we also consider the exact description of homogeneous birefringent slabs. However, we show that, despite its higher degree of simplification, the proposed model is more robust, providing unambiguous and physically meaningful solutions when fitting its parameters.

  10. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    International Nuclear Information System (INIS)

    Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.

    2015-01-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the computer code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  11. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    Energy Technology Data Exchange (ETDEWEB)

    Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F. [Electrical and Information Engineering Department (DEI), Polytechnic Institute of Bari, 4 Orabona Street, CAP 70125, Bari, (Italy); Dimiccoli, V.; Losito, O.; Prisco, R. [ITEL Telecomunicazioni, 39 Labriola Street, CAP 70037, Ruvo di Puglia, Bari, (Italy)

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the computer code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
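
    The generic PSO algorithm that such a code couples to the simplified tank model is sketched below; the objective used here is a stand-in test function, not the SCL cavity model, and all parameters are illustrative.

      # Generic particle swarm optimizer (illustrative; not the homemade SCL code).
      import numpy as np

      def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, float).T
          dim = len(lo)
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
          g = pbest[np.argmin(pbest_val)]
          for _ in range(n_iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(objective, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              g = pbest[np.argmin(pbest_val)]
          return g, pbest_val.min()

      # Stand-in objective: minimize distance to an arbitrary target point
      best_x, best_f = pso(lambda p: np.sum((p - 0.3) ** 2), bounds=[(-1, 1), (-1, 1)])
      print(best_x, best_f)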

  12. Simplified Multi-Stage and Per Capita Convergence: an analysis of two climate regimes for differentiation of commitments

    NARCIS (Netherlands)

    Elzen MGJ den; Berk MM; Lucas P; KMD

    2004-01-01

    This report describes and analyses in detail two climate regimes for differentiating commitments: the simplified Multi-Stage and Per Capita Convergence approaches. The Multi-Stage approach consists of a system to divide countries into groups with different types of commitments (stages). The Per

  13. Investigation of the potential of fuzzy sets and related approaches for treating uncertainties in radionuclide transfer predictions

    International Nuclear Information System (INIS)

    Shaw, W.; Grindrod, P.

    1989-01-01

    This document encompasses two main items. The first consists of a review of four aspects of fuzzy sets, namely, the general framework, the role of expert judgment, mathematical and computational aspects, and present applications. The second consists of the application of fuzzy-set theory to simplified problems in radionuclide migration, with comparisons between fuzzy and probabilistic approaches, treated both analytically and computationally. A new approach to fuzzy differential equations is presented, and applied to simple ordinary and partial differential equations. It is argued that such fuzzy techniques represent a viable alternative to probabilistic risk assessment, for handling systems subject to uncertainties
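
    A minimal illustration of the fuzzy-set alternative discussed above is to propagate a triangular fuzzy parameter through a toy transfer expression by alpha-cut interval arithmetic; the model and numbers below are hypothetical and do not reproduce the report's formulation.

      # Alpha-cut propagation of a triangular fuzzy rate through a toy transfer model.
      import numpy as np

      def alpha_cut(tri, alpha):
          """Interval [lo, hi] of a triangular fuzzy number tri = (a, b, c) at level alpha."""
          a, b, c = tri
          return a + alpha * (b - a), c - alpha * (c - b)

      def fuzzy_concentration(source, k_tri, t, alphas=np.linspace(0.0, 1.0, 5)):
          """Toy first-order attenuation C = source * exp(-k t) with fuzzy rate k."""
          cuts = []
          for alpha in alphas:
              k_lo, k_hi = alpha_cut(k_tri, alpha)
              # exp(-k t) is decreasing in k, so the interval endpoints swap
              cuts.append((alpha, source * np.exp(-k_hi * t), source * np.exp(-k_lo * t)))
          return cuts

      for alpha, lo, hi in fuzzy_concentration(1.0, (0.01, 0.02, 0.05), t=100.0):
          print(f"alpha={alpha:.2f}: [{lo:.3f}, {hi:.3f}]")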

  14. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    Science.gov (United States)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but also because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserving computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future-forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  15. Two-stage simplified swarm optimization for the redundancy allocation problem in a multi-state bridge system

    International Nuclear Information System (INIS)

    Lai, Chyh-Ming; Yeh, Wei-Chang

    2016-01-01

    The redundancy allocation problem involves configuring an optimal system structure with high reliability and low cost, either by replacing the elements with more reliable elements and/or by forming them redundantly. The multi-state bridge system is a special redundancy allocation problem and is commonly used in various engineering systems for load balancing and control. Traditional methods for the redundancy allocation problem cannot solve multi-state bridge systems efficiently because it is impossible to transform and reduce a multi-state bridge system into series and parallel combinations. Hence, a swarm-based approach called two-stage simplified swarm optimization is proposed in this work to effectively and efficiently solve the redundancy allocation problem in a multi-state bridge system. For validating the proposed method, two experiments are implemented. The computational results indicate the advantages of the proposed method in terms of solution quality and computational efficiency. - Highlights: • Propose two-stage SSO (SSO_T_S) to deal with RAP in multi-state bridge system. • Dynamic upper bound enhances the efficiency of searching near-optimal solution. • Vector-update stages reduce the problem dimensions. • Statistical results indicate SSO_T_S is robust both in solution quality and runtime.
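
    For readers unfamiliar with simplified swarm optimization, the sketch below shows the variable-wise update rule that distinguishes SSO from classical PSO: each variable is copied from the global best, the personal best, the current position, or redrawn at random according to fixed probability thresholds. The thresholds and the toy objective are illustrative only and do not reproduce the two-stage SSO_T_S of the record.

```python
import numpy as np

def sso_step(x, pbest, gbest, lo, hi, cg=0.4, cp=0.7, cw=0.9, rng=None):
    """One simplified-swarm-optimization update: each variable is taken from
    gbest, pbest, the current position, or a uniform redraw, depending on a
    random draw against the thresholds cg < cp < cw (assumed values)."""
    rng = rng if rng is not None else np.random.default_rng()
    new = x.copy()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            r = rng.random()
            if r < cg:
                new[i, j] = gbest[j]
            elif r < cp:
                new[i, j] = pbest[i, j]
            elif r < cw:
                new[i, j] = x[i, j]
            else:
                new[i, j] = rng.uniform(lo[j], hi[j])
    return new

# Two-dimensional demo on a toy objective (sphere function).
rng = np.random.default_rng(0)
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x = rng.uniform(lo, hi, size=(20, 2))
pbest = x.copy()
gbest = pbest[(pbest ** 2).sum(axis=1).argmin()].copy()
for _ in range(100):
    x = sso_step(x, pbest, gbest, lo, hi, rng=rng)
    improved = (x ** 2).sum(axis=1) < (pbest ** 2).sum(axis=1)
    pbest[improved] = x[improved]
    gbest = pbest[(pbest ** 2).sum(axis=1).argmin()].copy()
print(gbest)   # should approach the optimum at the origin
```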

  16. A computational approach to chemical etiologies of diabetes

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe

    2013-01-01

    Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic...... linkages in type 2 diabetes (T2D) through genome-wide associations, disease similarities, and published empirical evidence. Ten environmental chemicals were found to be potentially linked to T2D, the highest scores were observed for arsenic, 2,3,7,8-tetrachlorodibenzo-p-dioxin, hexachlorobenzene...

  17. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    Science.gov (United States)

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database and can be retrieved later for display of patients' problems or analysis of clinical data.

  18. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  19. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb -1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  20. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first {approx} 50-500 pb{sup -1} of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  1. Computer based approach to fatigue analysis and design

    International Nuclear Information System (INIS)

    Comstock, T.R.; Bernard, T.; Nieb, J.

    1979-01-01

    An approach is presented which uses a mini-computer based system for data acquisition, analysis and graphic displays relative to fatigue life estimation and design. Procedures are developed for identifying and eliminating damaging events due to the overall duty cycle, forced vibration and structural dynamic characteristics. Two case histories, weld failures in heavy vehicles and low cycle fan blade failures, are discussed to illustrate the overall approach. (orig.)

  2. Detached eddy simulation of cyclic large scale fluctuations in a simplified engine setup

    International Nuclear Information System (INIS)

    Hasse, Christian; Sohm, Volker; Durst, Bodo

    2009-01-01

    Computational Fluid Dynamics using RANS-based modelling approaches have become an important tool in the internal combustion engine development and optimization process. However, these models cannot resolve cycle to cycle variations, which are an important aspect in the design of new combustion systems. In this study the feasibility of using a Detached Eddy Simulation (DES) SST model, which is a hybrid RANS/LES model, to predict cycle to cycle variations is investigated. In the near wall region or in regions where the grid resolution is not sufficiently fine to resolve smaller structures, the two-equation RANS SST model is used. In the other regions with higher grid resolution an LES model is applied. The case considered is a geometrically simplified engine, for which detailed experimental data for the ensemble averaged and single cycle velocity field are available from Boree et al. [Boree, J., Maurel, S., Bazile, R., 2002. Disruption of a compressed vortex, Physics of Fluids 14 (7), 2543-2556]. The fluid flow shows a strong tumbling motion, which is a major characteristic for modern turbo-charged, direct-injection gasoline engines. The general flow structure is analyzed first and the extent of the LES region and the amount of resolved fluctuations are discussed. Multiple consecutive cycles are computed and turbulent statistics of DES SST, URANS and the measured velocity field are compared for different piston positions. Cycle to cycle variations of the velocity field are analyzed for both computation and experiment with a special emphasis on the useability of the DES SST model to predict cyclic variations

  3. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    Directory of Open Access Journals (Sweden)

    G. P. Berman

    2003-01-01

    The dynamics of a nuclear-spin quantum computer with a large number (L = 1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in the implementation of entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to different orders of the perturbation theory and tested against an exact numerical solution.

  4. A simplified technique for shakedown limit load determination

    International Nuclear Information System (INIS)

    Abdalla, Hany F.; Megahed, Mohammad M.; Younan, Maher Y.A.

    2007-01-01

    In this paper, a simplified technique is presented to determine the shakedown limit load of a structure using the finite element method. The simplified technique determines the shakedown limit load without performing lengthy, time-consuming full elastic-plastic cyclic loading simulations or conventional iterative elastic techniques. Instead, the shakedown limit load is determined by performing two analyses, namely an elastic analysis and an elastic-plastic analysis. By extracting the results of the two analyses, the shakedown limit load is determined through the calculation of the residual stresses developed within the structure. The simplified technique is applied and verified using two benchmark shakedown problems, namely the two-bar structure subjected to constant axial force and cyclic thermal loading, and the Bree cylinder subjected to constant internal pressure and cyclic high temperature variation across its wall. The results of the simplified technique showed very good correlation with the analytically determined Bree diagrams of both structures. In order to gain confidence in the simplified technique, the shakedown limit loads output by the simplified technique are used to perform full elastic-plastic cyclic loading simulations to check for shakedown behavior of both structures.
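
    The logic of combining the two analyses can be paraphrased numerically: subtract the elastic stress field from the elastic-plastic field to obtain a residual stress field, then check, in the spirit of Melan's theorem, that the residual field superposed on the cyclic elastic field stays within yield everywhere. The sketch below performs that bookkeeping on invented one-dimensional stress arrays; it does not reproduce the finite element analyses or the load scaling of the paper.

```python
import numpy as np

yield_stress = 250.0                                    # assumed yield stress (MPa)
# Fictitious elastic stress fields at the two extremes of the load cycle.
sigma_el_max = np.array([180.0, 230.0, 245.0, 140.0])
sigma_el_min = np.array([-20.0, -40.0, -60.0, -10.0])
# Fictitious residual field: elastic-plastic stresses minus elastic stresses
# at the same load level (the quantity extracted from the two analyses).
sigma_res = np.array([0.0, -5.0, -15.0, 2.0])

# Shakedown check: residual field plus cyclic elastic field must stay within
# yield at every point and at both load extremes.
ok = (np.all(np.abs(sigma_el_max + sigma_res) <= yield_stress) and
      np.all(np.abs(sigma_el_min + sigma_res) <= yield_stress))
print("shakedown at this load level:", ok)
```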

  5. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  6. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on highperformance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
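
    The core linear-algebra pattern described here, forming a scenario's contribution to the Schur complement by solving one sparse system with many right-hand sides, can be written as S = C - B^T A^{-1} B. The toy matrices below are random stand-ins; the augmented incomplete factorization and the BiCGStab refinement used in PIPS/PARDISO are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 5                                             # scenario size, coupling size
A = (sp.random(n, n, density=0.02, random_state=0) + sp.eye(n) * n).tocsc()
B = sp.random(n, m, density=0.05, random_state=1).tocsc() # coupling block
C = np.diag(np.full(m, 10.0))                             # first-stage block

lu = spla.splu(A)                  # factor the scenario block once
X = lu.solve(B.toarray())          # solve A X = B for all right-hand sides at once
S = C - B.T @ X                    # this scenario's contribution to the Schur complement
print(S.shape)
```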

  7. Simplified Approach to Predicting Rough Surface Transition

    Science.gov (United States)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  8. Simplified Physics Based Models Research Topical Report on Task #2

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Ganesh, Priya

    2014-10-31

    We present a simplified-physics based approach, where only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formation. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir, and the pressure buildup within the caprock.

  9. S-Channel Dark Matter Simplified Models and Unitarity

    CERN Document Server

    Englert, Christoph; Spannowsky, Michael

    The ultraviolet structure of $s$-channel mediator dark matter simplified models at hadron colliders is considered. In terms of commonly studied $s$-channel mediator simplified models it is argued that at arbitrarily high energies the perturbative description of dark matter production in high energy scattering at hadron colliders will break down in a number of cases. This is analogous to the well documented breakdown of an EFT description of dark matter collider production. With this in mind, to diagnose whether or not the use of simplified models at the LHC is valid, perturbative unitarity of the scattering amplitude in the processes relevant to LHC dark matter searches is studied. The results are as one would expect: at the LHC and future proton colliders the simplified model descriptions of dark matter production are in general valid. As a result of the general discussion, a simple new class of previously unconsidered `Fermiophobic Scalar' simplified models is proposed, in which a scalar mediator couples to...

  10. Cloud Computing - A Unified Approach for Surveillance Issues

    Science.gov (United States)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location through networks. Cloud computing is gradually replacing the traditional Information Technology Infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal information or sensitive information is being stored in the organization. It is indeed true that today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  11. A simplified model of the source channel of the Leksell GammaKnife (registered) tested with PENELOPE

    International Nuclear Information System (INIS)

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-01-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife (registered). The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 deg. with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^(1/2) and their polar angle θ, on one side, and between tan^(-1)(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.

  12. A simplified model of the source channel of the Leksell GammaKnife (registered) tested with PENELOPE

    Energy Technology Data Exchange (ETDEWEB)

    Al-Dweri, Feras M O [Departamento de Física Moderna, Universidad de Granada, E-18071 Granada (Spain); Lallena, Antonio M [Departamento de Física Moderna, Universidad de Granada, E-18071 Granada (Spain); Vilches, Manuel [Servicio de Radiofísica, Hospital Clínico 'San Cecilio', Avda. Dr Oloriz, 16, E-18012 Granada (Spain)

    2004-06-21

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife (registered). The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 deg. with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^(1/2) and their polar angle θ, on one side, and between tan^(-1)(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.

  13. A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE

    Science.gov (United States)

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-06-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^(1/2) and their polar angle θ, on one side, and between tan^(-1)(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
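
    A quick calculation makes the 3° observation concrete: for an isotropic point source, which is an idealization of the Gamma Knife sources, the fraction of emitted photons falling inside a forward cone of half-angle 3° is (1 - cos 3°)/2, well below 0.1%, which is why so few primaries matter for the dosimetry.

```python
import numpy as np

# Fraction of an isotropic emission inside a forward cone of half-angle 3 degrees.
theta_max = np.radians(3.0)
fraction = (1.0 - np.cos(theta_max)) / 2.0
print(f"fraction of primaries within 3 deg of the axis: {fraction:.2e}")  # ~6.9e-4
```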

  14. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

    This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation for the transition momentum-thickness Reynolds number is eliminated for simplicity, and a new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low- and high-speed flow cases, including the zero-pressure-gradient flat plate, airfoils, a hypersonic flat plate and a double wedge. Comparisons between the simulation results and experimental data show that the boundary-layer transition phenomena can be reasonably reproduced by the new model, which gives rise to significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has comparable accuracy and applicability to the original γ-Reθt model. In the meantime, the newly proposed model uses only one transport equation, for the intermittency factor, and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  15. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    International Nuclear Information System (INIS)

    Grimme, Stefan; Bannwarth, Christoph

    2016-01-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first

  16. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    Energy Technology Data Exchange (ETDEWEB)

    Grimme, Stefan, E-mail: grimme@thch.uni-bonn.de; Bannwarth, Christoph [Mulliken Center for Theoretical Chemistry, Institut für Physikalische und Theoretische Chemie, Rheinische Friedrich-Wilhelms Universität Bonn, Beringstraße 4, 53115 Bonn (Germany)

    2016-08-07

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first

  17. Q-P Wave traveltime computation by an iterative approach

    KAUST Repository

    Ma, Xuxin; Alkhalifah, Tariq Ali

    2013-01-01

    In this work, we present a new approach to computing anisotropic traveltimes based on successively solving elliptical isotropic traveltimes. The method shows good accuracy and is very simple to implement.

  18. Promises and Pitfalls of Computer-Supported Mindfulness: Exploring a Situated Mobile Approach

    Directory of Open Access Journals (Sweden)

    Ralph Vacca

    2017-12-01

    Computer-supported mindfulness (CSM) is a burgeoning area filled with varied approaches such as mobile apps and EEG headbands. However, many of the approaches focus on providing meditation guidance. The ubiquity of mobile devices may provide new opportunities to support mindfulness practices that are more situated in everyday life. In this paper, a new situated mindfulness approach is explored through a specific mobile app design. Through an experimental design, the approach is compared to traditional audio-based mindfulness meditation, and a mind wandering control, over a one-week period. The study demonstrates the viability of a situated mobile mindfulness approach to induce mindfulness states. However, phenomenological aspects of the situated mobile approach suggest both promises and pitfalls for computer-supported mindfulness using a situated approach.

  19. Simplified method for elastic plastic analysis of material presenting bilinear kinematic hardening

    International Nuclear Information System (INIS)

    Roche, R.

    1983-12-01

    A simplified method for elastic-plastic analysis is presented. Material behavior is assumed to be elastic-plastic with bilinear kinematic hardening. The proposed method gives a stress-strain field fulfilling the material constitutive equations, the equations of equilibrium and the continuity conditions. This stress-strain field is obtained through two linear computations. The first one is the conventional elastic analysis of the body submitted to the applied load. The second one uses the tangent matrix (tangent Young's modulus and Poisson's ratio) for the determination of an additional stress due to an imposed initial strain. Such a method suits finite element computer codes, the most useful result being the plastic strains resulting from the applied loading (load control or deformation control). Obviously, there is no unique solution, since the stress-strain field depends not only on the applied load but also on the load history. Therefore, less pessimistic solutions can be obtained by one or two additional linear computations.
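
    As background on the material law the method assumes, the sketch below integrates a one-dimensional bilinear kinematic-hardening response with a standard return-mapping update (elastic modulus E, hardening modulus H, yield stress sigma_y, all values invented). It illustrates the constitutive behaviour only, not the two-linear-computation procedure of the report.

```python
import numpy as np

def bilinear_kinematic(strain_history, E=200e3, H=20e3, sigma_y=250.0):
    """1D return mapping for bilinear kinematic hardening (stresses in MPa)."""
    alpha, eps_p = 0.0, 0.0          # back stress, plastic strain
    stresses = []
    for eps in strain_history:
        sigma_trial = E * (eps - eps_p)
        f = abs(sigma_trial - alpha) - sigma_y        # yield function
        if f > 0.0:                                    # plastic correction
            dgamma = f / (E + H)
            sign = np.sign(sigma_trial - alpha)
            eps_p += dgamma * sign
            alpha += H * dgamma * sign
            sigma = E * (eps - eps_p)
        else:
            sigma = sigma_trial
        stresses.append(sigma)
    return np.array(stresses)

# Load, then unload and reverse the strain to exhibit the kinematic (Bauschinger) effect.
strains = np.concatenate([np.linspace(0, 0.004, 50), np.linspace(0.004, -0.004, 100)])
print(bilinear_kinematic(strains)[-1])
```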

  20. Recommended programming practices to facilitate the portability of science computer programs

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    This standard recommends programming practices to facilitate the portability of computer programs prepared for scientific and engineering computations. These practices are intended to simplify implementation, conversion, and modification of computer programs

  1. Endodontics Simplified

    OpenAIRE

    Kansal, Rohit; Talwar, Sangeeta; Yadav, Seema; Chaudhary, Sarika; Nawal, Ruchika

    2014-01-01

    The preparation of the root canal system is essential for a successful outcome in root canal treatment. The development of rotary nickel-titanium instruments is considered to be an important innovation in the field of endodontics. During the last few years, several new instrument systems have been introduced, but the quest for simplifying the endodontic instrumentation sequence has been ongoing for almost 20 years, resulting in more than 70 different engine-driven endodontic instrumentation system...

  2. Interacting electrons theory and computational approaches

    CERN Document Server

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  3. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...

  4. Digging deeper on "deep" learning: A computational ecology approach.

    Science.gov (United States)

    Buscema, Massimo; Sacco, Pier Luigi

    2017-01-01

    We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.

  5. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    Directory of Open Access Journals (Sweden)

    Perrin H. Beatty

    2016-10-01

    A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.
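
    As a toy illustration of the flux balance analysis mentioned above, the sketch below maximizes a "biomass" flux subject to the steady-state mass balance S v = 0 and flux bounds on a made-up three-reaction network; the stoichiometry is invented and has nothing to do with plant nitrogen metabolism.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (R1) -> metabolite A, conversion (R2) A -> B, biomass drain (R3).
S = np.array([[1, -1,  0],     # metabolite A balance
              [0,  1, -1]])    # metabolite B balance
bounds = [(0, 10), (0, 8), (0, None)]   # flux bounds (uptake capped at 10, R2 at 8)

# Maximize the biomass flux v3 subject to S v = 0 (linprog minimizes, so negate).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)          # expected: [8, 8, 8]
```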

  6. A Simplified Analytical Technique for High Frequency Characterization of Resonant Tunneling Diode

    Directory of Open Access Journals (Sweden)

    DESSOUKI, A. A. S.

    2014-11-01

    This paper proposes a simplified analytical technique for high frequency characterization of the resonant tunneling diode (RTD). An equivalent circuit of the RTD, consisting of a parallel combination of a conductance, G(V, f), and a capacitance, C(V, f), is formulated. The proposed approach uses the measured DC current versus voltage characteristic of the RTD to extract the equivalent circuit element parameters over the entire bias range. Using the proposed analytical technique, the frequency response - including the high frequency range - of many characteristic aspects of the RTD is investigated. Also, the maximum oscillation frequency of the RTD is calculated. The results obtained have been compared with those reported in the literature. The reported results in the literature were obtained by simulating the RTD at high frequency using either a computationally complicated quantum simulator or through difficult RF measurements. A similar pattern of results and a highly concordant conclusion are obtained. The proposed analytical technique is simple, correct, and appropriate for investigating the behavior of the RTD at high frequency. In addition, the proposed technique can be easily incorporated into SPICE programs to simulate circuits containing RTDs.
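
    The central extraction step, obtaining the small-signal conductance from the measured DC I-V curve, can be mimicked numerically as G(V) = dI/dV. The curve below is synthetic (a resonant peak over a weak background, not real RTD data), and the negative values of G mark the negative differential resistance region.

```python
import numpy as np

# Synthetic RTD-like I-V curve: a resonant current peak plus a weak background.
V = np.linspace(0.0, 0.8, 401)
I = 5e-3 * np.exp(-((V - 0.30) / 0.05) ** 2) + 1e-3 * V

# Differential conductance G(V) = dI/dV by finite differences; it becomes
# negative in the negative differential resistance region after the peak.
G = np.gradient(I, V)
print("minimum conductance (S):", G.min())
```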

  7. Simplified approach to MR image quantification of the rheumatoid wrist: a pilot study

    International Nuclear Information System (INIS)

    Kamishima, Tamotsu; Terae, Satoshi; Shirato, Hiroki; Tanimura, Kazuhide; Aoki, Yuko; Shimizu, Masato; Matsuhashi, Megumi; Fukae, Jun; Kosaka, Naoki; Kon, Yujiro

    2011-01-01

    To determine an optimal threshold in a simplified 3D-based volumetry of abnormal signals in rheumatoid wrists utilizing contrast and non-contrast MR data, and investigate the feasibility and reliability of this method. MR images of bilateral hands of 15 active rheumatoid patients were assessed before and 5 months after the initiation of tocilizumab infusion protocol. The volumes of abnormal signals were measured on STIR and post-contrast fat-suppressed T1-weighted images. Three-dimensional volume rendering of the images was used for segmentation of the wrist by an MR technologist and a radiologist. Volumetric data were obtained with variable thresholding (1, 1.25, 1.5, 1.75, and 2 times the muscle signal), and were compared to clinical data and semiquantitative MR scoring (RAMRIS) of the wrist. Intra- and interobserver variability and time needed for volumetry measurements were assessed. The volumetric data correlated favorably with clinical parameters almost throughout the pre-determined thresholds. Interval differences in volumetric data correlated favorably with those of RAMRIS when the threshold was set at more than 1.5 times the muscle signal. The repeatability index was lower than the average of the interval differences in volumetric data when the threshold was set at 1.5-1.75 for STIR data. Intra- and interobserver variability for volumetry was 0.79-0.84. The time required for volumetry was shorter than that for RAMRIS. These results suggest that a simplified MR volumetric data acquisition may provide gross estimates of disease activity when the threshold is set properly. Such estimation can be achieved quickly by non-imaging specialists and without contrast administration. (orig.)

  8. Simplified approach to MR image quantification of the rheumatoid wrist: a pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Kamishima, Tamotsu; Terae, Satoshi; Shirato, Hiroki [Hokkaido University Hospital, Department of Radiology, Sapporo City (Japan); Tanimura, Kazuhide; Aoki, Yuko; Shimizu, Masato; Matsuhashi, Megumi; Fukae, Jun [Hokkaido Medical Center for Rheumatic Diseases, Sapporo City, Hokkaido (Japan); Kosaka, Naoki [Tokeidai Memorial Hospital, Sapporo City, Hokkaido (Japan); Kon, Yujiro [St. Thomas' Hospital, Lupus Research Unit, The Rayne Institute, London (United Kingdom)

    2011-01-15

    To determine an optimal threshold in a simplified 3D-based volumetry of abnormal signals in rheumatoid wrists utilizing contrast and non-contrast MR data, and investigate the feasibility and reliability of this method. MR images of bilateral hands of 15 active rheumatoid patients were assessed before and 5 months after the initiation of tocilizumab infusion protocol. The volumes of abnormal signals were measured on STIR and post-contrast fat-suppressed T1-weighted images. Three-dimensional volume rendering of the images was used for segmentation of the wrist by an MR technologist and a radiologist. Volumetric data were obtained with variable thresholding (1, 1.25, 1.5, 1.75, and 2 times the muscle signal), and were compared to clinical data and semiquantitative MR scoring (RAMRIS) of the wrist. Intra- and interobserver variability and time needed for volumetry measurements were assessed. The volumetric data correlated favorably with clinical parameters almost throughout the pre-determined thresholds. Interval differences in volumetric data correlated favorably with those of RAMRIS when the threshold was set at more than 1.5 times the muscle signal. The repeatability index was lower than the average of the interval differences in volumetric data when the threshold was set at 1.5-1.75 for STIR data. Intra- and interobserver variability for volumetry was 0.79-0.84. The time required for volumetry was shorter than that for RAMRIS. These results suggest that a simplified MR volumetric data acquisition may provide gross estimates of disease activity when the threshold is set properly. Such estimation can be achieved quickly by non-imaging specialists and without contrast administration. (orig.)
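
    In essence the volumetry amounts to counting voxels whose signal exceeds a chosen multiple of the mean muscle signal and multiplying by the voxel volume. The toy version below uses a synthetic volume and an assumed voxel size; only the threshold multipliers are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(100.0, 20.0, size=(64, 64, 32))   # synthetic STIR-like volume
image[20:30, 20:30, 10:15] += 150.0                   # synthetic "abnormal signal" region

muscle_mean = 100.0                # mean signal in a muscle reference ROI (assumed)
voxel_volume_mm3 = 0.5 * 0.5 * 2.0  # assumed voxel size in mm

for k in (1.0, 1.25, 1.5, 1.75, 2.0):                 # threshold multipliers from the abstract
    n_voxels = int(np.count_nonzero(image > k * muscle_mean))
    print(f"threshold {k}: {n_voxels * voxel_volume_mm3:.0f} mm^3")
```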

  9. Technology Solutions Case Study: Combustion Safety Simplified Test Protocol

    Energy Technology Data Exchange (ETDEWEB)

    L. Brand, D. Cautley, D. Bohac, P. Francisco, L. Shen, and S. Gloss

    2015-12-01

    Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and provide too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if not included in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives.

  10. A non overlapping parallel domain decomposition method applied to the simplified transport equations

    International Nuclear Information System (INIS)

    Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.

    2009-01-01

    A reactivity computation requires computing the dominant eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine modelizations are difficult for our sequential solver, based on the simplified transport equations, to tackle in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires little development effort, as the inner multigroup solver can be reused without modification, and allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained with a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
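
    The outer loop referred to here is, in essence, a power iteration in which a linear system involving the loss operator is solved at every step. A dense toy version, with random matrices standing in for the discretized simplified-transport operators, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = np.diag(np.full(n, 4.0)) + 0.1 * rng.random((n, n))   # toy "loss" operator
F = 0.2 * rng.random((n, n))                               # toy "production" operator

phi = np.ones(n)
phi /= np.linalg.norm(phi)
for _ in range(200):
    psi = np.linalg.solve(M, F @ phi)   # linear solve at each inverse power iteration
    k = np.linalg.norm(psi)             # eigenvalue estimate (toy analogue of k-eff)
    phi = psi / k
print("dominant eigenvalue estimate:", k)
```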

  11. CFD modelling approaches against single wind turbine wake measurements using RANS

    International Nuclear Information System (INIS)

    Stergiannis, N; Lacor, C; Beeck, J V; Donnelly, R

    2016-01-01

    Numerical simulations of two wind turbine generators including the exact geometry of their blades and hub are compared against a simplified actuator disk model (ADM). The wake expansion of the upstream rotor is investigated and compared with measurements. Computational Fluid Dynamics (CFD) simulations have been performed using the open-source platform OpenFOAM [1]. The multiple reference frame (MRF) approach was used to model the inner rotating reference frames in a stationary computational mesh and outer reference frame for the full wind turbine rotor simulations. The standard k-ε and k-ω turbulence closure schemes have been used to solve the steady state, three dimensional Reynolds Averaged Navier-Stokes (RANS) equations. Results of near and far wake regions are compared with wind tunnel measurements along three horizontal lines downstream. The ADM under-predicted the velocity deficit at the wake for both turbulence models. Full wind turbine rotor simulations showed good agreement against the experimental data at the near wake, amplifying the differences between the simplified models. (paper)

  12. Robotic nephroureterectomy: a simplified approach requiring no patient repositioning or robot redocking.

    Science.gov (United States)

    Zargar, Homayoun; Krishnan, Jayram; Autorino, Riccardo; Akca, Oktay; Brandao, Luis Felipe; Laydner, Humberto; Samarasekera, Dinesh; Ko, Oliver; Haber, Georges-Pascal; Kaouk, Jihad H; Stein, Robert J

    2014-10-01

    Robotic technology is increasingly adopted in urologic surgery and a variety of techniques has been described for minimally invasive treatment of upper tract urothelial cancer (UTUC). To describe a simplified surgical technique of robot-assisted nephroureterectomy (RANU) and to report our single-center surgical outcomes. Patients with history of UTUC treated with this modality between April 2010 and August 2013 were included in the analysis. Institutional review board approval was obtained. Informed consent was signed by all patients. A simplified single-step RANU not requiring repositioning or robot redocking. Lymph node dissection was performed selectively. Descriptive analysis of patients' characteristics, perioperative outcomes, histopathology, and short-term follow-up data was performed. The analysis included 31 patients (mean age: 72.4±10.6 yr; mean body mass index: 26.6±5.1kg/m(2)). Twenty-six of 30 tumors (86%) were high grade. Mean tumor size was 3.1±1.8cm. Of the 31 patients, 13 (42%) had pT3 stage disease. One periureteric positive margin was noted in a patient with bulky T3 disease. The mean number of lymph nodes removed was 9.4 (standard deviation: 5.6; range: 3-21). Two of 14 patients (14%) had positive lymph nodes on final histology. No patients required a blood transfusion. Six patients experienced complications postoperatively, with only one being a high grade (Clavien 3b) complication. Median hospital stay was 5 d. Within the follow-up period, seven patients experienced bladder recurrences and four patients developed metastatic disease. Our RANU technique eliminates the need for patient repositioning or robot redocking. This technique can be safely reproduced, with surgical outcomes comparable to other established techniques. We describe a surgical technique using the da Vinci robot for a minimally invasive treatment of patients presenting with upper tract urothelial cancer. This technique can be safely implemented with good surgical outcomes

  13. Computational fluid dynamic applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to simplify and calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.

  14. Simplified fuel cycle tritium inventory model for systems studies -- An illustrative example with an optimized cryopump exhaust system

    International Nuclear Information System (INIS)

    Kuan, W.; Ho, S.K.

    1995-01-01

    It is desirable to incorporate safety constraints due to fuel cycle tritium inventories into tokamak reactor design optimization. An optimal scenario to minimize tritium inventories without much degradation of plasma performance can be defined for each tritium processing component. In this work, the computer code TRUFFLES is used exclusively to obtain numerical data for a simplified model to be used for systems studies. As an illustration, the cryopump plasma exhaust subsystem is examined in detail for optimization purposes. This optimization procedure will then be used to further reduce its window of operation and provide constraints on the data used for the simplified tritium inventory model

  15. A simplified geometrical model for transient corium propagation in core for LWR with heavy reflector - 15271

    International Nuclear Information System (INIS)

    Saas, L.; Le Tellier, R.; Bajard, S.

    2015-01-01

    In this document, we present a simplified geometrical model (0D model) for both the in-core corium propagation transient and the characterization of the mode of corium transfer from the core to the vessel. A degraded core with a formed corium pool is used as the initial state. This initial state can be obtained from a simulation computed with an integral code. This model does not use a grid for the core as integral codes do. Geometrical shapes and 0D models are associated with the corium pool and the other components of the degraded core (debris, heavy reflector, core plate...). During the transient, these shapes evolve, taking into account the thermal and stratification behavior of the corium pool and the melting of the surrounding core components. Some results corresponding to in-core corium pool propagation transients obtained with this model for a LWR with a heavy reflector are given and compared to the grid approach of the integral code MAAP4.

  16. Simplified analysis of trasients in pool type liquid metal reactors

    International Nuclear Information System (INIS)

    Botelho, D.A.

    1987-01-01

    The conceptual design of a liquid metal fast breeder reactor will require a great effort of development in several technical disciplines. One of them is the thermal-hydraulic design of the reactor and of the heat and fluid transport components inside the reactor vessel. A simplified model to calculate the maximum sodium temperatures is presented in this paper. This model can be used to optimize the layout of components inside the reactor vessel and was easily programmed in a small computer. Illustrative calculations of two transients of a typical hot pool type fast reactor are presented and compared with the results of other researchers. (author) [pt

  17. A simplified model of choice behavior under uncertainty

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lin

    2016-08-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that the Ahn et al. (2008) PU model is not optimal, due to some incompatible results between our behavioral and modeling data. This study aims to modify the Ahn et al. (2008) PU model into a simplified model; we collected 145 subjects' IGT performance as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the power of influence of the parameters α, λ, and A has a hierarchical order in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted a gain-stay/loss-shift strategy rather than foreseeing the long-term outcome. However, there are still other behavioral variables that are not well revealed under these dynamic uncertainty situations. Therefore, the optimal behavioral models may not have been found. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated.
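
    For orientation, the prospect-utility valuation used in this model family typically takes the form u(x) = x^alpha for gains and u(x) = -lambda * (-x)^alpha for losses. The sketch below evaluates it with arbitrarily chosen parameters and shows why a vanishing alpha leaves only the sign of the outcome, consistent with a gain-stay/loss-shift pattern.

```python
import numpy as np

def prospect_utility(x, alpha=0.5, lam=1.5):
    """Prospect-style utility: concave in gains, loss-averse in losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

# Net outcomes (gain minus loss) of a few hypothetical IGT card draws.
outcomes = np.array([100.0, -1150.0, 50.0, -250.0])
print(prospect_utility(outcomes))
# As alpha approaches 0, |x|**alpha tends to 1 for any nonzero x, so the utility
# saturates at +1 for gains and -lam for losses: only the sign of the outcome matters.
print(prospect_utility(outcomes, alpha=0.01))
```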

  18. A survey on computational intelligence approaches for predictive modeling in prostate cancer

    OpenAIRE

    Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG

    2017-01-01

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an...

  19. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    Science.gov (United States)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm. This hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD reduces the overall computational load. A reliable condition for switching the schemes was established after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
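
    The sketch below illustrates the flavor of the approach: a cheap line-averaging interpolator plus a toy region classifier that routes blocks to LA, MELA or CAD. The variance-based thresholds are invented stand-ins; the paper's actual MSE and CPU-time criteria are not reproduced here.

      import numpy as np

      def line_average(field):
          """Fill the missing (odd) rows of an interlaced field by averaging the
          rows above and below -- the LA method used for plain regions."""
          h, w = field.shape
          frame = field.copy()
          for y in range(1, h - 1, 2):
              frame[y] = 0.5 * (field[y - 1] + field[y + 1])
          return frame

      def select_method(block, low=10.0, high=100.0):
          """Toy stand-in for CASA's region classification: route blocks to a
          cheap, medium, or expensive interpolator by local activity.  The real
          criteria (MSE and CPU time) from the paper are not reproduced here."""
          activity = float(np.var(block))
          if activity < low:
              return "LA"
          if activity < high:
              return "MELA"
          return "CAD"

      # Illustration on a synthetic 8x8 field.
      field = np.tile(np.arange(8, dtype=float), (8, 1))
      print(select_method(field), line_average(field)[1, :3])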

  20. New approach for simplified and automated measurement of left ventricular ejection fraction by ECG gated blood pool scintigraphy

    Energy Technology Data Exchange (ETDEWEB)

    Inagaki, Suetsugu; Adachi, Haruhiko; Sugihara, Hiroki; Katsume, Hiroshi; Ijichi, Hamao; Okamoto, Kunio; Hosoba, Minoru

    1984-12-01

    Background (BKG) correction is important but debatable in the measurement of left ventricular ejection fraction (LVEF) with ECG-gated blood pool scintigraphy. We devised a new simplified BKG processing method (fixed BKG method) that requires no BKG region-of-interest (ROI) assignment, and its accuracy and reproducibility were assessed in 25 patients with various heart diseases and 5 normal subjects by comparison with the LVEF obtained by contrast left ventriculography (LVG-EF). Four additional protocols for LVEF measurement with BKG-ROI assignment were also assessed for reference. LVEF calculated using a fixed BKG ratio of 0.64 (BKG count rates taken as 64% of the end-diastolic count rates of the LV) with a fixed LV-ROI correlated best with LVG-EF (r = 0.936, p < 0.001) and approximated it most closely (fixed BKG ratio method EF: 61.1 ± 20.1%, LVG-EF: 61.2 ± 20.4%, mean ± SD) among the protocols. The general applicability of the fixed value of 0.64 was tested against various diseases, body sizes and end-diastolic volumes by LVG, and the results were found to be little influenced by them. Furthermore, the fixed BKG method produced lower inter- and intra-observer variability than the other protocols requiring BKG-ROI assignment, probably owing to its simplified processing. In conclusion, the fixed BKG ratio method simplifies the measurement of LVEF and is feasible for automated processing and a single-probe system.
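
    A minimal sketch of how the fixed-ratio background correction enters the usual count-based ejection fraction formula is given below; the count rates are invented for illustration and the formula is a standard reading of the method, not taken verbatim from the paper.

      def lvef_fixed_bkg(ed_counts, es_counts, bkg_ratio=0.64):
          """LVEF with a fixed-background correction as sketched from the abstract:
          background counts are taken as a fixed fraction (0.64) of the
          end-diastolic counts instead of using a background ROI.
              EF = (ED - ES) / (ED - BKG),  BKG = bkg_ratio * ED
          """
          bkg = bkg_ratio * ed_counts
          return (ed_counts - es_counts) / (ed_counts - bkg)

      # Illustrative count rates (arbitrary units, not patient data).
      print(f"LVEF = {100 * lvef_fixed_bkg(10_000, 7_800):.1f}%")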

  1. 48 CFR 1552.232-74 - Payments-simplified acquisition procedures financing.

    Science.gov (United States)

    2010-10-01

    ... acquisition procedures financing. 1552.232-74 Section 1552.232-74 Federal Acquisition Regulations System... Provisions and Clauses 1552.232-74 Payments—simplified acquisition procedures financing. As prescribed in... acquisition procedures financing. Payments—Simplified Acquisition Procedures Financing (JUN 2006) Simplified...

  2. International piping benchmarks: use of the simplified code PACE 2. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, J T; Spence, J [University of Strathclyde (United Kingdom); Blundell, C [Risley Nuclear Power Development Establishment, Central Technical Services, Risley, Warrington (United Kingdom); ed.

    1979-05-15

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost.

  3. Energy-aware memory management for embedded multimedia systems: a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  4. A Cognitive Computing Approach for Classification of Complaints in the Insurance Industry

    Science.gov (United States)

    Forster, J.; Entrup, B.

    2017-10-01

    In this paper we present and evaluate a cognitive computing approach for the classification of dissatisfaction and four complaint-specific classes in correspondence documents between insurance clients and an insurance company. The cognitive computing approach combines classical natural language processing methods, machine learning algorithms and the evaluation of hypotheses. It couples a MaxEnt machine learning algorithm with language modelling, tf-idf and sentiment analytics to create a multi-label text classification model. The resulting model is trained and tested on a set of 2500 original insurance communication documents written in German, which were manually annotated by the partnering insurance company. With an F1-score of 0.9, a reliable text classification component has been implemented and evaluated. A final outlook towards a cognitive computing insurance assistant is given at the end.
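
    A toy version of the described pipeline (tf-idf features feeding a maximum-entropy, i.e. logistic-regression, multi-label classifier) can be sketched with scikit-learn as below; the documents and labels are invented English examples, not the annotated German corpus used in the paper.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MultiLabelBinarizer

      # Invented example documents and labels -- not the insurer's corpus.
      docs = [
          "I am very unhappy with the delayed payout of my claim",
          "Thank you for the quick and friendly handling of my request",
          "The premium increase was never explained and I want to cancel",
          "Please send me a copy of my policy documents",
      ]
      labels = [["dissatisfaction", "claims"], [], ["dissatisfaction", "pricing"], []]

      binarizer = MultiLabelBinarizer()
      y = binarizer.fit_transform(labels)

      # tf-idf features feeding a one-vs-rest logistic regression (MaxEnt) model.
      model = make_pipeline(
          TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
          OneVsRestClassifier(LogisticRegression(max_iter=1000)),
      )
      model.fit(docs, y)

      # Prediction on such tiny toy data is purely illustrative.
      pred = model.predict(["my claim payment is still missing after months"])
      print(binarizer.inverse_transform(pred))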

  5. Computer-Aided Manufacturing of 3D Workpieces

    OpenAIRE

    Cornelia Victoria Anghel Drugarin; Mihaela Dorica Stroia

    2017-01-01

    Computer-Aided Manufacturing (CAM) involves the use of dedicated software for controlling machine tools and similar devices in the process of workpiece manufacturing. CAM is, in fact, an application technology that uses computer software and machinery to simplify and automate manufacturing processes. CAM is the successor of computer-aided engineering (CAE) and is often used in conjunction with computer-aided design (CAD). Advanced CAM solutions are forthcoming and have a large ...

  6. Conservatism inherent to simplified qualification techniques used for piping steady state vibration

    International Nuclear Information System (INIS)

    Olson, D.E.; Smetters, J.L.

    1983-01-01

    This paper examines some of the qualification techniques currently used by the power industry, including the techniques specified in a recently issued standard related to this subject (ANSI/ASME OM-3, Requirements for Preoperational and Initial Startup Vibration Testing of Nuclear Power Plant Piping Systems). Several methods are used to demonstrate the amount of conservatism inherent in these techniques. Allowable limits calculated by the use of simplified techniques are compared to limits calculated by more detailed computer analysis. A portion of a reactor feedwater piping system along with the results of a piping vibration monitoring program recently completed in a nuclear power plant are used as case studies. The limits determined by the use of simplified criteria are also compared to limits determined empirically through the use of strain gauges. The simple beam analogies that use vibrational displacement as acceptance criteria were found to be conservative for all the examples studied. However, when velocity was used as a criterion, it was not always conservative. Simplified techniques that result in displacement allowables appear to be the most viable method of qualifying piping vibrations. Quantities referred to in the paper are cited in British units throughout. These may be converted to the International System of Units (SI) as follows: 1 foot=0.3048 meter; 1 inch=0.0254 meter=1,000 mils; 1 psi=6,894 pascals; and 1 inch/second=0.0254 meter/second. (orig.)

  7. Simplified non-linear time-history analysis based on the Theory of Plasticity

    DEFF Research Database (Denmark)

    Costa, Joao Domingues

    2005-01-01

    This paper aims to contribute to the development of simplified non-linear time-history (NLTH) analysis of structures whose dynamical response is mainly governed by plastic deformations, able to provide designers with sufficiently accurate results. The method to be presented...... is based on the Theory of Plasticity. Firstly, the formulation and the computational procedure to perform time-history analysis of a rigid-plastic single degree of freedom (SDOF) system are presented. The necessary conditions for the method to incorporate pinching as well as strength degradation......

  8. 3D Bearing Capacity of Structured Cells Supported on Cohesive Soil: Simplified Analysis Method

    Directory of Open Access Journals (Sweden)

    Martínez-Galván Sergio Antonio

    2013-06-01

    Full Text Available In this paper a simplified analysis method is proposed to compute the bearing capacity of structured cell foundations subjected to vertical loading and supported in soft cohesive soil. A structured cell comprises a top concrete slab structurally connected to concrete external walls that enclose the natural soil. Contrary to a box foundation, it does not include a bottom slab and hence the soil within the walls becomes an important component of the structured cell. The simplified method considers the three-dimensional geometry of the cell, the undrained shear strength of cohesive soils and the structural continuity between the top concrete slab and the surrounding walls, along the walls themselves and at the walls' structural joints. The method was developed from results of numerical-parametric analyses, from which it was found that structured cells fail according to a punching-type mechanism.

  9. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Full Text Available Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. These resources were exploited to answer word similarity tests, which also recently became available for Portuguese. We conclude that there are several valid approaches for this task, but no single one outperforms all the others in every test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined.
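
    For the distributional side of such comparisons, relatedness between two words is typically scored as the cosine similarity of their vectors, as in the sketch below; the embeddings shown are invented, low-dimensional placeholders rather than vectors from the models evaluated in the paper.

      import numpy as np

      def cosine_similarity(u, v):
          """Cosine of the angle between two word vectors; the usual
          relatedness score for distributional models."""
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      # Hypothetical 4-dimensional embeddings for three Portuguese words.
      vectors = {
          "carro":     np.array([0.9, 0.1, 0.3, 0.0]),
          "automóvel": np.array([0.8, 0.2, 0.4, 0.1]),
          "banana":    np.array([0.0, 0.7, 0.1, 0.9]),
      }
      print(cosine_similarity(vectors["carro"], vectors["automóvel"]))  # high
      print(cosine_similarity(vectors["carro"], vectors["banana"]))     # low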

  10. Simplified method evaluation for piping elastic follow-up

    International Nuclear Information System (INIS)

    Severud, L.K.

    1983-05-01

    A proposed simplified method for evaluating elastic follow-up effects in high temperature pipelines is presented. The method was evaluated by comparing the simplified analysis results with those obtained from detailed inelastic solutions. Nine different pipelines typical of a nuclear breeder reactor power plant were analyzed; the simplified method is attractive because it appears to give fairly accurate and conservative results. It is easy to apply and inexpensive since it employs iterative elastic solutions for the pipeline coupled with the readily available isochronous stress-strain data provided in the ASME Code

  11. The simplified convergence rate calculation for salt grit backfilled caverns in rock salt

    International Nuclear Information System (INIS)

    Navarro, Martin

    2013-03-01

    Within the research and development project 3609R03210 of the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, different methods were investigated that are used for the simplified calculation of convergence rates for mining cavities in rock salt backfilled with crushed salt. The work concentrates on the approach of Stelte and on further developments based on it, focusing on the physical background of the approaches. Model-specific limitations are discussed and possibilities for further development are pointed out. Furthermore, an alternative approach is presented, which implements independent material laws for the convergence of the mining cavity and the compaction of the crushed salt backfill.

  12. D-Wave's Approach to Quantum Computing: 1000-qubits and Counting!

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    In this talk I will describe D-Wave's approach to quantum computing, including the system architecture of our 1000-qubit D-Wave 2X, its programming model, and performance benchmarks. Furthermore, I will describe how the native optimization and sampling capabilities of the quantum processor can be exploited to tackle problems in a variety of fields including medicine, machine learning, physics, and computational finance.

  13. Continuous-variable geometric phase and its manipulation for quantum computation in a superconducting circuit.

    Science.gov (United States)

    Song, Chao; Zheng, Shi-Biao; Zhang, Pengfei; Xu, Kai; Zhang, Libo; Guo, Qiujiang; Liu, Wuxin; Xu, Da; Deng, Hui; Huang, Keqiang; Zheng, Dongning; Zhu, Xiaobo; Wang, H

    2017-10-20

    Geometric phase, associated with holonomy transformation in quantum state space, is an important quantum-mechanical effect. Besides fundamental interest, this effect has practical applications, among which geometric quantum computation is a paradigm, where quantum logic operations are realized through geometric phase manipulation that has some intrinsic noise-resilient advantages and may enable simplified implementation of multi-qubit gates compared to the dynamical approach. Here we report observation of a continuous-variable geometric phase and demonstrate a quantum gate protocol based on this phase in a superconducting circuit, where five qubits are controllably coupled to a resonator. Our geometric approach allows for one-step implementation of n-qubit controlled-phase gates, which represents a remarkable advantage compared to gate decomposition methods, where the number of required steps dramatically increases with n. Following this approach, we realize these gates with n up to 4, verifying the high efficiency of this geometric manipulation for quantum computation.
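
    For reference, the target unitary of an n-qubit controlled-phase gate is simply a diagonal matrix that phases the |11...1> state, as sketched below; this illustrates the gate the authors realize, not their pulse-level geometric protocol.

      import numpy as np

      def controlled_phase(n, phi):
          """Matrix of an n-qubit controlled-phase gate: the identity on all basis
          states except |11...1>, which acquires the phase exp(i * phi)."""
          dim = 2 ** n
          gate = np.eye(dim, dtype=complex)
          gate[-1, -1] = np.exp(1j * phi)
          return gate

      # A 3-qubit controlled-phase (CCZ for phi = pi), written as a single
      # diagonal unitary rather than a decomposition into two-qubit gates.
      print(np.diag(controlled_phase(3, np.pi)).real)  # [1 1 1 1 1 1 1 -1]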

  14. Generic simplified simulation model for DFIG with active crowbar

    Energy Technology Data Exchange (ETDEWEB)

    Buendia, Francisco Jimenez [Gamesa Innovation and Technology, Sarriguren, Navarra (Spain). Technology Dept.; Barrasa Gordo, Borja [Assystem Iberia, Bilbao, Vizcaya (Spain)

    2012-07-01

    Simplified models for transient stability studies are a general requirement from transmission system operators to wind turbine (WTG) manufacturers. Those models must represent the performance of the WTGs in transient stability studies, mainly voltage dips caused by short circuits in the electrical network. The models are implemented in simulation software such as PSS/E, DIgSILENT or PSLF, which allow simulation of transients in large electrical networks with thousands of buses, generators and loads. The high complexity of the grid requires that the models inserted into it be simplified so that the simulations execute as fast as possible. Developing a model that is simplified enough to be integrated into those complex grids and still represents the performance of a WTG is a challenge. The IEC TC88 working group has developed generic models for different types of generators, among others for WTGs using doubly fed induction generators (DFIG). This paper focuses on an extension of the DFIG WTG models developed in IEC in order to represent a simplified model of a DFIG with an active crowbar, which is required to withstand voltage dips without disconnecting from the grid. The paper improves the current generic Type 3 (DFIG) model by adding a simplified version of the generator including crowbar functionality and a simplified version of the crowbar firing. In addition, the simplified model is validated by correlation with voltage dip field tests from a real wind turbine. (orig.)

  15. Simplified Dark Matter Models

    OpenAIRE

    Morgante, Enrico

    2018-01-01

    I review the construction of Simplified Models for Dark Matter searches. After discussing the philosophy and some simple examples, I turn to the question of theoretical consistency and to the implications of the necessary extensions of these models.

  16. Hypersonic Vehicle Propulsion System Simplified Model Development

    Science.gov (United States)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MATLAB, ...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.

  17. Granular-relational data mining: how to mine relational data in the paradigm of granular computing?

    CERN Document Server

    Hońko, Piotr

    2017-01-01

    This book provides two general granular computing approaches to mining relational data, the first of which uses abstract descriptions of relational objects to build their granular representation, while the second extends existing granular data mining solutions to a relational case. Both approaches make it possible to perform and improve popular data mining tasks such as classification, clustering, and association discovery. How can different relational data mining tasks best be unified? How can the construction process of relational patterns be simplified? How can richer knowledge from relational data be discovered? All these questions can be answered in the same way: by mining relational data in the paradigm of granular computing! This book will allow readers with previous experience in the field of relational data mining to discover the many benefits of its granular perspective. In turn, those readers familiar with the paradigm of granular computing will find valuable insights on its application to mining r...

  18. 48 CFR 3032.003 - Simplified acquisition procedures financing.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Simplified acquisition procedures financing. 3032.003 Section 3032.003 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... FINANCING Scope of Part 3032.003 Simplified acquisition procedures financing. Contract financing may be...

  19. Simplifying and upscaling water resources systems models that combine natural and engineered components

    Science.gov (United States)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  20. The cost of simplifying air travel when modeling disease spread.

    Directory of Open Access Journals (Sweden)

    Justin Lessler

    Full Text Available BACKGROUND: Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. METHODOLOGY/PRINCIPAL FINDINGS: Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. CONCLUSIONS/SIGNIFICANCE: If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
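
    One plausible reading of the "pipe" model is sketched below: a traveller departing airport i is assigned destination j in proportion to j's share of total arrivals. The airport counts are invented and the sketch is only an interpretation, not the calibrated model from the paper.

      # Invented daily departure and arrival counts for three hypothetical airports.
      departures = {"A": 1000, "B": 300, "C": 50}
      arrivals   = {"A": 900,  "B": 400, "C": 50}

      total_arrivals = sum(arrivals.values())

      def expected_trips(origin, destination):
          """Expected daily trips origin -> destination under this pipe-model reading:
          departures from the origin are redistributed by each destination's
          share of total arrivals."""
          return departures[origin] * arrivals[destination] / total_arrivals

      for o in departures:
          for d in arrivals:
              if o != d:
                  print(f"{o} -> {d}: {expected_trips(o, d):.1f} trips/day")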

  1. The soft computing-based approach to investigate allergic diseases: a systematic review.

    Science.gov (United States)

    Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano

    2017-01-01

    Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques, such as artificial neural networks, support vector machines, Bayesian networks and fuzzy logic, and to investigate their performance in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered within the PROSPERO database (CRD42016038894). The research was performed on PubMed and ScienceDirect, covering the period from September 1, 1990 through April 19, 2016. The review included 27 studies related to allergic diseases and soft computing performance. We observed promising results, with an overall accuracy of 86.5%, mainly focused on asthmatic disease. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in the case of a lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.

  2. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k-eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should (1) be repeatable; (2) be demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with the principal second isotope U-236). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  3. A simplified Excel® algorithm for estimating the least limiting water range of soils

    Directory of Open Access Journals (Sweden)

    Leão Tairone Paiva

    2004-01-01

    Full Text Available The least limiting water range (LLWR) of soils has been employed as a methodological approach for the evaluation of soil physical quality in different agricultural systems, including forestry, grasslands and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered the popularization of its use among researchers and soil managers. Taking this into account, this work proposes and describes a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density at which the LLWR becomes zero. Despite the simplicity of the procedures and numerical optimization techniques used, the nonlinear regression produced reliable results when compared to those found in the literature.
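
    The LLWR itself is defined as the width of the interval between the most restrictive upper and lower water-content limits, which can be expressed compactly as below; the limits would normally be functions of bulk density fitted by the nonlinear regressions mentioned above, and the numbers shown are placeholders only.

      def llwr(theta_fc, theta_afp, theta_wp, theta_pr):
          """Least limiting water range: the interval between the most restrictive
          upper limit (field capacity or air-filled-porosity limit) and the most
          restrictive lower limit (wilting point or penetration-resistance limit).
          Returns zero when the limits cross, which defines the critical bulk density."""
          upper = min(theta_fc, theta_afp)
          lower = max(theta_wp, theta_pr)
          return max(0.0, upper - lower)

      # Placeholder volumetric water contents (m3/m3), not values from the paper.
      print(llwr(theta_fc=0.32, theta_afp=0.36, theta_wp=0.15, theta_pr=0.18))  # 0.14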

  4. Screening efficacy of a simplified logMAR chart

    Directory of Open Access Journals (Sweden)

    Naganathan Muthuramalingam

    2016-04-01

    Aim: This study evaluates the efficacy of a simplified logMAR chart, designed for VA testing, over the conventional Snellen chart in a school-based vision-screening programme. Methods: We designed a simplified logMAR chart by employing the principles of the Early Treatment Diabetic Retinopathy Study (ETDRS) chart in terms of logarithmic letter size progression, inter-letter spacing, and inter-line spacing. Once the simplified logMAR chart was validated by students in the Elite school vision-screening programme, we set out to test the chart in 88 primary and middle schools in the Tiruporur block of Kancheepuram district in Tamil Nadu. One school teacher in each school was trained to screen a cross-sectional population of 10 354 primary and secondary school children (girls: 5488; boys: 4866) for VA deficits using the new, simplified logMAR algorithm. An experienced paediatric optometrist was recruited to validate the screening methods and technique used by the teachers to collect the data. Results: The optometrist screened a subset of 1300 school children from the total sample and provided the professional insights needed to validate the clinical efficacy of the simplified logMAR algorithm and to verify the reliability of the data collected by the teachers. The mean age of children sampled for validation was 8.6 years (range: 9–14 years). The sensitivity and the specificity of the simplified logMAR chart when compared to the standard logMAR chart were found to be 95% and 98%, respectively, with a kappa value of 0.97. The sensitivity of the teachers' screening was 66.63% (95% confidence interval [CI]: 52.73–77.02) and the specificity was 98.33% (95% CI: 97.49–98.95). Testing of VA was done under substandard illumination levels in 87% of the population. A total of 10 354 children were screened, 425 of whom were found to have some form of visual and/or ocular defect that was identified by the teacher or optometrist. Conclusion: The simplified logMAR testing algorithm
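
    The screening statistics quoted above (sensitivity, specificity and kappa) follow from a standard 2x2 comparison against the reference examiner; a minimal sketch is given below with invented counts, not the study's actual table.

      def screening_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity and Cohen's kappa from a 2x2 screening table."""
          n = tp + fp + fn + tn
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          observed = (tp + tn) / n
          expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
          kappa = (observed - expected) / (1 - expected)
          return sensitivity, specificity, kappa

      # Invented counts for illustration only (not the study's 2x2 table).
      se, sp, k = screening_metrics(tp=40, fp=20, fn=20, tn=1220)
      print(f"sensitivity={se:.2%}  specificity={sp:.2%}  kappa={k:.2f}")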

  5. Development of simplified nuclear dry plate measuring system

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Y; Ohta, I [Utsunomiya Univ. (Japan). Faculty of Education; Tezuka, I; Tezuka, T; Makino, K

    1981-08-01

    A simplified nuclear dry plate measuring system was developed. The system consists of a microscope, an ITV camera, a monitor TV, an XY tracker and a micro-computer. The signals of the images of tracks in a nuclear dry plate are sent to the XY tracker and shown on the monitor TV. The XY tracker displays a pointer on the monitor TV and outputs the signal of its XY coordinates. This output signal is analyzed by the microcomputer. The software for the measuring process is composed of a program system written in BASIC and machine language. Data intake, the expansion of the measurement range and the output of analyzed data are controlled by the program. The accuracy of the coordinate measurement was studied and was about 0.39 micrometer over a 10 micrometer distance.

  6. Simplified discrete ordinates method in spherical geometry

    International Nuclear Information System (INIS)

    Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.

    1999-01-01

    The authors extend the method of simplified discrete ordinates (SS_N) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical-geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly, avoiding the need to approximate it. The method can be considered analytic in nature, with the advantage of being free from the spatial truncation errors from which most existing transport codes suffer. It also handles scattering in a very general manner, with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order S_N calculations.

  7. Simplified analysis of laterally loaded pile groups

    Directory of Open Access Journals (Sweden)

    F.M. Abdrabbo

    2012-06-01

    Full Text Available The response of laterally loaded pile groups is a complicated soil–structure interaction problem. Although fairly reliable methods have been developed to predict the lateral behavior of single piles, the lateral response of pile groups has attracted less attention because of the high cost and complexity involved. This study presents a simplified method to analyze laterally loaded pile groups. The proposed method implements p-multiplier factors in combination with the horizontal modulus of subgrade reaction, and shadowing effects in closely spaced piles within a group are taken into consideration. It is shown that laterally loaded piles embedded in sand can be analyzed within the working load range assuming a linear relationship between lateral load and lateral displacement. The proposed method estimates the distribution of lateral loads among piles in a pile group and predicts the safe design lateral load of a pile group. The benefit of the proposed method is its simplicity for the preliminary design stage, requiring little computational effort.

  8. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf

    1996-01-01

    Full Text Available Research in scientific programming enables us to realize more and more complex applications, while application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed by the Lagrange formalism using Maple.
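
    As an illustration of the iterative solver mentioned above, a textbook conjugate-gradient routine for a symmetric positive-definite system is sketched below on a tiny example; it is not the parallelized MUSIC implementation from the project.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Conjugate-gradient solver for a symmetric positive-definite system
          A x = b -- the kind of iterative solver used for the FE equations."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # Small SPD test system (illustrative, not a spine FE matrix).
      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))  # approx [0.0909, 0.6364]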

  9. Simplified design of switching power supplies

    CERN Document Server

    Lenk, John

    1995-01-01

    * Describes the operation of each circuit in detail * Examines a wide selection of external components that modify the IC package characteristics * Provides hands-on, essential information for designing a switching power supply Simplified Design of Switching Power Supplies is an all-inclusive, one-stop guide to switching power-supply design. Step-by-step instructions and diagrams render this book essential for the student and the experimenter, as well as the design professional. Simplified Design of Switching Power Supplies concentrates on the use of IC regulators. All popular forms of swit

  10. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
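
    A minimal sketch of the idea — replace the expensive model with a cheap emulator inside a Metropolis MCMC loop — is given below; the quadratic "emulator", noise level and prior are toy choices for illustration, not the density functional theory surrogate used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def emulator(theta):
          """Cheap surrogate standing in for the expensive physics model eta(theta).
          Here it is just a quadratic toy function, not a real DFT emulator."""
          return theta ** 2

      # Synthetic "measurement" with Gaussian noise.
      theta_true, sigma = 1.5, 0.1
      y_obs = emulator(theta_true) + rng.normal(0.0, sigma)

      def log_posterior(theta):
          """Gaussian likelihood around the emulator output, flat prior on [0, 3]."""
          if not 0.0 <= theta <= 3.0:
              return -np.inf
          return -0.5 * ((y_obs - emulator(theta)) / sigma) ** 2

      # Random-walk Metropolis sampling of the posterior.
      samples, theta = [], 1.0
      log_p = log_posterior(theta)
      for _ in range(5000):
          proposal = theta + rng.normal(0.0, 0.2)
          log_p_new = log_posterior(proposal)
          if np.log(rng.uniform()) < log_p_new - log_p:
              theta, log_p = proposal, log_p_new
          samples.append(theta)

      print(f"posterior mean of theta: {np.mean(samples[1000:]):.3f}")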

  11. A computational approach to animal breeding.

    Science.gov (United States)

    Berger-Wolf, Tanya Y; Moore, Cristopher; Saia, Jared

    2007-02-07

    We propose a computational model of mating strategies for controlled animal breeding programs. A mating strategy in a controlled breeding program is a heuristic with some optimization criteria as a goal. Thus, it is appropriate to use the computational tools available for analysis of optimization heuristics. In this paper, we propose the first discrete model of the controlled animal breeding problem and analyse heuristics for two possible objectives: (1) breeding for maximum diversity and (2) breeding a target individual. These two goals are representative of conservation biology and agricultural livestock management, respectively. We evaluate several mating strategies and provide upper and lower bounds for the expected number of matings. While the population parameters may vary and can change the actual number of matings for a particular strategy, the order of magnitude of the number of expected matings and the relative competitiveness of the mating heuristics remains the same. Thus, our simple discrete model of the animal breeding problem provides a novel viable and robust approach to designing and comparing breeding strategies in captive populations.

  12. Computation within the auxiliary field approach

    International Nuclear Information System (INIS)

    Baeurle, S.A.

    2003-01-01

    Recently, the classical auxiliary field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing fast statistical convergence and efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms: the single-move auxiliary field Metropolis Monte Carlo algorithm and two new classes of force-based algorithms, which enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme, which permits each field degree of freedom to be treated individually with regard to the evolution through the auxiliary field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary field methodologies.

  13. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.

  14. Simplified modeling of liquid-liquid heat exchangers for use in control systems

    International Nuclear Information System (INIS)

    Laszczyk, Piotr

    2017-01-01

    Over the last decades, various models of heat exchange processes have been developed to capture their specific dynamic nature. These models have different degrees of complexity depending on the modeling assumptions and simplifications. The complexity of a mathematical model can be critical when the model is to serve as a basis for deriving the control law, because it directly affects the complexity of the mathematical transformations and of the final control algorithm. In this paper, a simplified cross-convection model for a wide class of heat exchangers is suggested. Apart from very few reports so far, the properties of this modeling approach have never been investigated in detail. The concept for the model is derived from the fundamental principle of energy conservation, combined with a simple dynamical approximation in the form of ordinary differential equations. Within this framework, a simplified tuning procedure for the proposed model is suggested and verified for plate and spiral-tube heat exchangers based on experimental data. The dynamical properties and stability of the suggested model are addressed and a sensitivity analysis is also presented. It is shown that such a modeling approach preserves high modeling accuracy at very low numerical complexity. The validation results show that the suggested modeling and tuning method is useful for practical applications.
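
    The paper's cross-convection model is not reproduced here, but the general idea of a low-order heat exchanger model built from energy balances can be sketched with a generic two-node lumped formulation, as below; all parameter values are illustrative, not identified from the experiments.

      def step(T_h, T_c, dt, p):
          """One explicit-Euler step of a generic two-node lumped heat exchanger:
          each side is a well-mixed volume exchanging heat through a UA term.
          This is a generic textbook simplification, not the paper's model."""
          q = p["UA"] * (T_h - T_c)                      # heat flow hot -> cold, W
          dT_h = (p["mdot_h"] * p["cp_h"] * (p["Th_in"] - T_h) - q) / (p["M_h"] * p["cp_h"])
          dT_c = (p["mdot_c"] * p["cp_c"] * (p["Tc_in"] - T_c) + q) / (p["M_c"] * p["cp_c"])
          return T_h + dt * dT_h, T_c + dt * dT_c

      # Illustrative parameters (SI units), not identified from experiments.
      p = dict(UA=800.0, mdot_h=0.4, mdot_c=0.5, cp_h=4180.0, cp_c=4180.0,
               M_h=2.0, M_c=2.0, Th_in=80.0, Tc_in=20.0)
      T_h, T_c = 80.0, 20.0
      for _ in range(600):                               # 60 s with dt = 0.1 s
          T_h, T_c = step(T_h, T_c, 0.1, p)
      print(f"hot outlet ~ {T_h:.1f} C, cold outlet ~ {T_c:.1f} C")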

  15. Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.

    Science.gov (United States)

    He, A; Deepan, B; Quan, C

    2017-09-01

    A regularized phase tracker (RPT) is an effective method for the demodulation of single closed-fringe patterns. However, lengthy calculation times, specially designed scanning strategies, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, the first and second phase derivatives are pre-determined by the density-direction-combined method and a discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified to reduce the computation time significantly. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specifically designed scanning strategy is needed; nevertheless, the method is robust against the sign-ambiguity problem. The paraboloid phase model also ensures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy with less computational time. The experimental results prove the robustness and accuracy of the proposed method for the demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.

  16. Computer and Internet Addiction: Analysis and Classification of Approaches

    Directory of Open Access Journals (Sweden)

    Zaretskaya O.V.

    2017-08-01

    Full Text Available A theoretical analysis of modern research on the problem of computer and Internet addiction is carried out. The main features of the different approaches are outlined, and an attempt is made to systematize the research conducted and to classify the scientific approaches to the problem of Internet addiction. The author distinguishes nosological, cognitive-behavioral, socio-psychological and dialectical approaches, and justifies the need for an approach that corresponds to the essence, goals and tasks of social psychology when researching problems such as Internet addiction and dependent behavior in general. In the author's opinion, this dialectical approach integrates the experience of research within the framework of the socio-psychological approach and focuses on the observed inconsistencies in the phenomenon of Internet addiction – the compensatory nature of Internet activity, when people who turn to the Internet are in a dysfunctional life situation.

  17. Computational Approaches for Prediction of Pathogen-Host Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Esmaeil eNourani

    2015-02-01

    Full Text Available Infectious diseases are still among the major and most prevalent health problems, mostly because of the drug resistance of novel variants of pathogens. Molecular interactions between pathogens and their hosts are the key part of the infection mechanisms. Novel antimicrobial therapeutics to fight drug resistance are only possible with a thorough understanding of pathogen-host interaction (PHI) systems. Existing databases, which contain experimentally verified PHI data, suffer from a scarcity of reported interactions due to the technically challenging and time-consuming process of experiments. This has motivated many researchers to address the problem by proposing computational approaches for the analysis and prediction of PHIs. The computational methods primarily utilize sequence information, protein structure and known interactions. Classic machine learning techniques are used when there are sufficient known interactions to be used as training data. In the opposite case, transfer and multi-task learning methods are preferred. Here, we present an overview of these computational approaches for PHI prediction, discussing their weaknesses and abilities, with future directions.

  18. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with the human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken: the HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry, such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  19. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    Full Text Available This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce the data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume-spreading algorithm is established to simulate the interior water-filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with a complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
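
    One way to picture the conical-cell simplification is that the stored volume fixes the water depth analytically; the sketch below assumes the cell radius grows linearly with depth (a hypothetical shape parameter k), which is an illustrative reading rather than the paper's exact formulation.

      import math

      def depth_from_volume(volume, k=50.0):
          """Water depth in a cell whose interior is simplified to an inverted
          circular cone: radius grows linearly with depth, r = k * h, so the
          stored volume is V = pi * k**2 * h**3 / 3.  k is a hypothetical shape
          parameter, not one taken from the paper."""
          return (3.0 * volume / (math.pi * k ** 2)) ** (1.0 / 3.0)

      def volume_from_depth(depth, k=50.0):
          """Inverse relation used during the volume-spreading step."""
          return math.pi * (k * depth) ** 2 * depth / 3.0

      rain_volume = 2.0e4  # m3 of runoff routed into the cell (illustrative)
      h = depth_from_volume(rain_volume)
      print(f"depth = {h:.2f} m, check volume = {volume_from_depth(h):.0f} m3")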

  20. Simplified application of electronic data processing in a natural science and technology special library in combination with an improved literature description

    Energy Technology Data Exchange (ETDEWEB)

    Bretnuetz, E.

    1975-10-01

    A pilot project in a special library for natural science and technology to record bibliographic data on several kinds of literature within a simplified scheme and to process them in a computer by simple programs is described. The printout consists of several lists arranged according to several aspects. At the same time a relevant thesaurus is tested as to its suitability for an improved description of the literature. The results show that the literature handled is identified sufficiently within this simplified scheme. After supplementation by some special terms, the thesaurus can be used for a deeper analysis of the literature. (auth)

  1. An Approach for Indoor Path Computation among Obstacles that Considers User Dimension

    Directory of Open Access Journals (Sweden)

    Liu Liu

    2015-12-01

    Full Text Available People often transport objects within indoor environments and need enough space for the motion. In such cases, the accessibility of indoor spaces relies on the dimensions of both the person and the objects she/he operates. This paper proposes a new approach to avoid obstacles and compute indoor paths with respect to the user dimension. The approach excludes inaccessible spaces for a user in five steps: (1) compute the minimum distance between obstacles and find the inaccessible gaps; (2) group obstacles according to the inaccessible gaps; (3) identify groups of obstacles that influence the path between two locations; (4) compute boundaries for the selected groups; and (5) build a network in the accessible area around the obstacles in the room. Compared to the Minkowski sum method for outlining inaccessible spaces, the proposed approach generates simpler polygons for groups of obstacles that do not contain inner rings. The creation of a navigation network becomes easier based on these simple polygons. By using this approach, we can create user- and task-specific networks in advance. Alternatively, the accessible path can be generated on the fly before the user enters a room.

  2. A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Aleksander Byrski

    2010-01-01

    Full Text Available In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems as well as the means to manipulate the configuration of the computation according to the crises or the user's specific demands.

  3. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2012-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...... a number of novel techniques for relating the outputs and inputs of OTs in a larger construction....

  4. Multireference second order perturbation theory with a simplified treatment of dynamical correlation.

    Science.gov (United States)

    Xu, Enhua; Zhao, Dongbo; Li, Shuhua

    2015-10-13

    A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.

  5. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay dependent (EA), or asynchronous, error and delay independent (ES), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
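
    The split between synchronous and asynchronous error can be illustrated numerically. The sketch below is not the authors' code: it advances a 1-D diffusion equation with a standard explicit scheme, once fully synchronously and once with a one-step-delayed neighbour value at a single grid point (mimicking a processor boundary), and compares both against the exact solution for initial sine waves of increasing wave number; all parameters are illustrative.

    # Sketch: FTCS solution of u_t = nu * u_xx on a periodic grid, synchronous vs.
    # a variant where one grid point reads a one-step-old left-neighbour value.
    import numpy as np

    def diffusion_errors(k=2, nu=0.1, nx=64, dt=2e-4, steps=2000, delayed_point=0):
        x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
        dx = x[1] - x[0]
        r = nu * dt / dx**2                       # must stay below 0.5 for stability
        u_sync = np.sin(k * x)
        u_async = u_sync.copy()
        old_async = u_async.copy()                # previous time level, for the delayed read

        for _ in range(steps):
            u_sync = u_sync + r * (np.roll(u_sync, -1) - 2 * u_sync + np.roll(u_sync, 1))

            new_async = u_async + r * (np.roll(u_async, -1) - 2 * u_async + np.roll(u_async, 1))
            i = delayed_point                     # at this point the left neighbour is one step old
            new_async[i] = u_async[i] + r * (u_async[(i + 1) % nx] - 2 * u_async[i] + old_async[i - 1])
            old_async, u_async = u_async, new_async

        exact = np.sin(k * x) * np.exp(-nu * k**2 * dt * steps)
        return np.abs(u_sync - exact).max(), np.abs(u_async - exact).max()

    for k in (1, 2, 4, 8):
        e_sync, e_async = diffusion_errors(k=k)
        print(f"k = {k}: synchronous error {e_sync:.2e}, asynchronous error {e_async:.2e}")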

  6. Creating the computer player: an engaging and collaborative approach to introduce computational thinking by combining ‘unplugged’ activities with visual programming

    Directory of Open Access Journals (Sweden)

    Anna Gardeli

    2017-11-01

    Full Text Available Ongoing research is being conducted on appropriate course design, practices and teacher interventions for improving the efficiency of computer science and programming courses in K-12 education. The trend is towards a more constructivist problem-based learning approach. Computational thinking, which refers to formulating and solving problems in a form that can be efficiently processed by a computer, raises an important educational challenge. Our research aims to explore possible ways of enriching computer science teaching with a focus on development of computational thinking. We have prepared and evaluated a learning intervention for introducing computer programming to children between 10 and 14 years old; this involves students working in groups to program the behavior of the computer player of a well-known game. The programming process is split into two parts. First, students design a high-level version of their algorithm during an ‘unplugged’ pen & paper phase, and then they encode their solution as an executable program in a visual programming environment. Encouraging evaluation results have been achieved regarding the educational and motivational value of the proposed approach.

  7. Data science in R a case studies approach to computational reasoning and problem solving

    CERN Document Server

    Nolan, Deborah

    2015-01-01

    Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar

  8. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  9. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  10. Actual evapotranspiration modeling using the operational Simplified Surface Energy Balance (SSEBop) approach

    Science.gov (United States)

    Savoca, Mark E.; Senay, Gabriel B.; Maupin, Molly A.; Kenny, Joan F.; Perry, Charles A.

    2013-01-01

    Remote-sensing technology and surface-energy-balance methods can provide accurate and repeatable estimates of actual evapotranspiration (ETa) when used in combination with local weather datasets over irrigated lands. Estimates of ETa may be used to provide a consistent, accurate, and efficient approach for estimating regional water withdrawals for irrigation and associated consumptive use (CU), especially in arid cropland areas that require supplemental water due to insufficient natural supplies from rainfall, soil moisture, or groundwater. ETa in these areas is considered equivalent to CU, and represents the part of applied irrigation water that is evaporated and/or transpired, and is not available for immediate reuse. A recent U.S. Geological Survey study demonstrated the application of the remote-sensing-based Simplified Surface Energy Balance (SSEB) model to estimate 10-year average ETa at 1-kilometer resolution on national and regional scales, and compared those ETa values to the U.S. Geological Survey’s National Water-Use Information Program’s 1995 county estimates of CU. The operational version of the SSEB (SSEBop) method is now used to construct monthly, county-level ETa maps of the conterminous United States for the years 2000, 2005, and 2010. The performance of the SSEBop was evaluated using eddy covariance flux tower datasets from 2005, and the results showed a strong linear relationship in different land cover types across diverse ecosystems in the conterminous United States (correlation coefficient [r] ranging from 0.75 to 0.95): for example, r was 0.75 for woody savannas, 0.75 for grassland, 0.82 for forest, 0.84 for cropland, 0.89 for shrub land, and 0.95 for urban areas. A comparison of the remote-sensing SSEBop method for estimating ETa and the Hamon temperature method for estimating potential ET (ETp) also was conducted, using regressions of all available county averages of ETa for 2005 and 2010, and yielded correlations of r = 0

  11. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it was studied early on in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  12. Application of Blind Quantum Computation to Two-Party Quantum Computation

    Science.gov (United States)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-03-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  13. Application of Blind Quantum Computation to Two-Party Quantum Computation

    Science.gov (United States)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-06-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  14. Developing a simplified consent form for biobanking.

    Science.gov (United States)

    Beskow, Laura M; Friedman, Joëlle Y; Hardy, N Chantelle; Lin, Li; Weinfurt, Kevin P

    2010-10-08

    Consent forms have lengthened over time and become harder for participants to understand. We sought to demonstrate the feasibility of creating a simplified consent form for biobanking that comprises the minimum information necessary to meet ethical and regulatory requirements. We then gathered preliminary data concerning its content from hypothetical biobank participants. We followed basic principles of plain-language writing and incorporated into a 2-page form (not including the signature page) those elements of information required by federal regulations and recommended by best practice guidelines for biobanking. We then recruited diabetes patients from community-based practices and randomized half (n = 56) to read the 2-page form, first on paper and then a second time on a tablet computer. Participants were encouraged to use "More information" buttons on the electronic version whenever they had questions or desired further information. These buttons led to a series of "Frequently Asked Questions" (FAQs) that contained additional detailed information. Participants were asked to identify specific sentences in the FAQs they thought would be important if they were considering taking part in a biorepository. On average, participants identified 7 FAQ sentences as important (mean 6.6, SD 14.7, range: 0-71). No one sentence was highlighted by a majority of participants; further, 34 (60.7%) participants did not highlight any FAQ sentences. Our preliminary findings suggest that our 2-page form contains the information that most prospective participants identify as important. Combining simplified forms with supplemental material for those participants who desire more information could help minimize consent form length and complexity, allowing the most substantively material information to be better highlighted and enabling potential participants to read the form and ask questions more effectively.

  15. Improved Off-Shell Scattering Amplitudes in String Field Theory and New Computational Methods

    CERN Document Server

    Park, I Y; Bars, Itzhak

    2004-01-01

    We report on new results in Witten's cubic string field theory for the off-shell factor in the 4-tachyon amplitude that was not fully obtained explicitly before. This is achieved by completing the derivation of the Veneziano formula in the Moyal star formulation of Witten's string field theory (MSFT). We also demonstrate detailed agreement of MSFT with a number of on-shell and off-shell computations in other approaches to Witten's string field theory. We extend the techniques of computation in MSFT, and show that the j=0 representation of SL(2,R) generated by the Virasoro operators $L_{0}, L_{\pm 1}$ is a key structure in practical computations for generating numbers. We provide more insight into the Moyal structure that simplifies string field theory, and develop techniques that could be applied more generally, including nonperturbative processes.

  16. Simplified models of dark matter with a long-lived co-annihilation partner

    Science.gov (United States)

    Khoze, Valentin V.; Plascencia, Alexis D.; Sakurai, Kazuki

    2017-06-01

    We introduce a new set of simplified models to address the effects of 3-point interactions between the dark matter particle, its dark co-annihilation partner, and the Standard Model degree of freedom, which we take to be the tau lepton. The contributions from dark matter co-annihilation channels are highly relevant for a determination of the correct relic abundance. We investigate these effects as well as the discovery potential for dark matter co-annihilation partners at the LHC. A small mass splitting between the dark matter and its partner is preferred by the co-annihilation mechanism and suggests that the co-annihilation partners may be long-lived (stable or meta-stable) at collider scales. It is argued that such long-lived electrically charged particles can be looked for at the LHC in searches of anomalous charged tracks. This approach and the underlying models provide an alternative/complementarity to the mono-jet and multi-jet based dark matter searches widely used in the context of simplified models with s-channel mediators. We consider four types of simplified models with different particle spins and coupling structures. Some of these models are manifestly gauge invariant and renormalizable, others would ultimately require a UV completion. These can be realised in terms of supersymmetric models in the neutralino-stau co-annihilation regime, as well as models with extra dimensions or composite models.

  17. Mathematics of shape description a morphological approach to image processing and computer graphics

    CERN Document Server

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics by presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key features: explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; p...

  18. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is again posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility, allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be well suited to manage drug discovery and clinical development data, generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  19. Simplified Method of Optimal Sizing of a Renewable Energy Hybrid System for Schools

    Directory of Open Access Journals (Sweden)

    Jiyeon Kim

    2016-11-01

    Full Text Available Schools are suitable public buildings for renewable energy systems. Renewable energy hybrid systems (REHSs) have recently been introduced in schools following a new national regulation that mandates renewable energy utilization. An REHS combines common renewable-energy sources such as geothermal heat pumps, solar collectors for water heating, and photovoltaic systems with conventional energy systems (i.e., boilers and air-source heat pumps). Optimal design of an REHS by adequate sizing is not a trivial task because it usually requires intensive work including detailed simulation and demand/supply analysis. This type of simulation-based approach to optimization is difficult to implement in practice. To address this, this paper proposes simplified sizing equations for the renewable-energy systems of an REHS. A conventional optimization process is used to calculate the optimal combinations of an REHS for cases of different numbers of classrooms and budgets. On the basis of the results, simplified sizing equations that use only the number of classrooms as the input are proposed by regression analysis. A verification test was carried out using the initial conventional optimization process. The results show that the simplified sizing equations predict sizing results similar to those of the initial process, consequently showing similar capital costs within a 2% error.
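
    The kind of regression behind such sizing equations can be sketched in a few lines; the classroom counts and "optimal" capacities below are invented placeholders standing in for the output of the conventional optimization, not values from the study.

    # Illustrative sketch only: fitting a simplified sizing equation of the form
    # capacity = a * n_classrooms + b by least squares, as done by regression on
    # conventional optimization results. The numbers are invented placeholders.
    import numpy as np

    n_classrooms = np.array([6, 12, 18, 24, 30])
    optimal_pv_kw = np.array([14.0, 26.5, 40.2, 52.8, 66.1])   # hypothetical optimization output

    a, b = np.polyfit(n_classrooms, optimal_pv_kw, 1)          # linear fit: slope, intercept
    print(f"simplified sizing equation: PV capacity [kW] ~= {a:.2f} * classrooms + {b:.2f}")
    print("prediction for 20 classrooms:", round(a * 20 + b, 1), "kW")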

  20. New weak keys in simplified IDEA

    Science.gov (United States)

    Hafman, Sari Agustini; Muhafidzah, Arini

    2016-02-01

    Simplified IDEA (S-IDEA) is a simplified version of the International Data Encryption Algorithm (IDEA) and a useful teaching tool to help students understand IDEA. In 2012, Muryanto and Hafman found a weak key class in S-IDEA by using the one-round differential characteristic (0, ν, 0, ν) → (0, 0, ν, ν) on the first round to produce the input difference (0, 0, ν, ν) on the fifth round. Because Muryanto and Hafman only used three one-round differential characteristics, we conducted research to find new one-round differential characteristics and used them to produce new weak key classes of S-IDEA. To find new one-round differential characteristics of S-IDEA, we applied a multiplication mod 2^16 + 1 to the input difference and combinations of the active sub keys Z1, Z4, Z5, Z6. New classes of weak keys are obtained by combining all of these characteristics and using them to construct two new full-round differential characteristics of S-IDEA, with or without the 4th round sub key. In this research, we found six new one-round differential characteristics and combined them to construct two new full-round differential characteristics of S-IDEA. When the two new full-round differential characteristics are used and the 4th round sub key is required, we obtain 2 new weak key classes of size 2^13 and 2^8. When the two new full-round differential characteristics are used but the 4th round sub key is not required, the weak key class of 2^13 grows to 2^21 and that of 2^8 to 2^10. A membership test cannot be applied to recover the key bits in those weak key classes; the unknown key bits can only be recovered by a brute force attack. The simulation results indicate that the key bits can be recovered with a longest computation time of 0.031 ms.
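
    The multiplication mod 2^16 + 1 mentioned above is the IDEA-style group operation in which an all-zero word stands for 2^16. The sketch below shows that operation with 16-bit words and how an XOR input difference propagates through it for a given sub key; it is only an illustration, not the search procedure used in the paper.

    # Sketch of the IDEA-style multiplication modulo 2^16 + 1; the all-zero word
    # conventionally represents 2^16.
    def idea_mul(a, b, bits=16):
        mod = (1 << bits) + 1
        a = a if a != 0 else (1 << bits)
        b = b if b != 0 else (1 << bits)
        r = (a * b) % mod
        return 0 if r == (1 << bits) else r

    # XOR output difference of a pair of inputs multiplied by the same sub key
    def output_difference(x, diff, subkey):
        return idea_mul(x, subkey) ^ idea_mul(x ^ diff, subkey)

    print(hex(idea_mul(0x0000, 0x8000)))                   # the zero word acts as 2^16
    print(hex(output_difference(0x1234, 0x8000, 0x0001)))  # difference kept intact by sub key 1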

  1. A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems

    International Nuclear Information System (INIS)

    Huang, Chia-Ling

    2015-01-01

    This paper proposes a new swarm intelligence method, the Particle-based Simplified Swarm Optimization (PSSO) algorithm, together with modifications of the Updating Mechanism (UM), called N-UM and R-UM, and an Orthogonal Array Test (OA), to solve reliability–redundancy allocation problems (RRAPs) successfully. One difficulty of RRAP is the need to maximize system reliability in cases where the number of redundant components and the reliability of the corresponding components in each subsystem are decided simultaneously under nonlinear constraints. In this paper, four RRAP benchmarks are used to display the applicability of the proposed PSSO, which combines the strengths of both PSO and SSO to optimize the RRAP, a mixed-integer nonlinear programming problem. When the computational results are compared with those of previously developed algorithms in the existing literature, the findings indicate that the proposed PSSO is highly competitive and performs well. - Highlights: • This paper proposes a particle-based simplified swarm optimization algorithm (PSSO) to optimize RRAP. • Furthermore, the UM and an OA are adapted to improve the optimization of RRAP. • Four systems are introduced and the results demonstrate that the PSSO performs particularly well
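
    For readers unfamiliar with SSO-type updates, the sketch below shows the generic stepwise update mechanism that PSSO builds on: each decision variable is copied from the global best, the particle's own best, kept, or re-sampled at random according to fixed probability thresholds. It is a generic illustration with made-up bounds, not the paper's N-UM or R-UM variants.

    # Generic simplified-swarm-optimization update step (illustrative only).
    import random

    def sso_update(position, pbest, gbest, bounds, Cg=0.3, Cp=0.6, Cw=0.9):
        new = []
        for i, x in enumerate(position):
            rho = random.random()
            if rho < Cg:
                new.append(gbest[i])          # copy from the global best
            elif rho < Cp:
                new.append(pbest[i])          # copy from the particle's own best
            elif rho < Cw:
                new.append(x)                 # keep the current value
            else:
                lo, hi = bounds[i]
                new.append(random.uniform(lo, hi))   # random restart of this variable
        return new

    bounds = [(0.5, 1.0)] * 4                 # e.g. component reliabilities of 4 subsystems
    x = [0.7, 0.8, 0.6, 0.9]
    print(sso_update(x, pbest=x, gbest=[0.9, 0.9, 0.9, 0.9], bounds=bounds))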

  2. Simplified Qualitative Discrete Numerical Model to Determine Cracking Pattern in Brittle Materials by Means of Finite Element Method

    Directory of Open Access Journals (Sweden)

    J. Ochoa-Avendaño

    2017-01-01

    Full Text Available This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path of solids considering static loads, infinitesimal strain, and plane stress conditions. The model is based on the finite element method with a special meshing technique, where nonlinear link elements are included between the faces of the linear triangular elements. The stiffness loss of some link elements represents the crack opening. Three experimental bending beam tests are simulated, and the cracking pattern calculated with the proposed numerical model is similar to the experimental results. The advantages of the proposed model compared to discrete crack approaches with interface elements are its implementation simplicity, numerical stability, and very low computational cost. Simulations with greater values of the initial stiffness of the link elements do not affect the discontinuity path or the stability of the numerical solution. The exploded mesh procedure presented in this model avoids a complex nonlinear analysis and regenerative or adaptive meshes.

  3. Simplified design of flexible expansion anchored plates for nuclear structures

    International Nuclear Information System (INIS)

    Mehta, N.K.; Hingorani, N.V.; Longlais, T.G.; Sargent and Lundy, Chicago, IL)

    1984-01-01

    In nuclear power plant construction, expansion anchored plates are used to support pipe, cable tray and HVAC duct hangers, and various structural elements. The expansion anchored plates provide flexibility in the installation of field-routed lines where cast-in-place embedments are not available. General design requirements for expansion anchored plate assemblies are given in ACI 349, Appendix B (1). The manufacturers recommend installation procedures for their products. Recent field testing in response to NRC Bulletin 79-02 (2) indicates that anchors, installed in accordance with manufacturer's recommended procedures, perform satisfactorily under static and dynamic loading conditions. Finite element analysis is a useful tool to correctly analyze the expansion anchored plates subject to axial tension and biaxial moments, but it becomes expensive and time-consuming to apply this tool for a large number of plates. It is, therefore, advantageous to use a simplified method, even though it may be more conservative as compared to the exact method of analysis. This paper presents a design method referred to as the modified rigid plate analysis approach to simplify both the initial design and the review of as-built conditions

  4. Simplified two-fluid current–voltage relation for superconductor transition-edge sensors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Tian-Shun; Chen, Jun-Kang [Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei City, Anhui Province 230026 (China); Zhang, Qing-Ya; Li, Tie-Fu; Liu, Jian-She [Institute of Microelectronics, Tsinghua University, Beijing 100084 (China); Chen, Wei, E-mail: weichen@tsinghua.edu.cn [Institute of Microelectronics, Tsinghua University, Beijing 100084 (China); Zhou, Xingxiang, E-mail: xizhou@ustc.edu.cn [Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei City, Anhui Province 230026 (China)

    2013-11-21

    We propose a simplified current–voltage (IV) relation for the analysis and simulation of superconductor transition-edge sensor (TES) circuits. Compared to the conventional approach based on the effective TES resistance, our expression describes the device behavior more thoroughly, covering the superconducting, transitional, and normal states for TES currents in both directions. We show how to use our IV relation to perform small-signal analysis and derive the device's temperature and current sensitivities based on its physical parameters. We further demonstrate that we can use our IV relation to greatly simplify TES device modeling and make SPICE simulation of TES circuits easily accessible. We present some interesting results as examples of valuable simulations enabled by our IV relation. -- Highlights: •We propose an IV relation for superconductor transition-edge sensors (TES). •We derive the dependence of the sensitivity of TES on its physical parameters. •We use our IV relation for SPICE modeling of the TES device. •We present simulation results using a device model based on our IV relation.
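
    The temperature and current sensitivities referred to above are commonly defined as alpha = (T0/R0) dR/dT and beta = (I0/R0) dR/dI. The sketch below evaluates them numerically from a resistance surface R(T, I); the logistic transition used here is only a placeholder, not the two-fluid IV relation proposed in the paper, and all parameter values are invented.

    # Numerical evaluation of the standard TES small-signal sensitivities from a
    # placeholder resistance surface R(T, I).
    import numpy as np

    def R(T, I, Rn=0.1, Tc=0.1, c=10.0, wT=0.002):
        # smooth superconducting-to-normal transition with a weak current dependence
        return Rn / (1.0 + np.exp(-(T - Tc + c * I) / wT))

    def sensitivities(T0, I0, dT=1e-6, dI=1e-8):
        R0 = R(T0, I0)
        alpha = T0 / R0 * (R(T0 + dT, I0) - R(T0 - dT, I0)) / (2 * dT)   # temperature sensitivity
        beta = I0 / R0 * (R(T0, I0 + dI) - R(T0, I0 - dI)) / (2 * dI)    # current sensitivity
        return R0, alpha, beta

    R0, alpha, beta = sensitivities(T0=0.1, I0=5e-5)
    print(f"R0 = {R0 * 1e3:.2f} mOhm, alpha = {alpha:.1f}, beta = {beta:.3f}")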

  5. Simplified two-fluid current–voltage relation for superconductor transition-edge sensors

    International Nuclear Information System (INIS)

    Wang, Tian-Shun; Chen, Jun-Kang; Zhang, Qing-Ya; Li, Tie-Fu; Liu, Jian-She; Chen, Wei; Zhou, Xingxiang

    2013-01-01

    We propose a simplified current–voltage (IV) relation for the analysis and simulation of superconductor transition-edge sensor (TES) circuits. Compared to the conventional approach based on the effective TES resistance, our expression describes the device behavior more thoroughly, covering the superconducting, transitional, and normal states for TES currents in both directions. We show how to use our IV relation to perform small-signal analysis and derive the device's temperature and current sensitivities based on its physical parameters. We further demonstrate that we can use our IV relation to greatly simplify TES device modeling and make SPICE simulation of TES circuits easily accessible. We present some interesting results as examples of valuable simulations enabled by our IV relation. -- Highlights: •We propose an IV relation for superconductor transition-edge sensors (TES). •We derive the dependence of the sensitivity of TES on its physical parameters. •We use our IV relation for SPICE modeling of the TES device. •We present simulation results using a device model based on our IV relation

  6. J evaluation by simplified method for cracked pipes under mechanical loading

    International Nuclear Information System (INIS)

    Lacire, M.H.; Michel, B.; Gilles, P.

    2001-01-01

    The integrity of structures is an important subject for nuclear reactor safety. Most assessment methods for cracked components are based on the evaluation of the parameter J. However, to avoid complex elastic-plastic finite element calculations of J, a simplified method has been jointly developed by CEA, EDF and Framatome. This method, called Js, is based on the reference stress approach and a new KI handbook. To validate this method, a complete set of 2D and 3D elastic-plastic finite element calculations of J has been performed on pipes (more than 300 calculations are available) for different types of part-through-wall crack (circumferential or longitudinal); mechanical loading (pressure, bending moment, axial load, torsion moment, and combinations of these loadings); and different kinds of materials (austenitic or ferritic steel). This paper presents a comparison between the simplified assessment of J and finite element results on these configurations for mechanical loading. Then, the validity of the method is discussed and an applicability domain is proposed. (author)

  7. Potential application of the consistency approach for vaccine potency testing.

    Science.gov (United States)

    Arciniega, J; Sirota, L A

    2012-01-01

    The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such a reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable, estimate. On the other hand, a single-dilution test has been proposed that does not give a quantitative estimate but rather shows whether the vaccine meets the specification. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
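
    The capability indices in question are conventionally computed under a normality assumption, e.g. Cp = (USL - LSL)/(6*sigma) and Cpk = min(USL - mean, mean - LSL)/(3*sigma). The sketch below shows that standard calculation on invented potency values and specification limits; it is exactly this normality assumption that fails for a pass/fail potency readout.

    # Conventional, normality-based process capability indices (illustrative data).
    import statistics

    def capability(values, lsl, usl):
        mu = statistics.mean(values)
        sigma = statistics.stdev(values)
        cp = (usl - lsl) / (6 * sigma)                      # potential capability
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)          # capability accounting for centering
        return cp, cpk

    potency = [2.8, 3.1, 2.9, 3.4, 3.0, 2.7, 3.2, 3.3]       # hypothetical relative potencies
    print(capability(potency, lsl=2.5, usl=4.0))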

  8. Using simplified peer review processes to fund research: a prospective study

    Science.gov (United States)

    Herbert, Danielle L; Graves, Nicholas; Clarke, Philip; Barnett, Adrian G

    2015-01-01

    Objective: To prospectively test two simplified peer review processes, estimate the agreement between the simplified and official processes, and compare the costs of peer review. Design, participants and setting: A prospective parallel study of Project Grant proposals submitted in 2013 to the National Health and Medical Research Council (NHMRC) of Australia. The official funding outcomes were compared with two simplified processes using proposals in Public Health and Basic Science. The two simplified processes were: panels of 7 reviewers who met face-to-face and reviewed only the nine-page research proposal and track record (simplified panel); and 2 reviewers who independently reviewed only the nine-page research proposal (journal panel). The official process used panels of 12 reviewers who met face-to-face and reviewed longer proposals of around 100 pages. We compared the funding outcomes of 72 proposals that were peer reviewed by the simplified and official processes. Main outcome measures: Agreement in funding outcomes; costs of peer review based on reviewers’ time and travel costs. Results: The agreement between the simplified and official panels (72%, 95% CI 61% to 82%), and the journal and official panels (74%, 62% to 83%), was just below the acceptable threshold of 75%. Using the simplified processes would save $A2.1–$A4.9 million per year in peer review costs. Conclusions: Using shorter applications and simpler peer review processes gave reasonable agreement with the more complex official process. Simplified processes save time and money that could be reallocated to actual research. Funding agencies should consider streamlining their application processes. PMID:26137884
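
    The headline agreement figures are simple proportions with a confidence interval. The sketch below reproduces the calculation with a normal-approximation 95% CI; the counts are illustrative (chosen so that 52 of 72 proposals agree, roughly matching the reported 72%), not the study data.

    # Percentage agreement between two yes/no funding decisions on the same
    # proposals, with a normal-approximation 95% confidence interval.
    from math import sqrt

    def agreement_ci(n_agree, n_total, z=1.96):
        p = n_agree / n_total
        se = sqrt(p * (1 - p) / n_total)
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    p, lo, hi = agreement_ci(n_agree=52, n_total=72)
    print(f"agreement = {p:.0%} (95% CI {lo:.0%} to {hi:.0%})")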

  9. Simplified approach for quantitative calculations of optical pumping

    International Nuclear Information System (INIS)

    Atoneche, Fred; Kastberg, Anders

    2017-01-01

    We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks. (paper)
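
    The linearised population rate equations referred to above take the form dp/dt = M p, where p holds the sublevel populations and M collects pumping and relaxation rates. The sketch below propagates a toy three-level system with a matrix exponential; the rate matrix is invented and does not correspond to a specific atomic transition.

    # Toy linearised rate-equation model of optical pumping: dp/dt = M p.
    import numpy as np
    from scipy.linalg import expm

    M = np.array([[-0.8, 0.1, 0.1],     # columns sum to zero: total population is conserved
                  [ 0.5, -0.3, 0.2],
                  [ 0.3, 0.2, -0.3]])

    p0 = np.array([1/3, 1/3, 1/3])       # start from an unpolarised distribution
    for t in (0.0, 1.0, 5.0, 50.0):
        print(t, expm(M * t) @ p0)       # populations approach the optically pumped steady state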

  10. Simplified approach for quantitative calculations of optical pumping

    Science.gov (United States)

    Atoneche, Fred; Kastberg, Anders

    2017-07-01

    We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks.

  11. A simplified LBB evaluation procedure for austenitic and ferritic steel piping

    International Nuclear Information System (INIS)

    Gamble, R.M.; Wichman, K.R.

    1997-01-01

    The NRC previously has approved application of LBB analysis as a means to demonstrate that the probability of pipe rupture was extremely low so that dynamic loads associated with postulated pipe break could be excluded from the design basis (1). The purpose of this work was to: (1) define simplified procedures that can be used by the NRC to compute allowable lengths for circumferential throughwall cracks and assess margin against pipe fracture, and (2) verify the accuracy of the simplified procedures by comparison with available experimental data for piping having circumferential throughwall flaws. The development of the procedures was performed using techniques similar to those employed to develop ASME Code flaw evaluation procedures. The procedures described in this report are applicable to pipe and pipe fittings with: (1) wrought austenitic steel (Ni-Cr-Fe alloy) having a specified minimum yield strength less than 45 ksi, and gas metal-arc, submerged arc and shielded metal-arc austenitic welds, and (2) seamless or welded wrought carbon steel having a minimum yield strength not greater than 40 ksi, and associated weld materials. The procedures can be used for cast austenitic steel when adequate information is available to place the cast material toughness into one of the categories identified later in this report for austenitic wrought and weld materials.

  12. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of 237Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict keff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for 237Np.

  13. A simplified LBB evaluation procedure for austenitic and ferritic steel piping

    Energy Technology Data Exchange (ETDEWEB)

    Gamble, R.M.; Wichman, K.R.

    1997-04-01

    The NRC previously has approved application of LBB analysis as a means to demonstrate that the probability of pipe rupture was extremely low so that dynamic loads associated with postulated pipe break could be excluded from the design basis (1). The purpose of this work was to: (1) define simplified procedures that can be used by the NRC to compute allowable lengths for circumferential throughwall cracks and assess margin against pipe fracture, and (2) verify the accuracy of the simplified procedures by comparison with available experimental data for piping having circumferential throughwall flaws. The development of the procedures was performed using techniques similar to those employed to develop ASME Code flaw evaluation procedures. The procedures described in this report are applicable to pipe and pipe fittings with: (1) wrought austenitic steel (Ni-Cr-Fe alloy) having a specified minimum yield strength less than 45 ksi, and gas metal-arc, submerged arc and shielded metal-arc austenitic welds, and (2) seamless or welded wrought carbon steel having a minimum yield strength not greater than 40 ksi, and associated weld materials. The procedures can be used for cast austenitic steel when adequate information is available to place the cast material toughness into one of the categories identified later in this report for austenitic wrought and weld materials.

  14. A new approach in development of data flow control and investigation system for computer networks

    International Nuclear Information System (INIS)

    Frolov, I.; Vaguine, A.; Silin, A.

    1992-01-01

    This paper describes a new approach to the development of a data flow control and investigation system for computer networks. This approach was developed and applied at the Moscow Radiotechnical Institute for control and investigation of the Institute's computer network. It allowed us to solve our network's current problems successfully. A description of our approach is presented below along with the most interesting results of our work. (author)

  15. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  16. Development of long operating cycle simplified BWR

    International Nuclear Information System (INIS)

    Heki, H.; Nakamaru, M.; Maruya, T.; Hiraiwa, K.; Arai, K.; Narabayash, T.; Aritomi, M.

    2002-01-01

    This paper describes an innovative plant concept for a long operating cycle simplified BWR (LSBWR). In this plant concept, 1) a long operating cycle (3 to 15 years), 2) simplified systems and building, and 3) modular factory fabrication are discussed. The long operating cycle core design is based on medium-enriched U-235 with burnable poison. Simplified systems and building are realized by using natural circulation with a bottom-located core, internal CRDs, a PCV with passive systems, and an integrated reactor and turbine building. The LSBWR concept achieves a high degree of safety through IVR (In Vessel Retention) capability, a large water inventory above the core region and no PCV vent to the environment, owing to the PCCS (Passive Containment Cooling System) and an internal vent tank. The integrated building concept could realize a highly modular arrangement in a hull structure (ship frame structure), ease of seismic isolation and high applicability of standardization and factory fabrication. (authors)

  17. Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.

    Science.gov (United States)

    Montalvo-Acosta, Joel José; Cecchini, Marco

    2016-12-01

    The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor

    Science.gov (United States)

    Pustovetov, M. Yu

    2018-03-01

    This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by the special circuitry of semiconductor converters.

  19. Approach and tool for computer animation of fields in electrical apparatus

    International Nuclear Information System (INIS)

    Miltchev, Radoslav; Yatchev, Ivan S.; Ritchie, Ewen

    2002-01-01

    The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables the handling of two- and three-dimensional physical field results obtained from finite element software and the display of movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, scenario creation and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)

  20. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low-solidity flexible blades and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility, a simple finite element sub-model is used. Good quality data from the turbine has recently been collected and it is hoped to undertake model validation in the near future. (au)

  1. Quantitative Investigation of the Technologies That Support Cloud Computing

    Science.gov (United States)

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  2. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
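
    As a generic illustration of the final minimization step (not the authors' monotone conjugate gradient with suboptimal descent steps), the sketch below minimizes a small nonquadratic MAP-style cost, a Gaussian data-fit term plus an edge-preserving penalty, with an off-the-shelf conjugate gradient routine on a toy 1-D deconvolution problem; all operators and parameters are invented.

    # Toy MAP-style reconstruction: J(f) = ||y - H f||^2 / (2 s2) + lam * sum(sqrt(1 + (D f)^2)).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 50
    H = np.tril(np.ones((n, n))) / n                 # toy forward operator
    f_true = np.zeros(n); f_true[15:35] = 1.0
    y = H @ f_true + 0.01 * rng.standard_normal(n)   # noisy measurements

    D = np.eye(n, k=1) - np.eye(n)                   # finite-difference operator for the prior
    lam, s2 = 1e-3, 0.01**2

    def cost(f):
        r = y - H @ f
        return 0.5 * r @ r / s2 + lam * np.sum(np.sqrt(1.0 + (D @ f)**2))

    res = minimize(cost, np.zeros(n), method="CG")   # conjugate gradient minimization
    print("success:", res.success, " cost:", round(cost(np.zeros(n)), 1), "->", round(res.fun, 1))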

  3. Use of an integrated modelling and simulation approach to develop a simplified peginterferon alfa-2a dosing regimen for children with hepatitis C.

    Science.gov (United States)

    Brennan, Barbara J; Lemenuel-Diot, Annabelle; Snoeck, Eric; McKenna, Michael; Solsky, Jonathan; Wat, Cynthia; Mallalieu, Navita L

    2016-04-01

    The aim of the study was to simplify the dosing regimen of peginterferon alfa-2a in paediatric patients with chronic hepatitis C. A population pharmacokinetic (PK) model was developed using PK data from 14 children aged 2-8 years and 402 adults. Simulations were produced to identify a simplified dosing regimen that would provide exposures similar to those observed in the paediatric clinical trials and in the range known to be safe/efficacious in adults. Model predictions were evaluated against observed adult and paediatric data to reinforce confidence of the proposed dosing regimen. The final model was a two compartment model with a zero order resorption process. Covariates included a linear influence of body surface area (BSA) on apparent oral clearance (CL/F) and a linear influence of body weight on apparent volume of distribution of the central compartment (V1 /F). A simplified dosing regimen was developed which is expected to provide exposures in children aged ≥5 years similar to the dosing formula used in the paediatric clinical trial and within the range that is safe/efficacious in adults. This simplified regimen is approved in the EU and in other countries for the treatment of chronic hepatitis C in treatment-naive children/adolescents aged ≥5 years in combination with ribavirin. Pre-existing adult PK data were combined with relatively limited paediatric PK data to develop a PK model able to predict exposure in both populations adequately. This provided increased confidence in characterizing PK in children and helped in the development of a simplified dosing regimen of peginterferon alfa-2a in paediatric patients. © 2015 The British Pharmacological Society.
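
    The covariate structure described above (a linear BSA effect on CL/F and a linear body-weight effect on V1/F) can be written directly; the typical values and reference covariates in the sketch below are invented placeholders, not the published parameter estimates.

    # Sketch of the covariate structure: CL/F scales with body surface area,
    # V1/F with body weight. Typical values and references are hypothetical.
    def individual_pk(bsa_m2, weight_kg,
                      cl_typ=10.0, v1_typ=30.0,      # hypothetical typical values (L/h, L)
                      bsa_ref=1.73, wt_ref=70.0):
        cl_f = cl_typ * (bsa_m2 / bsa_ref)           # linear BSA effect on apparent clearance
        v1_f = v1_typ * (weight_kg / wt_ref)         # linear weight effect on central volume
        return cl_f, v1_f

    # e.g. a young child with BSA 0.8 m2 and weight 20 kg
    print(individual_pk(bsa_m2=0.8, weight_kg=20.0))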

  4. Capability and deficiency of the simplified model for energy calculation of commercial buildings in the Brazilian regulation

    NARCIS (Netherlands)

    Melo, A.P.; Lamberts, R.; Costola, D.; Hensen, J.L.M.

    2011-01-01

    This paper provides a preliminary assessment of the accuracy of the simplified model for commercial buildings in the Brazilian regulation. The first step was to compare its results with BESTEST. The study presents a straightforward approach to applying the BESTEST in climates other than the original one.

  5. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  6. Real-time solution of the forward kinematics for a parallel haptic device using a numerical approach based on neural networks

    International Nuclear Information System (INIS)

    Liu, Guan Yang; Zhang, Yuru; Wang, Yan; Xie, Zheng

    2015-01-01

    This paper proposes a neural network (NN)-based approach to solve the forward kinematics of a 3-RRR spherical parallel mechanism designed for a haptic device. The proposed algorithm aims to remarkably speed up computation to meet the requirement of high-frequency rendering for haptic display. To achieve high accuracy, the workspace of the haptic device is divided into smaller subspaces. The proposed algorithm contains NNs of two different precision levels: a rough estimation NN to identify the index of the subspace and several precise estimation networks with the expected accuracy to calculate the forward kinematics. For continuous motion, the algorithm structure is further simplified to save internal memory and increase computing speed, which are critical for a haptic device control system running on an embedded platform. Compared with the most widely used Newton-Raphson method, the proposed algorithm and its simplified version greatly increase the calculation speed by about four times and 10 times, respectively, while achieving the same accuracy level. The proposed approach is of great significance for solving the forward kinematics of parallel mechanisms used as haptic devices when a high update frequency is needed but hardware resources are limited.
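
    Structurally, the two-level scheme amounts to a coarse classifier that selects the workspace subspace followed by a precise per-subspace estimator. The sketch below shows only that dispatch structure; both estimators are stand-in callables with made-up coefficients, not trained networks.

    # Structural sketch of the two-level forward-kinematics lookup described above.
    def rough_subspace_index(joint_angles):
        # placeholder for the low-precision subspace-classification network
        return 0 if sum(joint_angles) < 0.0 else 1

    precise_estimators = {
        0: lambda q: [0.1 * q[0], 0.1 * q[1], 0.1 * q[2]],   # placeholder regression nets,
        1: lambda q: [0.2 * q[0], 0.2 * q[1], 0.2 * q[2]],   # one per workspace subspace
    }

    def forward_kinematics(joint_angles):
        idx = rough_subspace_index(joint_angles)        # step 1: coarse subspace lookup
        return precise_estimators[idx](joint_angles)    # step 2: precise estimate in that subspace

    print(forward_kinematics([0.3, -0.1, 0.2]))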

  7. Structural considerations for solar installers : an approach for small, simplified solar installations or retrofits.

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Elizabeth H.; Schindel, Kay (City of Madison, WI); Bosiljevac, Tom; Dwyer, Stephen F.; Lindau, William (Lindau Companies, Inc., Hudson, WI); Harper, Alan (City of Madison, WI)

    2011-12-01

    Structural Considerations for Solar Installers provides a comprehensive outline of structural considerations associated with simplified solar installations and recommends a set of best practices installers can follow when assessing such considerations. Information in the manual comes from engineering and solar experts as well as case studies. The objectives of the manual are to ensure safety and structural durability for rooftop solar installations and to potentially accelerate the permitting process by identifying and remedying structural issues prior to installation. The purpose of this document is to provide tools and guidelines for installers to help ensure that residential photovoltaic (PV) power systems are properly specified and installed with respect to the continuing structural integrity of the building.

  8. ceRNAs in plants: computational approaches and associated challenges for target mimic research.

    Science.gov (United States)

    Paschoal, Alexandre Rossi; Lozada-Chávez, Irma; Domingues, Douglas Silva; Stadler, Peter F

    2017-05-30

    The competing endogenous RNA hypothesis has gained increasing attention as a potential global regulatory mechanism of microRNAs (miRNAs), and as a powerful tool to predict the function of many noncoding RNAs, including miRNAs themselves. Most studies have focused on animals, although target mimic (TM) discovery, together with important computational and experimental advances, has been developed in plants over the past decade. Our contribution therefore summarizes recent progress in computational approaches for research on miRNA:TM interactions. We divided this article into three main contributions. First, a general overview of research on TMs in plants is presented with practical descriptions of the available literature, tools, data, databases and computational reports. Second, we describe a common protocol for the computational and experimental analyses of TMs. Third, we provide a bioinformatics approach for the prediction of TM motifs potentially cross-targeting members within the same or from different miRNA families, based on the identification of consensus miRNA-binding sites from known TMs across sequenced genomes, transcriptomes and known miRNAs. This computational approach is promising because, in contrast to animals, miRNA families in plants are large, with identical or similar members, several of which are also highly conserved. Of the three consensus TM motifs found with our approach (MIM166, MIM171 and MIM159/319), the last has found strong support in the recent experimental work by Reichel and Millar [Specificity of plant microRNA TMs: cross-targeting of mir159 and mir319. J Plant Physiol 2015;180:45-8]. Finally, we discuss the major computational and associated experimental challenges that have to be faced in future ceRNA studies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Combustion Safety Simplified Test Protocol Field Study

    Energy Technology Data Exchange (ETDEWEB)

    Brand, L. [Gas Technology Inst., Des Plaines, IL (United States); Cautley, D. [Gas Technology Inst., Des Plaines, IL (United States); Bohac, D. [Gas Technology Inst., Des Plaines, IL (United States); Francisco, P. [Gas Technology Inst., Des Plaines, IL (United States); Shen, L. [Gas Technology Inst., Des Plaines, IL (United States); Gloss, S. [Gas Technology Inst., Des Plaines, IL (United States)

    2015-11-01

    Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and produce too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if it is not included in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, a summary of data collected over seven months, data analysis and results are included. The project team collected field data on 11 houses in 2015.

  10. The simplified P3 approach on a trigonal geometry in the nodal reactor code DYN3D

    International Nuclear Information System (INIS)

    Duerigen, S.; Fridman, E.

    2011-01-01

    DYN3D is a three-dimensional nodal diffusion code for steady-state and transient analyses of Light-Water Reactors with square and hexagonal fuel assembly geometries. Currently, several versions of the DYN3D code are available including a multi-group diffusion and a simplified P3 (SP3) neutron transport option. In this work, the multi-group SP3 method based on trigonal-z geometry was developed. The method is applicable to the analysis of reactor cores with hexagonal fuel assemblies and allows flexible mesh refinement, which is of particular importance for WWER-type Pressurized Water Reactors as well as for innovative reactor concepts including block type High-Temperature Reactors and Sodium Fast Reactors. In this paper, the theoretical background for the trigonal SP3 methodology is outlined and the results of a preliminary verification analysis are presented by means of a simplified WWER-440 core test example. The accordant cross sections and reference solutions were produced by the Monte Carlo code SERPENT. The DYN3D results are in good agreement with the reference solutions. The average deviation in the nodal power distribution is about 1%. (Authors)

  11. A Pythonic Approach for Computational Geosciences and Geo-Data Processing

    Science.gov (United States)

    Morra, G.; Yuen, D. A.; Lee, S. M.

    2016-12-01

    Computational methods and data analysis play a constantly increasing role in Earth Sciences; however, students and professionals need to climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-faults interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with a minimum of dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like pieces of LEGO, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool to create an infrastructure for geosciences that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
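
    As a hedged illustration of the kind of short Python example the abstract refers to (this particular snippet is not from the authors' collection), a minimal explicit finite-difference solver for one-dimensional pressure diffusion in a porous medium could look as follows; all parameter values are arbitrary.

```python
# Minimal sketch: explicit finite-difference solution of 1D pressure
# diffusion in a porous medium, p_t = D * p_xx. Values are arbitrary.
import numpy as np

nx, nt = 101, 2000          # grid points, time steps
L, D = 1.0, 1e-3            # domain length [m], hydraulic diffusivity [m^2/s]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D        # satisfies the explicit stability limit

p = np.zeros(nx)
p[0] = 1.0                  # fixed-pressure boundary on the left

for _ in range(nt):
    p[1:-1] += D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    p[0], p[-1] = 1.0, 0.0  # Dirichlet boundaries

print(p[::10])              # coarse profile of the pressure front
```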

  12. Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.

    Science.gov (United States)

    Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian

    2017-05-01

    Alcohol use disorder (AUD) and addiction in general is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish and relapse after detoxification is common and can be promoted by consumption of small amounts of alcohol as well as exposure to alcohol-associated cues or stress. While those environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanisms on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-Instrumental Transfer can quantify to which degree Pavlovian conditioned stimuli can facilitate approach behavior including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
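
    For orientation only, a generic sketch of model-free learning driven by a reward prediction error is given below. It is not the authors' model; the cues, reward contingencies and learning rate are invented.

```python
# Generic sketch of model-free (Rescorla-Wagner / Q-learning style) value
# updating via a reward prediction error; not the model used in the paper.
import random

alpha = 0.1                                   # learning rate (arbitrary)
values = {"cue_A": 0.0, "cue_B": 0.0}
reward_prob = {"cue_A": 0.8, "cue_B": 0.2}    # hypothetical contingencies

for trial in range(200):
    cue = random.choice(list(values))
    reward = 1.0 if random.random() < reward_prob[cue] else 0.0
    prediction_error = reward - values[cue]   # delta = r - V(cue)
    values[cue] += alpha * prediction_error   # value update

print(values)   # learned values approach the underlying reward probabilities
```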

  13. A simplified approach for evaluating multiple test outcomes and multiple disease states in relation to the exercise thallium-201 stress test in suspected coronary artery disease

    International Nuclear Information System (INIS)

    Pollock, S.G.; Watson, D.D.; Gibson, R.S.; Beller, G.A.; Kaul, S.

    1989-01-01

    This study describes a simplified approach for the interpretation of electrocardiographic and thallium-201 imaging data derived from the same patient during exercise. The 383 patients in this study had also undergone selective coronary arteriography within 3 months of the exercise test. This matrix approach allows for multiple test outcomes (both tests positive, both negative, 1 test positive and 1 negative) and multiple disease states (no coronary artery disease vs 1-vessel vs multivessel coronary artery disease). Because this approach analyzes the results of 2 test outcomes simultaneously rather than serially, it also negates the lack of test independence, if such an effect is present. It is also demonstrated that ST-segment depression on the electrocardiogram and defects on initial thallium-201 images provide conditionally independent information regarding the presence of coronary artery disease in patients without prior myocardial infarction. In contrast, ST-segment depression on the electrocardiogram and redistribution on the delayed thallium-201 images may not provide totally independent information regarding the presence of exercise-induced ischemia in patients with or without myocardial infarction

  14. Model and experiments to optimize co-adaptation in a simplified myoelectric control system

    Science.gov (United States)

    Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P. Y.; de Rugy, A.

    2018-04-01

    Objective. To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interface, and that are beginning to be used in myoelectric controls. Approach. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. Results. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that cumulates the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. Significance. The simplified context used here enabled to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this

  15. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2011-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce a number of novel techniques for relating the outputs and inputs of OTs in a larger construction. We also report on an implementation of this approach, that shows that our protocol is more efficient than any previous one: For big enough circuits, we can evaluate more than 20000 Boolean gates per second...

  16. Efficacy of Simplified Habit-Reversal on Stuttering Treatment in Home , School and Clinic

    Directory of Open Access Journals (Sweden)

    Behshid Garrousi

    2011-07-01

    Full Text Available Objective: Stuttering is a disorder that can cause serious personal, emotional and social problems. Because of its pathophysiology, different treatment approaches to stuttering remain controversial, although behavioral treatment has been shown to be effective. This study examined the effectiveness of simplified habit reversal as a behavioral approach for the treatment of stuttering in Iranian children. Materials & Methods: Twelve students, selected from 350 children who stuttered in schools, participated in this study. After baseline assessment in three settings, simplified habit reversal was carried out with the children. Assessment sessions (baseline, treatment and booster) were conducted in each subject's home, school and clinic, at the 1st, 2nd, 3rd, 4th, 6th, 12th and 13th months after treatment. Post hoc comparisons used paired t-tests, with the level of significance set at 0.03 or less. Results: Stuttering frequency decreased across repeated assessments at home (P<0.03); similar changes were observed in the two other settings (P<0.02 in school, P<0.05 in clinic). Conclusion: Future research on the effectiveness of this method, especially in older cases and in cases with comorbidity, in both sexes, is recommended.

  17. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real-time computing for handling large volumes of stored data. A stochastic programming approach model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce electricity demand by creating energy networks in a smart hub approach.
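
    As a hedged, deterministic toy version of the scheduling problem described above (the paper's model is stochastic and solved with the Gurobi optimizer in Matlab), demand-side load shifting can be posed as a small linear program; the tariff, demand and power-cap numbers are invented.

```python
# Toy deterministic sketch of demand-side load scheduling as a linear program:
# shift a fixed daily energy requirement into the cheapest hours, subject to a
# per-slot power cap. Prices, demand and cap are invented; the paper's actual
# model is stochastic and solved with Gurobi.
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10, 0.08, 0.08, 0.12, 0.20, 0.25,
                  0.30, 0.28, 0.22, 0.15, 0.12, 0.10])  # $/kWh per time slot
total_energy = 30.0     # kWh that must be delivered over the horizon
power_cap = 5.0         # kWh maximum per slot

res = linprog(c=price,
              A_eq=np.ones((1, price.size)), b_eq=[total_energy],
              bounds=[(0.0, power_cap)] * price.size,
              method="highs")

print(res.x)            # kWh scheduled in each slot
print(res.fun)          # minimum total cost
```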

  18. Simplified models for dark matter face their consistent completions

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    2017-03-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent $SU(2)_{\mathrm{L}} \times U(1)_{\mathrm{Y}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  19. Direct Synthesis of Microwave Waveforms for Quantum Computing

    Science.gov (United States)

    Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew

    Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide-bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.

  20. Understanding alternative fluxes/effluxes through comparative metabolic pathway analysis of phylum actinobacteria using a simplified approach.

    Science.gov (United States)

    Verma, Mansi; Lal, Devi; Saxena, Anjali; Anand, Shailly; Kaur, Jasvinder; Kaur, Jaspreet; Lal, Rup

    2013-12-01

    Actinobacteria are known for their diverse metabolism and physiology. Some are dreadful human pathogens whereas others constitute the natural flora of the human gut. Therefore, understanding metabolic pathways is a key feature for targeting pathogenic bacteria without disturbing the symbiotic ones. A big challenge faced today is multiple drug resistance by Mycobacterium and other pathogens that utilize alternative fluxes/effluxes. With the availability of genome sequences, it is now feasible to conduct comparative in silico analyses. Here we present a simplified approach to compare metabolic pathways so that species-specific enzymes may be traced and engineered for future therapeutics. The analyses of four key carbohydrate metabolic pathways, i.e., glycolysis, pyruvate metabolism, the tricarboxylic acid (TCA) cycle and the pentose phosphate pathway, suggest the presence of alternative fluxes. It was found that the upper pathway of glycolysis was highly variable in the actinobacterial genomes whereas the lower glycolytic pathway was highly conserved. Likewise, the pentose phosphate pathway was well conserved, in contrast to the TCA cycle, which was found to be incomplete in the majority of actinobacteria. The clustering based on presence and absence of genes of these metabolic pathways clearly revealed that members of different genera shared identical pathways and, therefore, provided an easy method to identify the metabolic similarities/differences between pathogenic and symbiotic organisms. The analyses could identify isoenzymes and some key enzymes that were found to be missing in some pathogenic actinobacteria. The present work defines a simple approach to explore the effluxes in four metabolic pathways within the phylum Actinobacteria. The analysis clearly reflects that actinobacteria exhibit diverse routes for metabolizing substrates. The pathway comparison can help in finding enzymes that can be used as drug targets for pathogens without affecting symbiotic organisms.
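
    The presence/absence clustering mentioned above can be illustrated, purely schematically, with a binary gene matrix, Jaccard distances and hierarchical clustering; the tiny matrix below is invented and carries no biological meaning.

```python
# Schematic sketch: cluster genomes by presence/absence of pathway enzymes
# using Jaccard distances and average-linkage hierarchical clustering.
# The tiny binary matrix is invented for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

species = ["genome_A", "genome_B", "genome_C", "genome_D"]
# rows = genomes, columns = pathway enzymes (1 = gene present, 0 = absent)
presence = np.array([[1, 1, 0, 1, 1],
                     [1, 1, 0, 1, 0],
                     [0, 1, 1, 0, 1],
                     [0, 1, 1, 0, 1]])

distances = pdist(presence, metric="jaccard")       # pairwise dissimilarity
tree = linkage(distances, method="average")         # UPGMA-style clustering
labels = fcluster(tree, t=2, criterion="maxclust")  # cut into two clusters

print(dict(zip(species, labels)))
```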

  1. Simplified High-Power Inverter

    Science.gov (United States)

    Edwards, D. B.; Rippel, W. E.

    1984-01-01

    Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.

  2. Stable crack growth behaviors in welded CT specimens -- finite element analyses and simplified assessments

    International Nuclear Information System (INIS)

    Yagawa, Genki; Yoshimura, Shinobu; Aoki, Shigeru; Kikuchi, Masanori; Arai, Yoshio; Kashima, Koichi; Watanabe, Takayuki; Shimakawa, Takashi

    1993-01-01

    The paper describes stable crack growth behaviors in welded CT specimens made of nuclear pressure vessel A533B class 1 steel, in which initial cracks are placed normal to the fusion line. First, using the relations between the load-line displacement (δ) and the crack extension amount (Δa) measured in experiments, generation-phase finite element crack growth analyses are performed, calculating the applied load (P) and various kinds of J-integrals. Next, simplified crack growth analyses based on the GE/EPRI method and the reference stress method are performed using the same experimental results. Some modification procedures of the two simplified assessment schemes are discussed to make them applicable to inhomogeneous materials. Finally, a neural network approach is proposed to optimize the above modification procedures. 20 refs., 13 figs., 1 tab

  3. Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.

    Science.gov (United States)

    Luo, Yao; Wang, Ling

    2017-11-16

    The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. This protein is an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors, helping to model the structure of mTOR, screen compound databases, uncover structure-activity relationships (SAR) and optimize the hits, mine the privileged fragments and design focused libraries. In addition, computational approaches have been applied to study protein-ligand interaction mechanisms and in natural product-driven drug discovery. Herein, we survey the most recent progress on the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  4. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents a simplified version of the Freeman-Tukey test statistic for testing hypotheses about multinomial probabilities in one-, two- and multi-dimensional contingency tables that does not require calculating the expected cell frequencies before the test of significance. The simplified method established new criteria of ...
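
    The paper's simplified statistic is not reproduced in this record; for contrast only, the conventional Freeman-Tukey statistic, which does require the expected cell frequencies, can be computed as sketched below on invented counts.

```python
# Conventional Freeman-Tukey statistic for a one-way multinomial table,
# T = sum_i (sqrt(O_i) + sqrt(O_i + 1) - sqrt(4*E_i + 1))^2,
# shown here only for contrast with the paper's simplified version,
# which avoids computing the expected frequencies E_i. Counts are invented.
import numpy as np
from scipy.stats import chi2

observed = np.array([18, 25, 32, 25])          # invented cell counts
probs = np.array([0.25, 0.25, 0.25, 0.25])     # hypothesised probabilities
expected = observed.sum() * probs

T = np.sum((np.sqrt(observed) + np.sqrt(observed + 1)
            - np.sqrt(4 * expected + 1)) ** 2)
df = observed.size - 1
p_value = chi2.sf(T, df)                       # upper-tail p-value

print(f"T = {T:.3f}, df = {df}, p = {p_value:.3f}")
```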

  5. A simplified approach to design for assembly

    DEFF Research Database (Denmark)

    Moultrie, James; Maier, Anja

    2014-01-01

    The basic principles of design for assembly (DfA) are well established. This paper presents a short review of the development of DfA approaches before presenting a new tool in which these principles are packaged for use in teams, both in an industrial and an educational context. The fundamental consideration in the design of this tool is to encourage wide team participation from across an organisation and is thus physical rather than software-based. This tool builds on the process developed by Appleton whilst at the University of Cambridge. In addition to the traditional analysis of component fitting...

  6. Wavelets-Computational Aspects of Sterian Realistic Approach to Uncertainty Principle in High Energy Physics: A Transient Approach

    Directory of Open Access Journals (Sweden)

    Cristian Toma

    2013-01-01

    Full Text Available This study presents wavelets-computational aspects of the Sterian realistic approach to the uncertainty principle in high energy physics. According to this approach, one cannot make a device for the simultaneous measuring of the canonical conjugate variables in reciprocal Fourier spaces. However, such aspects regarding the use of conjugate Fourier spaces can also be noticed in quantum field theory, where the position representation of a quantum wave is replaced by the momentum representation before computing the interaction in a certain point of space, at a certain moment of time. For this reason, certain properties regarding the switch from one representation to another in these conjugate Fourier spaces should be established. It is shown that the best results can be obtained using wavelets aspects and support macroscopic functions for computing (i) wave-train nonlinear relativistic transformation, (ii) reflection/refraction with a constant shift, (iii) diffraction considered as interaction with a null phase shift without annihilation of the associated wave, (iv) deflection by external electromagnetic fields without phase loss, and (v) annihilation of the associated wave-train through fast and spatially extended phenomena according to the uncertainty principle.

  7. Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  8. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
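
    To make the multi-key idea concrete, the following single-process toy sketch (plain Python, not Hadoop and not the MRPack implementation) lets one map pass feed several related algorithms while keeping their intermediate data separated by an (algorithm, key) compound key.

```python
# Toy, single-process illustration of the multi-algorithm idea: one "map"
# pass feeds several related algorithms, keyed by an algorithm id, so their
# intermediate data stay separated within the same job. Purely illustrative.
from collections import defaultdict

ALGORITHMS = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [("chars", len(rec))],
}

def map_phase(records):
    intermediate = defaultdict(list)
    for rec in records:
        for algo, mapper in ALGORITHMS.items():
            for key, value in mapper(rec):
                intermediate[(algo, key)].append(value)  # multi-key: (algo, key)
    return intermediate

def reduce_phase(intermediate):
    return {key: sum(values) for key, values in intermediate.items()}

print(reduce_phase(map_phase(["to be or not to be", "simplify and conquer"])))
```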

  9. A simplified method for evaluating thermal performance of unglazed transpired solar collectors under steady state

    International Nuclear Information System (INIS)

    Wang, Xiaoliang; Lei, Bo; Bi, Haiquan; Yu, Tao

    2017-01-01

    Highlights: • A simplified method for evaluating thermal performance of UTC is developed. • Experiments, numerical simulations, dimensional analysis and data fitting are used. • The correlation of absorber plate temperature for UTC is established. • The empirical correlation of heat exchange effectiveness for UTC is proposed. - Abstract: Due to the advantages of low investment and high energy efficiency, unglazed transpired solar collectors (UTC) have been widely used for heating in buildings. However, it is difficult for designers to quickly evaluate the thermal performance of UTC based on the conventional methods such as experiments and numerical simulations. Therefore, a simple and fast method to determine the thermal performance of UTC is indispensable. The objective of this work is to provide a simplified calculation method to easily evaluate the thermal performance of UTC under steady state. Different parameters are considered in the simplified method, including pitch, perforation diameter, solar radiation, solar absorptivity, approach velocity, ambient air temperature, absorber plate temperature, and so on. Based on existing design parameters and operating conditions, correlations for the absorber plate temperature and the heat exchange effectiveness are developed using dimensional analysis and data fitting, respectively. Results show that the proposed simplified method has a high accuracy and can be employed to evaluate the collector efficiency, the heat exchange effectiveness and the air temperature rise. The proposed method in this paper is beneficial to directly determine design parameters and operating status for UTC.

  10. Simplified pipe gun

    International Nuclear Information System (INIS)

    Sorensen, H.; Nordskov, A.; Sass, B.; Visler, T.

    1987-01-01

    A simplified version of a deuterium pellet gun based on the pipe gun principle is described. The pipe gun is made from a continuous tube of stainless steel and gas is fed in from the muzzle end only. It is indicated that the pellet length is determined by the temperature gradient along the barrel right outside the freezing cell. Velocities of around 1000 m/s with a scatter of ±2% are obtained with a propellant gas pressure of 40 bar

  11. The Numerical Welding Simulation - Developments and Validation of Simplified and Bead Lumping Methods

    International Nuclear Information System (INIS)

    Baup, Olivier

    2001-01-01

    The aim of this work was to study the TIG multipass welding process on stainless steel by means of numerical methods, and then to work out simplified and bead lumping methods in order to reduce the set-up and run times of these calculations. A simulation was used as reference for the validation of these methods; after the presentation of the test series that led to the option choices of this calculation (2D generalised plane strains, elastoplastic model with isotropic hardening, hardening restoration due to high temperatures), various simplifications were tried on a plate geometry. These simplifications concerned various modelling points while preserving a correct representation of plastic flow in the plate. Using a reduced number of thermal fields characterising the bead deposit and a small number of tensile curves gives satisfactory results while significantly decreasing computing times. In addition, various bead lumping methods have been studied, concerning both the shape and the thermal treatment of the macro-deposits. The macro-deposit shapes studied are L-shaped, layer-shaped, or represent two beads stacked one on top of the other. Among these three methods, only those using a small number of lumped beads gave poor results, since the thermo-mechanical history was deeply modified near and inside the weld. Thereafter, the simplified methods were applied to a tubular geometry. On this new geometry, experimental measurements were made during welding, which allowed a validation of the reference calculation. Simplified and reference calculations gave approximately the same stress fields as on the plate geometry. Finally, the last part of this document presents a procedure for automatic data setting that significantly reduces the preparation of the calculation phase. It has been applied to the calculation of thick pipe welding in 90 beads; the results are compared with a simplified simulation realised by Framatome and with experimental measurements. A bead by

  12. Simplifying the Reinsch algorithm for the Baker-Campbell-Hausdorff series

    Science.gov (United States)

    Van-Brunt, Alexander; Visser, Matt

    2016-02-01

    The Goldberg version of the Baker-Campbell-Hausdorff series computes the quantity $Z(X,Y) = \ln(e^X e^Y) = \sum_w g(w)\, w(X,Y)$, where X and Y are not necessarily commuting, in terms of "words" constructed from the {X, Y} "alphabet." The so-called Goldberg coefficients g(w) are the central topic of this article. This Baker-Campbell-Hausdorff series is a general purpose tool of very wide applicability in mathematical physics, quantum physics, and many other fields. The Reinsch algorithm for the truncated series permits one to calculate the Goldberg coefficients up to some fixed word length |w| by using nilpotent (|w| + 1) × (|w| + 1) matrices. We shall show how to further simplify the Reinsch algorithm, making its implementation (in principle) utterly straightforward using "off the shelf" symbolic manipulation software. Specific computations provide examples which help to provide a deeper understanding of the Goldberg coefficients and their properties. For instance, we shall establish some strict bounds (and some equalities) on the number of non-zero Goldberg coefficients. Unfortunately, we shall see that the number of nonzero Goldberg coefficients often grows very rapidly (in fact exponentially) with the word length |w|. Furthermore, the simplified Reinsch algorithm readily generalizes to many closely related but still quite distinct problems—we shall also present closely related results for the symmetric product $S(X,Y) = \ln(e^{X/2} e^Y e^{X/2}) = \sum_w g_S(w)\, w(X,Y)$. Variations on such themes are straightforward. For instance, one can just as easily consider the "loop" product $L(X,Y) = \ln(e^X e^Y e^{-X} e^{-Y}) = \sum_w g_L(w)\, w(X,Y)$. This "loop" type of series is of interest, for instance, when considering either differential geometric parallel transport around a closed curve, non-Abelian versions of Stokes' theorem, or even Wigner rotation/Thomas precession in special
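
    As a hedged illustration of how "off the shelf" symbolic software can reach the low-order terms of such series, the sketch below expands Z(X, Y) = log(exp(X) exp(Y)) by direct series truncation with non-commuting SymPy symbols. It is not the Reinsch nilpotent-matrix construction described in the paper, and the word-length cutoff N is arbitrary.

```python
# Direct truncated computation of the BCH series Z(X,Y) = log(exp(X) exp(Y))
# with non-commuting symbols, up to words of length N. This is a plain
# series truncation for illustration, not the Reinsch nilpotent-matrix scheme.
import sympy as sp

N = 4
X = sp.Symbol('X', commutative=False)
Y = sp.Symbol('Y', commutative=False)

def word_length(term):
    """Number of X/Y letters in a (non-commutative) monomial."""
    length = 0
    for factor in sp.Mul.make_args(term):
        base, exp = factor.as_base_exp()
        if base in (X, Y):
            length += int(exp)
    return length

def truncate(expr, order):
    """Drop all words longer than `order` letters."""
    expr = sp.expand(expr)
    return sp.Add(*[t for t in expr.as_ordered_terms() if word_length(t) <= order])

def exp_series(A, order):
    result, power = sp.Integer(1), sp.Integer(1)
    for k in range(1, order + 1):
        power = truncate(power * A, order)
        result += power / sp.factorial(k)
    return sp.expand(result)

def log_series(B, order):        # log(1 + B), where B has no constant term
    result, power = sp.Integer(0), sp.Integer(1)
    for k in range(1, order + 1):
        power = truncate(power * B, order)
        result += sp.Rational((-1) ** (k + 1), k) * power
    return sp.expand(result)

E = truncate(exp_series(X, N) * exp_series(Y, N), N)
Z = truncate(log_series(E - 1, N), N)
print(Z)   # X + Y + X*Y/2 - Y*X/2 + ... collected as words in X and Y
```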

  13. Combinatorial computational chemistry approach to the design of metal catalysts for deNOx

    International Nuclear Information System (INIS)

    Endou, Akira; Jung, Changho; Kusagaya, Tomonori; Kubo, Momoji; Selvam, Parasuraman; Miyamoto, Akira

    2004-01-01

    Combinatorial chemistry is an efficient technique for the synthesis and screening of a large number of compounds. Recently, we introduced the combinatorial approach to computational chemistry for catalyst design and proposed a new method called ''combinatorial computational chemistry''. In the present study, we have applied this combinatorial computational chemistry approach to the design of precious metal catalysts for deNOx. As the first step of the screening of the metal catalysts, we studied Rh, Pd, Ag, Ir, Pt, and Au clusters with regard to their adsorption properties towards the NO molecule. It was demonstrated that the energetically most stable adsorption state of NO was found on the Ir model cluster, irrespective of both the shape and the number of atoms of the model clusters

  14. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double-redundancy technique, which cannot always be satisfied since two channels alone provide no basis for judging which is faulty. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  15. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    Directory of Open Access Journals (Sweden)

    Yaodong Xing

    2012-08-01

    Full Text Available Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double-redundancy technique, which cannot always be satisfied since two channels alone provide no basis for judging which is faulty. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  16. Simplified Models for Dark Matter Searches at the LHC

    CERN Document Server

    Abdallah, Jalal; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; de Jong, Paul; De Roeck, Albert; de Vries, Kees; del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M.P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-01-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediation is discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  17. Simplified models for dark matter searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediation is discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  18. High fidelity thermal-hydraulic analysis using CFD and massively parallel computers

    International Nuclear Information System (INIS)

    Weber, D.P.; Wei, T.Y.C.; Brewster, R.A.; Rock, Daniel T.; Rizwan-uddin

    2000-01-01

    Thermal-hydraulic analyses play an important role in design and reload analysis of nuclear power plants. These analyses have historically relied on early generation computational fluid dynamics capabilities, originally developed in the 1960s and 1970s. Over the last twenty years, however, dramatic improvements in both computational fluid dynamics codes in the commercial sector and in computing power have taken place. These developments offer the possibility of performing large scale, high fidelity, core thermal hydraulics analysis. Such analyses will allow a determination of the conservatism employed in traditional design approaches and possibly justify the operation of nuclear power systems at higher powers without compromising safety margins. The objective of this work is to demonstrate such a large scale analysis approach using a state of the art CFD code, STAR-CD, and the computing power of massively parallel computers, provided by IBM. A high fidelity representation of a current generation PWR was analyzed with the STAR-CD CFD code and the results were compared to traditional analyses based on the VIPRE code. Current design methodology typically involves a simplified representation of the assemblies, where a single average pin is used in each assembly to determine the hot assembly from a whole core analysis. After determining this assembly, increased refinement is used in the hot assembly, and possibly some of its neighbors, to refine the analysis for purposes of calculating DNBR. This latter calculation is performed with sub-channel codes such as VIPRE. The modeling simplifications that are used involve the approximate treatment of surrounding assemblies and coarse representation of the hot assembly, where the subchannel is the lowest level of discretization. In the high fidelity analysis performed in this study, both restrictions have been removed. Within the hot assembly, several hundred thousand to several million computational zones have been used, to

  19. The challenge of forecasting impacts of flash floods: test of a simplified hydraulic approach and validation based on insurance claim data

    Science.gov (United States)

    Le Bihan, Guillaume; Payrastre, Olivier; Gaume, Eric; Moncoulon, David; Pons, Frédéric

    2017-11-01

    Up to now, flash flood monitoring and forecasting systems, based on rainfall radar measurements and distributed rainfall-runoff models, generally aimed at estimating flood magnitudes - typically discharges or return periods - at selected river cross sections. The approach presented here goes one step further by proposing an integrated forecasting chain for the direct assessment of flash flood possible impacts on inhabited areas (number of buildings at risk in the presented case studies). The proposed approach includes, in addition to a distributed rainfall-runoff model, an automatic hydraulic method suited for the computation of flood extent maps on a dense river network and over large territories. The resulting catalogue of flood extent maps is then combined with land use data to build a flood impact curve for each considered river reach, i.e. the number of inundated buildings versus discharge. These curves are finally used to compute estimated impacts based on forecasted discharges. The approach has been extensively tested in the regions of Alès and Draguignan, located in the south of France, where well-documented major flash floods recently occurred. The article presents two types of validation results. First, the automatically computed flood extent maps and corresponding water levels are tested against rating curves at available river gauging stations as well as against local reference or observed flood extent maps. Second, a rich and comprehensive insurance claim database is used to evaluate the relevance of the estimated impacts for some recent major floods.
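
    The last step of the chain, converting a forecasted discharge into an estimated impact through a pre-computed impact curve, reduces to a table lookup with interpolation, sketched below; the discharge and building counts are invented and are not taken from the Alès or Draguignan case studies.

```python
# Sketch of the final step of the forecasting chain: convert a forecasted
# discharge into an estimated number of inundated buildings by interpolating
# a pre-computed impact curve for the river reach. All numbers are invented.
import numpy as np

# impact curve for one reach: discharge [m^3/s] -> buildings at risk
discharge_grid = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
buildings_at_risk = np.array([0, 12, 85, 240, 610])

def estimated_impact(forecast_discharge):
    """Piecewise-linear lookup in the reach's impact curve."""
    return float(np.interp(forecast_discharge, discharge_grid, buildings_at_risk))

for q in (75.0, 300.0, 1000.0):          # forecasted discharges
    print(q, "->", round(estimated_impact(q)), "buildings")
```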

  20. A computational approach to negative priming

    Science.gov (United States)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psycholol: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). The dependence on subtle parameter changes (such as response-stimulus interval) usually varies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  1. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  2. Pedagogical Approaches to Teaching with Computer Simulations in Science Education

    NARCIS (Netherlands)

    Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael

    2013-01-01

    For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is

  3. Novel computational approaches characterizing knee physiotherapy

    Directory of Open Access Journals (Sweden)

    Wangdo Kim

    2014-01-01

    Full Text Available A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.

  4. Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach

    Directory of Open Access Journals (Sweden)

    Grover Kearns

    2010-06-01

    Full Text Available Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants to use forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants. Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using perceptions of the students about the success of the methodology and their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.

  5. Simplified design and evaluation of liquid storage tanks relative to earthquake loading

    Energy Technology Data Exchange (ETDEWEB)

    Poole, A.B.

    1994-06-01

    A summary of earthquake-induced damage in liquid storage tanks is provided. The general analysis steps for dynamic response of fluid-filled tanks subject to horizontal ground excitation are discussed. This work gives particular attention to understanding the observed tank-failure modes. These modes are quite diverse in nature, but many of the commonly appearing patterns are believed to be shell buckling. A generalized and simple-to-apply shell loading will be developed using Fluegge shell theory. The input to this simplified analysis will be horizontal ground acceleration and tank shell form parameters. A dimensionless parameter will be developed and used in predictions of buckling resulting from earthquake-imposed loads. This prediction method will be applied to various tank designs that have failed during major earthquakes and during shaker-table tests. Tanks that have not failed will also be reviewed. A simplified approach will be discussed for early design and evaluation of tank shell parameters and materials to provide a high confidence of low probability of failure during earthquakes.

  6. Design and performance analysis of solid-propellant rocket motors using a simplified computer program

    Science.gov (United States)

    Sforzini, R. H.

    1972-01-01

    An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN 4 and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.

  7. Computational routine and simplified equation for modeling sediment transport capacity in a Dystrophic Hapludox

    Directory of Open Access Journals (Sweden)

    Gilmar E. Cerquetani

    2006-08-01

    Full Text Available The objectives of the present work were to develop a computational routine to solve the Yalin equation and the Shields diagram and to evaluate a simplified equation for modeling sediment transport capacity in a Dystrophic Hapludox that could be used in the Water Erosion Prediction Project - WEPP, as well as in other soil erosion models. Sediment transport capacity for shallow overland flow was represented as a power function of the hydraulic shear stress, which was shown to be an approximation of the Yalin equation for sediment transport capacity. The simplified equation could be applied to experimental data from a complex topography. It accurately approximated the Yalin equation when calibrated using the mean hydraulic shear stress. Validation tests using independent data showed that the simplified equation performed well in predicting sediment transport capacity.

  8. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases reduced complexity and marginal performance degradation.

  9. Computational approach to Riemann surfaces

    CERN Document Server

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existent computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition, examples of solutions to partial differential equations and problems in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  10. GPU-based local interaction simulation approach for simplified temperature effect modelling in Lamb wave propagation used for damage detection

    International Nuclear Information System (INIS)

    Kijanka, P; Radecki, R; Packo, P; Staszewski, W J; Uhl, T

    2013-01-01

    Temperature has a significant effect on Lamb wave propagation. It is important to compensate for this effect when the method is considered for structural damage detection. The paper explores a newly proposed, very efficient numerical simulation tool for Lamb wave propagation modelling in aluminum plates exposed to temperature changes. A local interaction approach implemented with a parallel computing architecture and graphics cards is used for these numerical simulations. The numerical results are compared with the experimental data. The results demonstrate that the proposed approach could be used efficiently to produce a large database required for the development of various temperature compensation procedures in structural health monitoring applications. (paper)

  11. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im

  12. Development of technologies on innovative-simplified nuclear power plant using high-efficiency steam injectors. (2) Analysis of heat balance of innovative-simplified nuclear power plant

    International Nuclear Information System (INIS)

    Goto, Shoji; Ohmori, Shuichi; Mori, Mitchitsugu

    2004-01-01

    It is possible to establish simplified systems with reduced space and equipment by using a high-efficiency Steam Injector (SI) instead of low-pressure feedwater heaters in a Nuclear Power Plant (NPP). The SI works as a heat exchanger through direct contact between feedwater from the condenser and extracted steam from the turbine. It can reach a higher pressure than the supplied steam pressure, so it can reduce the number of feedwater pumps. Maintainability and reliability are also higher because the SI has no moving parts. This paper describes the analysis of the heat balance and plant efficiency of this Innovative-Simplified NPP with high-efficiency SI. The plant efficiency is compared between the 1100 MWe class original BWR system and the Innovative-Simplified BWR system with SI. The SI model is adapted into the heat balance simulator with a simplified model. The results show that plant efficiencies of the Innovative-Simplified BWR system are almost equal to those of the original BWR. The present research is one of the projects that are carried out by Tokyo Electric Power Company, Toshiba Corporation, and six Universities in Japan, funded from the Institute of Applied Energy (IAE) of Japan as the national public research-funded program. (author)

  13. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    Science.gov (United States)

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a disordered (multinomial) multiclass logistic regression model. As a case study for the study area, Longchuan county in Guangdong province, the learning sample capacity was determined for a given confidence level and estimation accuracy, a c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database, a disordered logistic classifier model was then built, and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicated that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, changing the traditional method of soil quality grade evaluation.
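
    A minimal sketch of the two-stage idea described above, assuming synthetic data and the scikit-learn library (the paper's actual soil attributes, its fuzzy c-means variant and sample sizes are not reproduced here): a clustering step extracts a small, representative learning sample, and a multinomial logistic regression is then trained on it.

        # Illustrative sketch, not the authors' code: k-means (a crisp stand-in for
        # c-means) extracts a simplified learning sample; a multinomial logistic
        # regression then classifies soil quality grades.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a soil quality database: 6 attributes, 4 quality grades.
        X, y = make_classification(n_samples=5000, n_features=6, n_informative=5,
                                   n_redundant=0, n_classes=4, n_clusters_per_class=1,
                                   random_state=0)
        X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        # Simplified learning sample: the pool samples closest to each cluster centre.
        km = KMeans(n_clusters=60, n_init=10, random_state=0).fit(X_pool)
        idx = [int(np.argmin(np.linalg.norm(X_pool - c, axis=1))) for c in km.cluster_centers_]
        X_small, y_small = X_pool[idx], y_pool[idx]

        clf = LogisticRegression(max_iter=1000).fit(X_small, y_small)  # multinomial by default
        print("accuracy with the simplified sample:", clf.score(X_test, y_test))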

  14. The challenge of forecasting impacts of flash floods: test of a simplified hydraulic approach and validation based on insurance claim data

    Directory of Open Access Journals (Sweden)

    G. Le Bihan

    2017-11-01

    Full Text Available Up to now, flash flood monitoring and forecasting systems, based on rainfall radar measurements and distributed rainfall–runoff models, generally aimed at estimating flood magnitudes – typically discharges or return periods – at selected river cross sections. The approach presented here goes one step further by proposing an integrated forecasting chain for the direct assessment of possible flash flood impacts on inhabited areas (the number of buildings at risk in the presented case studies). The proposed approach includes, in addition to a distributed rainfall–runoff model, an automatic hydraulic method suited for the computation of flood extent maps on a dense river network and over large territories. The resulting catalogue of flood extent maps is then combined with land use data to build a flood impact curve for each considered river reach, i.e. the number of inundated buildings versus discharge. These curves are finally used to compute estimated impacts based on forecasted discharges. The approach has been extensively tested in the regions of Alès and Draguignan, located in the south of France, where well-documented major flash floods recently occurred. The article presents two types of validation results. First, the automatically computed flood extent maps and corresponding water levels are tested against rating curves at available river gauging stations as well as against local reference or observed flood extent maps. Second, a rich and comprehensive insurance claim database is used to evaluate the relevance of the estimated impacts for some recent major floods.
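
    The impact-curve step can be illustrated with a short sketch (the discharge values and building counts below are invented, not taken from the study): each reach has an offline-built curve of inundated buildings versus discharge, and a forecasted discharge is converted to an impact estimate by interpolation.

        # Hypothetical impact curve for one river reach, built offline from the
        # catalogue of flood extent maps combined with land use data.
        import numpy as np

        discharge_m3s = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
        buildings_at_risk = np.array([0, 12, 85, 240, 610])

        def estimated_impact(q_forecast):
            """Interpolate the number of buildings at risk for a forecasted discharge."""
            return float(np.interp(q_forecast, discharge_m3s, buildings_at_risk))

        print(estimated_impact(300.0))  # about 162 buildings for a 300 m3/s forecast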

  15. Computational Approach for Studying Optical Properties of DNA Systems in Solution

    DEFF Research Database (Denmark)

    Nørby, Morten Steen; Svendsen, Casper Steinmann; Olsen, Jógvan Magnus Haugaard

    2016-01-01

    In this paper we present a study of the methodological aspects regarding calculations of optical properties for DNA systems in solution. Our computational approach will be built upon a fully polarizable QM/MM/Continuum model within a damped linear response theory framework. In this approach the environment is given a highly advanced description in terms of the electrostatic potential through the polarizable embedding model. Furthermore, bulk solvent effects are included in an efficient manner through a conductor-like screening model. With the aim of reducing the computational cost we develop a set of averaged partial charges and distributed isotropic dipole-dipole polarizabilities for DNA suitable for describing the classical region in ground-state and excited-state calculations. Calculations of the UV-spectrum of the 2-aminopurine optical probe embedded in a DNA double helical structure are presented...

  16. New Approaches to the Computer Simulation of Amorphous Alloys: A Review.

    Science.gov (United States)

    Valladares, Ariel A; Díaz-Celaya, Juan A; Galván-Colín, Jonathan; Mejía-Mendoza, Luis M; Reyes-Retana, José A; Valladares, Renela M; Valladares, Alexander; Alvarez-Ramirez, Fernando; Qu, Dongdong; Shen, Jun

    2011-04-13

    In this work we review our new methods to computationally generate amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe2 chalcogenide; aluminum-based systems: AlN and AlSi, and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized periodically-continued cells with at least 108 atoms. The computational thermal process to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, which consists in linearly heating the samples to just below their melting (or liquidus) temperatures, and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps. We find that a time step four times the default is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared whenever possible with experimental results, and the agreement is very good. For some materials we report studies of the effect of the topological disorder on their electronic and vibrational densities of states and on their optical properties.

  17. Nonadiabatic holonomic quantum computation using Rydberg blockade

    Science.gov (United States)

    Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan

    2018-04-01

    In this paper, we propose a scheme for realizing nonadiabatic holonomic computation assisted by two atoms and the shortcuts to adiabaticity (STA). The blockade effect induced by strong Rydberg-mediated interaction between two Rydberg atoms provides us the possibility to simplify the dynamics of the system, and the STA helps us design pulses for implementing the holonomic computation with high fidelity. Numerical simulations show the scheme is noise immune and decoherence resistant. Therefore, the current scheme may provide some useful perspectives for realizing nonadiabatic holonomic computation.

  18. The simplified spherical harmonics (SPL) methodology with space and moment decomposition in parallel environments

    International Nuclear Information System (INIS)

    Gianluca, Longoni; Alireza, Haghighat

    2003-01-01

    In recent years, the SPL (simplified spherical harmonics) equations have received renewed interest for the simulation of nuclear systems. We have derived the SPL equations starting from the even-parity form of the SN equations. The SPL equations form a system of (L+1)/2 second-order partial differential equations that can be solved with standard iterative techniques such as the Conjugate Gradient (CG) method. We discretized the SPL equations with the finite-volume approach in 3-D Cartesian space. We developed a new general 3-D code, PenspL (Parallel Environment Neutral-particle SPL). PenspL solves both fixed-source and criticality eigenvalue problems. In order to optimize memory management, we implemented a Compressed Diagonal Storage (CDS) scheme to store the SPL matrices. PenspL includes parallel algorithms for space and moment domain decomposition. The computational load is distributed on different processors using a mapping function, which maps the 3-D Cartesian space and moments onto processors. The code is written in Fortran 90 using the Message Passing Interface (MPI) libraries for the parallel implementation of the algorithm. The code has been tested on the Pcpen cluster and the parallel performance has been assessed in terms of speed-up and parallel efficiency. (author)
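
    The mapping function mentioned above can be pictured with a small sketch (written in Python rather than the original Fortran 90/MPI, and with a block-decomposition rule assumed purely for illustration): each spatial cell and moment index is assigned to one rank of a Cartesian processor grid.

        def map_to_rank(i, j, k, moment, nx, ny, nz, n_moments, px, py, pz, pm):
            """Block-decompose the (space x moment) domain onto a px*py*pz*pm processor grid."""
            bi = i * px // nx              # block index along x
            bj = j * py // ny              # block index along y
            bk = k * pz // nz              # block index along z
            bm = moment * pm // n_moments  # block index along the moment axis
            return ((bm * pz + bk) * py + bj) * px + bi  # flatten to a single rank

        # Example: 64x64x64 cells, 2 moment groups, on a 4x2x2x1 processor grid.
        print(map_to_rank(10, 33, 60, 1, 64, 64, 64, 2, 4, 2, 2, 1))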

  19. GriF: A Grid framework for a Web Service approach to reactive scattering

    Science.gov (United States)

    Manuali, C.; Laganà, A.; Rampino, S.

    2010-07-01

    Grid empowered calculations are becoming an important advanced tool indispensable for scientific advances. The possibility of simplifying and harmonizing the work carried out by computational scientists using a Web Service approach is considered here. To this end, a new Collaborative Grid Framework has been developed and tested. As a study case, a three-dimensional reactive scattering code dealing with atom-diatom systems has been considered, and an extended study of the energy dependence of the electronically adiabatic reactivity of N+N has been performed on the EGEE Grid.

  20. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
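
    A toy illustration of the "shot-gun" idea (not BioSimplify's actual model or heuristics): many simpler variants of a sentence are produced by optionally dropping bracketed and comma-delimited constituents, in all combinations.

        import itertools
        import re

        def variants(sentence):
            # Candidate optional chunks: parentheticals and comma-bounded insertions (rough heuristic).
            optional = re.findall(r"\([^)]*\)|, [^,]+,", sentence)
            out = set()
            for r in range(len(optional) + 1):
                for combo in itertools.combinations(optional, r):
                    s = sentence
                    for chunk in combo:
                        s = s.replace(chunk, "," if chunk.startswith(",") else "")
                    s = re.sub(r"\s+([,.])", r"\1", re.sub(r"\s+", " ", s)).strip()
                    out.add(s)
            return sorted(out)

        for v in variants("TP53 (a tumour suppressor), which is often mutated, binds MDM2 (an E3 ligase)."):
            print(v)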

  1. The Simulation of an Oxidation-Reduction Titration Curve with Computer Algebra

    Science.gov (United States)

    Whiteley, Richard V., Jr.

    2015-01-01

    Although the simulation of an oxidation/reduction titration curve is an important exercise in an undergraduate course in quantitative analysis, that exercise is frequently simplified to accommodate computational limitations. With the use of readily available computer algebra systems, however, such curves for complicated systems can be generated…
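
    As a hedged illustration of what such a simulation involves (done here with numerical root-finding; a computer algebra system can set up the same equations symbolically), the sketch below traces the titration of Fe2+ with Ce4+ by solving the electron balance from the Nernst equations at each titrant volume. The formal potentials are approximate textbook values, not data from the article.

        import numpy as np
        from scipy.optimize import brentq

        E0_Fe, E0_Ce = 0.771, 1.70   # V vs SHE, approximate formal potentials
        nernst = 0.05916             # V per decade at 25 C for one-electron couples
        C_Fe, V_Fe = 0.05, 50.0      # 0.05 M Fe2+ analyte, 50 mL
        C_Ce = 0.05                  # 0.05 M Ce4+ titrant

        def cell_potential(v_titrant_mL):
            n_Fe = C_Fe * V_Fe / 1000.0          # total mol Fe
            n_Ce = C_Ce * v_titrant_mL / 1000.0  # total mol Ce added
            def electron_balance(E):
                fe3 = n_Fe / (1.0 + 10 ** ((E0_Fe - E) / nernst))  # mol Fe3+ at potential E
                ce3 = n_Ce / (1.0 + 10 ** ((E - E0_Ce) / nernst))  # mol Ce3+ at potential E
                return fe3 - ce3                                   # electrons lost = electrons gained
            return brentq(electron_balance, 0.0, 2.5)

        for v in (5.0, 25.0, 49.0, 50.0, 51.0, 75.0):
            print(f"{v:5.1f} mL -> {cell_potential(v):.3f} V")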

  2. Optical chirp z-transform processor with a simplified architecture.

    Science.gov (United States)

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
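
    For readers unfamiliar with the underlying transform, a generic digital chirp z-transform computed by the convolution (Bluestein) method is sketched below; this shows the mathematical operation only, not the optical processor architecture of the paper.

        import numpy as np

        def czt(x, M, W, A=1.0):
            """Chirp z-transform of x evaluated at z_k = A * W**(-k), k = 0..M-1."""
            x = np.asarray(x, dtype=complex)
            N = len(x)
            n, k = np.arange(N), np.arange(M)
            a = x * A ** (-n) * W ** (n ** 2 / 2.0)        # pre-multiply by the chirp
            m = np.arange(-(N - 1), M)
            b = W ** (-(m ** 2) / 2.0)                     # chirp filter coefficients
            L = len(a) + len(b) - 1
            conv = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b, L))  # linear convolution
            return W ** (k ** 2 / 2.0) * conv[N - 1:N - 1 + M]

        # Sanity check: with A = 1 and W = exp(-2*pi*j/N), the CZT reduces to the DFT.
        x = np.random.default_rng(0).standard_normal(16)
        assert np.allclose(czt(x, 16, np.exp(-2j * np.pi / 16)), np.fft.fft(x))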

  3. Global existence of periodic solutions on a simplified BAM neural network model with delays

    International Nuclear Information System (INIS)

    Zheng Baodong; Zhang Yazhuo; Zhang Chunrui

    2008-01-01

    A simplified n-dimensional BAM neural network model with delays is considered. Some results on Hopf bifurcations occurring at the zero equilibrium as the delay increases are exhibited. Global existence of periodic solutions is established using a global Hopf bifurcation result of Wu [Wu J. Symmetric functional-differential equations and neural networks with memory. Trans Am Math Soc 1998;350:4799-838], and a Bendixson criterion for higher dimensional ordinary differential equations due to Li and Muldowney [Li MY, Muldowney J. On Bendixson's criterion. J Differ Equations 1994;106:27-39]. Finally, computer simulations are performed to illustrate the analytical results found.
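
    A minimal numerical sketch of the kind of system studied (the weights, delay and activation below are illustrative assumptions, not the paper's model): a two-neuron BAM loop with a discrete delay, integrated by the explicit Euler method, settles onto a periodic orbit once the delay is large enough, which is the Hopf scenario discussed above.

        import math

        tau, dt, T = 2.0, 0.001, 60.0   # delay, time step, total time (illustrative values)
        a, b = 2.0, -1.5                # connection weights (assumed)
        d = int(tau / dt)               # delay expressed in time steps
        x = [0.1] * (d + 1)             # history buffers: constant initial functions
        y = [-0.1] * (d + 1)
        f = math.tanh                   # smooth activation function

        for step in range(int(T / dt)):
            x_del, y_del = x[-d - 1], y[-d - 1]               # x(t - tau), y(t - tau)
            x.append(x[-1] + dt * (-x[-1] + a * f(y_del)))    # dx/dt = -x + a f(y(t - tau))
            y.append(y[-1] + dt * (-y[-1] + b * f(x_del)))    # dy/dt = -y + b f(x(t - tau))

        print("final state:", x[-1], y[-1])  # plot x and y over time to see the oscillation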

  4. Simplified computational simulation of liquid metal behaviour in turbulent flow with heat transfer

    International Nuclear Information System (INIS)

    Costa, E.B. da.

    1992-09-01

    The present work selected equations and empirical relationships from the available literature to develop a computer code that obtains turbulent velocity and temperature profiles in liquid metal tube flow with heat generation. The computer code is applied to a standard problem and the results are considered satisfactory, at least from the viewpoint of qualitative behaviour. (author). 50 refs, 21 figs, 3 tabs

  5. Proliferation and the Civilian Nuclear Fuel Cycle. Towards a Simplified Recipe to Measure Proliferation Risk

    Energy Technology Data Exchange (ETDEWEB)

    Brogli, R.; Krakowski, R.A

    2001-08-01

    The primary goal of this study is to frame the problem of nuclear proliferation in the context of protection and risks associated with nuclear materials flowing in the civilian nuclear fuel cycle. The perspective adopted for this study is that of a nuclear utility and the flow of fresh and spent nuclear fuel with which that utility must deal in the course of providing economic, safe, and ecologically acceptable electrical power to the public. Within this framework quantitative approaches to a material-dependent, simplified proliferation-risk metric are identified and explored. The driving force behind this search for such a proliferation metric derives from the need to quantify the proliferation risk in the context of evaluating various commercial nuclear fuel cycle options (e.g., plutonium recycle versus once-through). While the formulation of the algebra needed to describe the desired, simplified metric(s) should be straightforward once a modus operandi is defined, considerable interaction with the user of any final product that results is essential. Additionally, a broad contextual review of the proliferation problem and past efforts in the quantification of associated risks was developed as part of this study. This extensive review was essential to setting perspectives and establishing (feasibility) limits to the search for a proliferation metric(s) that meets the goals of this study. Past analyses of proliferation risks associated with the commercial nuclear fuel cycle have generally been based on a range of decision-analysis, operations-research tools. Within the time and budget constraints, as well as the self-enforced (utility) customer focus, the more subjective and data-intensive decision-analysis methodologies were not pursued. Three simplified, less-subjective approaches were investigated instead: a) a simplified 'four-factor' formula expressing as a normalized product the political, material-quantity, material-quality, and material

  6. Proliferation and the Civilian Nuclear Fuel Cycle. Towards a Simplified Recipe to Measure Proliferation Risk

    International Nuclear Information System (INIS)

    Brogli, R.; Krakowski, R.A.

    2001-08-01

    The primary goal of this study is to frame the problem of nuclear proliferation in the context of protection and risks associated with nuclear materials flowing in the civilian nuclear fuel cycle. The perspective adopted for this study is that of a nuclear utility and the flow of fresh and spent nuclear fuel with which that utility must deal in the course of providing economic, safe, and ecologically acceptable electrical power to the public. Within this framework quantitative approaches to a material-dependent, simplified proliferation-risk metric are identified and explored. The driving force behind this search for such a proliferation metric derives from the need to quantify the proliferation risk in the context of evaluating various commercial nuclear fuel cycle options (e.g., plutonium recycle versus once-through). While the formulation of the algebra needed to describe the desired, simplified metric(s) should be straightforward once a modus operandi is defined, considerable interaction with the user of any final product that results is essential. Additionally, a broad contextual review of the proliferation problem and past efforts in the quantification of associated risks was developed as part of this study. This extensive review was essential to setting perspectives and establishing (feasibility) limits to the search for a proliferation metric(s) that meets the goals of this study. Past analyses of proliferation risks associated with the commercial nuclear fuel cycle have generally been based on a range of decision-analysis, operations-research tools. Within the time and budget constraints, as well as the self-enforced (utility) customer focus, the more subjective and data-intensive decision-analysis methodologies were not pursued. Three simplified, less-subjective approaches were investigated instead: a) a simplified 'four-factor' formula expressing as a normalized product the political, material-quantity, material-quality, and material-protection metrics; b

  7. Simplifying Multiproject Scheduling Problem Based on Design Structure Matrix and Its Solution by an Improved aiNet Algorithm

    Directory of Open Access Journals (Sweden)

    Chunhua Ju

    2012-01-01

    Full Text Available Managing multiple projects is a complex task involving the unrelenting pressures of time and cost. Many studies have proposed various tools and techniques for single-project scheduling; however, the literature considering multimode or multiproject issues occurring in the real world is rather scarce. In this paper, the design structure matrix (DSM) and an improved artificial immune network algorithm (aiNet) are developed to solve a multi-mode resource-constrained scheduling problem. Firstly, the DSM is used to simplify the mathematical model of the multi-project scheduling problem. Subsequently, the aiNet algorithm, comprising clonal selection, negative selection, and network suppression, is adopted to realize local and global searching, which assures a powerful searching ability while avoiding a possible combinatorial explosion. Finally, the approach is tested on a set of cases randomly generated from ProGen. The computational results validate the effectiveness of the proposed algorithm compared with other well-known metaheuristic algorithms such as the genetic algorithm (GA), simulated annealing algorithm (SA), and ant colony optimization (ACO).
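
    A toy illustration of the DSM-based simplification step (the activities and dependencies are invented, and the aiNet search itself is not shown): the design structure matrix records which activities need information or resources from which others, so independent activities can be grouped for parallel scheduling and non-existent couplings dropped from the model.

        import numpy as np

        activities = ["A1", "A2", "B1", "B2", "C1"]
        # dsm[i, j] = 1 means activity i depends on activity j (assumed example data).
        dsm = np.array([
            [0, 0, 0, 0, 0],   # A1 depends on nothing
            [1, 0, 0, 0, 0],   # A2 depends on A1
            [0, 0, 0, 0, 0],   # B1 depends on nothing
            [0, 0, 1, 0, 0],   # B2 depends on B1
            [0, 1, 0, 1, 0],   # C1 depends on A2 and B2 (cross-project coupling)
        ])

        def schedulable(done):
            """Activities whose DSM predecessors are all finished."""
            return [activities[i] for i in range(len(activities))
                    if activities[i] not in done
                    and all(activities[j] in done for j in np.nonzero(dsm[i])[0])]

        print(schedulable(set()))          # ['A1', 'B1'] can start in parallel
        print(schedulable({"A1", "B1"}))   # then ['A2', 'B2']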

  8. Simplified Method for Rapid Purification of Soluble Histones

    Directory of Open Access Journals (Sweden)

    Nives Ivić

    2016-06-01

    Full Text Available Functional and structural studies of histone-chaperone complexes, nucleosome modifications, and their interactions with remodelers and regulatory proteins rely on obtaining recombinant histones from bacteria. In the present study, we show that co-expression of Xenopus laevis histone pairs leads to production of soluble H2AH2B heterodimer and (H3H4)2 heterotetramer. The soluble histone complexes are purified by simple chromatographic techniques. The obtained H2AH2B dimer and H3H4 tetramer are proficient in histone chaperone binding and in histone octamer and nucleosome formation. Our optimized protocol enables rapid purification of multiple soluble histone variants with a remarkably high yield and simplifies histone octamer preparation. We expect that this simple approach will contribute to histone chaperone and chromatin research.

  9. A simplified study of trans-mitral Doppler patterns

    Directory of Open Access Journals (Sweden)

    Thomas George

    2008-11-01

    Full Text Available Background: Trans-mitral Doppler produces complex patterns with a great deal of variability. There are several confusing numerical measures and indices to study these patterns. However, trans-mitral Doppler produces ready-made data visualization by pattern generation, which can be interpreted by pattern analysis. By following a systematic approach we could create an order and use this tool to study cardiac function. Presentation of the hypothesis: In this new approach we eliminate the variables and apply pattern recognition as the main criterion of study. Proper terminologies are also devised to avoid confusion. In this way we can get some meaningful information. Testing the hypothesis: Trans-mitral Doppler should be seen as patterns rather than amplitudes. The hypothesis can be proven by logical deduction, extrapolation and elimination of variables. Trans-mitral flow is also analyzed vis-à-vis the Starling's Law applied to the left atrium. Implications of the hypothesis: Trans-mitral Doppler patterns are not just useful for evaluating diastolic function; they are also useful to evaluate systolic function. By following this schema we could get useful diagnostic information and therapeutic options using simple pattern recognition with minimal measurements. This simplified but practical approach will be useful in day-to-day clinical practice and help in understanding cardiac function better. This will also standardize research and improve communication.

  10. A statistical state dynamics approach to wall turbulence.

    Science.gov (United States)

    Farrell, B F; Gayme, D F; Ioannou, P J

    2017-03-13

    This paper reviews results obtained using statistical state dynamics (SSD) that demonstrate the benefits of adopting this perspective for understanding turbulence in wall-bounded shear flows. The SSD approach used in this work employs a second-order closure that retains only the interaction between the streamwise mean flow and the streamwise mean perturbation covariance. This closure restricts nonlinearity in the SSD to that explicitly retained in the streamwise constant mean flow together with nonlinear interactions between the mean flow and the perturbation covariance. This dynamical restriction, in which explicit perturbation-perturbation nonlinearity is removed from the perturbation equation, results in a simplified dynamics referred to as the restricted nonlinear (RNL) dynamics. RNL systems, in which a finite ensemble of realizations of the perturbation equation share the same mean flow, provide tractable approximations to the SSD, which is equivalent to an infinite ensemble RNL system. This infinite ensemble system, referred to as the stochastic structural stability theory system, introduces new analysis tools for studying turbulence. RNL systems provide computationally efficient means to approximate the SSD and produce self-sustaining turbulence exhibiting qualitative features similar to those observed in direct numerical simulations despite greatly simplified dynamics. The results presented show that RNL turbulence can be supported by as few as a single streamwise varying component interacting with the streamwise constant mean flow and that judicious selection of this truncated support or 'band-limiting' can be used to improve quantitative accuracy of RNL turbulence. These results suggest that the SSD approach provides new analytical and computational tools that allow new insights into wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'.

  11. Simplified elastoplastic fatigue analysis

    International Nuclear Information System (INIS)

    Autrusson, B.; Acker, D.; Hoffmann, A.

    1987-01-01

    Oligocyclic fatigue behaviour is a function of the local strain range. The design codes ASME Section III, RCC-M, Code Case N47, RCC-MR, and the Guide issued by PNC propose simplified methods to evaluate the local strain range. After having briefly described these simplified methods, we tested them by comparing experimentally measured strains with those predicted by these rules. The experiments conducted for this study involved perforated plates under tensile stress, notched or reinforced beams under four-point bending stress, grooved specimens under tensile-compressive stress, and embedded grooved beams under bending stress. The simplified methods display a degree of conservatism that depends on the case; the evaluation of the strains is rather inaccurate and sometimes lacks conservatism. So far, the proposal is to use finite element codes with a simple model. The isotropic model with the cyclic consolidation curve offers a good representation of the real equivalent strain. There is obviously no question of representing the cycles and the entire loading history, but merely of calculating the maximum variation in elastoplastic equivalent strain with a constant-rate loading. The results presented testify to the good prediction of the strains with this model. The maximum equivalent strain will be employed to evaluate fatigue damage.

  12. Some Mathematical Structures Including Simplified Non-Relativistic Quantum Teleportation Equations and Special Relativity

    International Nuclear Information System (INIS)

    Woesler, Richard

    2007-01-01

    The computations in the present text, with non-relativistic quantum teleportation equations and special relativity, are totally speculative; physically correct computations can be done using quantum field theory, which remains to be done in future work. Proposals for what might be called statistical time loop experiments with, e.g., photon polarization states are described when assuming the simplified non-relativistic quantum teleportation equations and special relativity. However, a closed time loop would usually not occur due to phase incompatibilities of the quantum states. Histories with such phase incompatibilities are called inconsistent ones in the present text, and it is assumed that only consistent histories would occur. This is called an exclusion principle for inconsistent histories, and it would yield that probabilities for certain measurement results change. Extended multiple parallel experiments are proposed to use this statistically for transmission of classical information over distances, and regarding time. Experiments might be testable in the near future. However, first a deeper analysis, including quantum field theory, remains to be done in future work.

  13. Assessment of a simplified set of momentum closure relations for low volume fraction regimes in STAR-CCM+ and OpenFOAM

    International Nuclear Information System (INIS)

    Sugrue, Rosemary; Magolan, Ben; Lubchenko, Nazar; Baglietto, Emilio

    2017-01-01

    Highlights: •A simplified set of momentum closures – Bubbly And Moderate void Fraction (BAMF) – is proposed. •BAMF model is assessed by simulation of 12 cases from the Liu and Bankoff experimental database. •Portability between STAR-CCM+ and OpenFOAM CFD softwares is demonstrated. •Both CFD softwares yield mean flow predictions in close agreement with experimental results. -- Abstract: Multiphase computational fluid dynamics (M-CFD) modeling approaches provide three-dimensional resolution of complex two-phase flow and boiling heat transfer phenomena, which makes them an invaluable tool for nuclear reactor design applications. By virtue of the Eulerian-Eulerian spatial and temporal averaging framework, additional terms manifest in the phase momentum equations that require closure through prescription of interfacial forces in the stream-wise and lateral flow directions, as well as in the near-wall region. These momentum closures are critical to M-CFD prediction of mean flow profiles, including velocity and volume fraction distributions, and yet while an overwhelming number of them has been developed, no consensus exists on how to assemble them to achieve a simplified set of closures that is numerically robust and extensible to a wide array of flow configurations; further, no consistent demonstration has been shown of the cross-code portability of these closures between CFD softwares. To address these challenges, we propose in this work a simplified set of momentum closures for stream-wise drag and lateral redistribution mechanisms—collectively referred to as the Bubbly And Moderate void Fraction (BAMF) model—and assess its performance by simulation of 12 cases from the Liu and Bankoff experimental database using STAR-CCM+ and OpenFOAM. Both CFD softwares yield mean flow predictions that are in close agreement with the experimental results, and also in close agreement with each other. These results confirm the effectiveness of the BAMF model and its

  14. Development of technologies on innovative-simplified nuclear power plant using high-efficiency steam injectors (2) analysis of heat balance of innovative-simplified nuclear power plant

    International Nuclear Information System (INIS)

    Goto, S.; Ohmori, S.; Mori, M.

    2005-01-01

    It is possible to establish a simplified system with reduced space and total equipment weight by using a high-efficiency Steam Injector (SI) instead of low-pressure feedwater heaters in a Nuclear Power Plant (NPP) (1)-(6). The SI works as a heat exchanger through direct contact between feedwater from the condensers and extracted steam from the turbines. It can reach a higher pressure than the supplied steam pressure, so it can reduce the number of feedwater pumps. Maintainability and reliability are also higher because the SI has no moving parts. This paper describes the analysis of the heat balance and plant efficiency of this Innovative-Simplified NPP with high-efficiency SI. The plant efficiency is compared between the 1100 MWe-class BWR system and the Innovative-Simplified BWR system with SI. The SI model is adapted into the heat balance simulator with a simplified model. The results show that plant efficiencies of the Innovative-Simplified BWR system are almost equal to those of the original BWR. The present research is one of the projects that are carried out by Tokyo Electric Power Company, Toshiba Corporation, and six Universities in Japan, funded from the Institute of Applied Energy (IAE) of Japan as the national public research-funded program. (authors)
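
    The core of such a heat balance can be summarised by a zero-dimensional direct-contact mixing balance (a generic textbook balance, not the authors' detailed SI model):

        \[
        \dot m_w + \dot m_s = \dot m_{\mathrm{out}}, \qquad
        \dot m_w\,h_{w,\mathrm{in}} + \dot m_s\,h_{s,\mathrm{in}} = \dot m_{\mathrm{out}}\,h_{\mathrm{out}}
        \quad\Longrightarrow\quad
        h_{\mathrm{out}} = \frac{\dot m_w\,h_{w,\mathrm{in}} + \dot m_s\,h_{s,\mathrm{in}}}{\dot m_w + \dot m_s},
        \]

    where the subscripts w and s denote the feedwater and the extracted steam; the pressure rise delivered by the SI is a separate feature of the device and is not captured by this simple energy balance.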

  15. Novel computational approaches for the analysis of cosmic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)

    2016-07-01

    In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing a software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.

  16. Simplified ejector model for control and optimization

    International Nuclear Information System (INIS)

    Zhu Yinhai; Cai Wenjian; Wen Changyun; Li Yanzhong

    2008-01-01

    In this paper, a simple yet effective ejector model for a real time control and optimization of an ejector system is proposed. Firstly, a fundamental model for calculation of ejector entrainment ratio at critical working conditions is derived by one-dimensional analysis and the shock circle model. Then, based on thermodynamic principles and the lumped parameter method, the fundamental ejector model is simplified to result in a hybrid ejector model. The model is very simple, which only requires two or three parameters and measurement of two variables to determine the ejector performance. Furthermore, the procedures for on line identification of the model parameters using linear and non-linear least squares methods are also presented. Compared with existing ejector models, the solution of the proposed model is much easier without coupled equations and iterative computations. Finally, the effectiveness of the proposed model is validated by published experimental data. Results show that the model is accurate and robust and gives a better match to the real performances of ejectors over the entire operating range than the existing models. This model is expected to have wide applications in real time control and optimization of ejector systems
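
    The on-line identification step can be sketched with an ordinary least-squares fit (the two-parameter power-law form and the data below are placeholders, not the paper's hybrid model):

        import numpy as np
        from scipy.optimize import curve_fit

        def entrainment_model(pressure_ratio, a, b):
            """Hypothetical correlation: entrainment ratio as a power law of a pressure ratio."""
            return a * pressure_ratio ** b

        # Synthetic "measurements": suction-to-motive pressure ratio vs entrainment ratio.
        x = np.linspace(0.02, 0.10, 12)
        omega = 4.0 * x ** 0.8 + np.random.default_rng(0).normal(0.0, 0.005, x.size)

        (a_hat, b_hat), _ = curve_fit(entrainment_model, x, omega, p0=(1.0, 1.0))
        print(f"identified parameters: a = {a_hat:.3f}, b = {b_hat:.3f}")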

  17. Simplified Chua's attractor via bridging a diode pair

    Directory of Open Access Journals (Sweden)

    Quan Xu

    2015-04-01

    Full Text Available In this paper, a simplified Chua's circuit is realised by bridging a diode pair between a passive LC oscillator (inductance and capacitance in parallel connection) and an active RC filter (resistance and capacitance in parallel connection). The dynamical behaviours of the circuit are investigated by numerical simulations and verified by experimental measurements. It is found that the simplified circuit generates Chua-like attractors and demonstrates complex non-linear phenomena, including coexisting bifurcation modes and coexisting attractors in particular.
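
    For orientation, the classic dimensionless Chua system can be integrated in a few lines (standard double-scroll parameters are used here, not the component values of the diode-pair circuit in the paper):

        import numpy as np
        from scipy.integrate import solve_ivp

        alpha, beta = 9.0, 14.286
        m0, m1 = -8.0 / 7.0, -5.0 / 7.0   # slopes of the piecewise-linear Chua diode

        def chua(t, s):
            x, y, z = s
            h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
            return [alpha * (y - x - h), x - y + z, -beta * y]

        sol = solve_ivp(chua, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=0.01)
        print(sol.y[:, -1])   # plot sol.y[0] against sol.y[2] to see the double-scroll attractor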

  18. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Directory of Open Access Journals (Sweden)

    Corrado Lodovico Galli

    Full Text Available Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs and their metabolites towards the ligand binding domain (LBD of the androgen receptor (AR in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites were evaluated. Our approach was demonstrated useful as the first step of chemical safety evaluation since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning AR LBD was computed for the tested species (rat being the least sensitive of the three. This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy.

  19. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Science.gov (United States)

    Galli, Corrado Lodovico; Sensi, Cristina; Fumagalli, Amos; Parravicini, Chiara; Marinovich, Marina; Eberini, Ivano

    2014-01-01

    Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated useful as the first step of chemical safety evaluation since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy.

  20. Fast reactor safety and computational thermo-fluid dynamics approaches

    International Nuclear Information System (INIS)

    Ninokata, Hisashi; Shimizu, Takeshi

    1993-01-01

    This article provides a brief description of the safety principle on which liquid metal cooled fast breeder reactors (LMFBRs) are based and the roles of computations in the safety practices. A number of thermohydraulics models have been developed to date that successfully describe several of the important types of fluid and material motion encountered in the analysis of postulated accidents in LMFBRs. Most of these models use a mixture of implicit and explicit numerical solution techniques in solving a set of conservation equations formulated in Eulerian coordinates, with special techniques included for specific situations. Typical computational thermo-fluid dynamics approaches are discussed, in particular in areas of analysis of the physical phenomena relevant to fuel subassembly thermohydraulics design and those involving the motion of molten materials in the core over a large scale. (orig.)

  1. Windows 10 simplified

    CERN Document Server

    McFedries, Paul

    2015-01-01

    Learn Windows 10 quickly and painlessly with this beginner's guide Windows 10 Simplified is your absolute beginner's guide to the ins and outs of Windows. Fully updated to cover Windows 10, this highly visual guide covers all the new features in addition to the basics, giving you a one-stop resource for complete Windows 10 mastery. Every page features step-by-step screen shots and plain-English instructions that walk you through everything you need to know, no matter how new you are to Windows. You'll master the basics as you learn how to navigate the user interface, work with files, create

  2. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    Full Text Available The technologies related to green energy, smart cities and similar areas, which have been developed dynamically in recent years, frequently face problems of a computational nature rather than a technological one. An example is the ability to accurately predict the weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term "large-scale" refers to an entire city area, containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy; it also allows the optimization to be terminated once the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
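
    A generic sketch of an entropy-style progress indicator (the exact definition used in the article is not reproduced here): as the agents converge on a lighting configuration, the Shannon entropy of the settings they propose for a luminaire decreases, and the optimization can be stopped once it falls below a threshold.

        import math
        from collections import Counter

        def shannon_entropy(values):
            counts = Counter(values)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        # Dimming levels proposed for one luminaire by 8 agents, early vs late in the search.
        early = [30, 50, 70, 40, 60, 80, 50, 90]
        late = [60, 60, 60, 60, 70, 60, 60, 60]
        print(shannon_entropy(early), shannon_entropy(late))  # entropy drops as agents agree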

  3. Structured Assessment Approach: a microcomputer-based insider-vulnerability analysis tool

    International Nuclear Information System (INIS)

    Patenaude, C.J.; Sicherman, A.; Sacks, I.J.

    1986-01-01

    The Structured Assessment Approach (SAA) was developed to help assess the vulnerability of safeguards systems to insiders in a staged manner. For physical security systems, the SAA identifies possible diversion paths which are not safeguarded under various facility operating conditions and insiders who could defeat the system via direct access, collusion or indirect tampering. For material control and accounting systems, the SAA identifies those who could block the detection of a material loss or diversion via data falsification or equipment tampering. The SAA, originally designed to run on a mainframe computer, has been converted to run on a personal computer. Many features have been added to simplify and facilitate its use for conducting vulnerability analysis. For example, the SAA input, which is a text-like data file, is easily readable and can provide documentation of facility safeguards and assumptions used for the analysis.

  4. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    Science.gov (United States)

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001). Against invasively measured FFR, the machine-learning algorithm achieved a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%; the correlation was 0.729 (P < 0.001), comparable to that of the CFD-based assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
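
    The surrogate idea can be illustrated with a short sketch (the geometric features, the stand-in "physics" function and the regressor below are placeholders, not those of the paper): a regressor is trained on synthetically generated anatomies whose labels come from an expensive physics-based model, and is then evaluated on held-out anatomies.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 5000
        stenosis_degree = rng.uniform(0.0, 0.9, n)     # fractional area reduction
        lesion_length_mm = rng.uniform(2.0, 30.0, n)
        vessel_radius_mm = rng.uniform(1.0, 2.5, n)
        X = np.column_stack([stenosis_degree, lesion_length_mm, vessel_radius_mm])

        def physics_like_ffr(X):
            # Cheap stand-in for the expensive model: pressure loss grows with stenosis and length.
            s, L, r = X.T
            return np.clip(1.0 - 0.6 * s ** 2 - 0.01 * L * s / r, 0.3, 1.0)

        y = physics_like_ffr(X)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor().fit(X_tr, y_tr)
        print("surrogate R^2 on held-out anatomies:", round(model.score(X_te, y_te), 4))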

  5. Flame retardation of cellulose-rich fabrics via a simplified layer-by-layer assembly.

    Science.gov (United States)

    Yang, Jun-Chi; Liao, Wang; Deng, Shi-Bi; Cao, Zhi-Jie; Wang, Yu-Zhong

    2016-10-20

    Due to the high cellulose content of cotton (88.0-96.5%), the flame retardation of cotton fabrics can be achieved via an approach for the flame retardation of cellulose. In this work, a facile water-based flame retardant coating was deposited on cotton fabrics by a 'simplified' layer-by-layer (LbL) assembly. The novel coating solution was based on a mild reaction between ammonium polyphosphate (APP) and branched polyethyleneimine (BPEI), and the reaction mechanism was studied. TGA results showed that the char residues of coated fabrics were remarkably increased. The fabric with only 5 wt% coating showed self-extinguishing behaviour in the horizontal flame test, and the peak heat release rate (pHRR) in the cone calorimeter test decreased by 51%. Furthermore, this coating overcame a general drawback of flame-retardant LbL assemblies, namely that they are easily washed away. Therefore, the simplified LbL method provides a fast, low-cost, eco-friendly and wash-durable flame-retardant finish for cellulose-rich cotton fabrics.

  6. Rhinoplasty: a simplified, three-stitch, open tip suture technique. Part I: primary rhinoplasty.

    Science.gov (United States)

    Daniel, R K

    1999-04-01

    Tip suture techniques offer a reliable and dramatic method of tip modification without needing to interrupt the alar rim strip or add tip grafts. The present simplified three-stitch technique consists of the following: (1) a strut suture to fix the columella strut between the crura, (2) bilateral domal creation sutures to create tip definition, and (3) a domal equalization suture to narrow and align the domes. If required, columella septal sutures can be added; either a dorsal rotational suture or a transfixion projection suture can be used. This simplified method represents a refinement based on more than 13 years of experience with tip suture techniques. It does not require a complex operative sequence or specialized sutures. Primary indications are moderate tip deformities of inadequate definition and excessive width and certain specific tip deformities, including the parenthesis tip and nostril/tip disproportion. The primary contraindications are for patients with minor tip deformities that are best done through a closed approach and those with severe tip deformities requiring an open structure graft. The technique is simple, efficacious, and easily learned.

  7. Analysis of lower head failure with simplified models and a finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Koundy, V. [CEA-IPSN-DPEA-SEAC, Service d' Etudes des Accidents, Fontenay-aux-Roses (France); Nicolas, L. [CEA-DEN-DM2S-SEMT, Service d' Etudes Mecaniques et Thermiques, Gif-sur-Yvette (France); Combescure, A. [INSA-Lyon, Lab. Mecanique des Solides, Villeurbanne (France)

    2001-07-01

    The objective of the OLHF (OECD lower head failure) experiments is to characterize the timing, mode and size of lower head failure under high temperature loading and reactor coolant system pressure due to a postulated core melt scenario. Four tests have been performed at Sandia National Laboratories (USA), in the frame of an OECD project. The experimental results have been used to develop and validate predictive analysis models. Within the framework of this project, several finite element calculations were performed. In parallel, two simplified semi-analytical methods were developed in order to get a better understanding of the role of various parameters on the creep phenomenon, e.g. the behaviour of the lower head material and its geometrical characteristics on the timing, mode and location of failure. Three-dimensional modelling of crack opening and crack propagation has also been carried out using the finite element code Castem 2000. The aim of this paper is to present the two simplified semi-analytical approaches and to report the status of the 3D crack propagation calculations. (authors)

  8. Depletion velocities for atmospheric pollutants oriented To improve the simplified regional dispersion modelling

    International Nuclear Information System (INIS)

    Sanchez Gacita, Madeleine; Turtos Carbonell, Leonor; Rivero Oliva, Jose de Jesus

    2005-01-01

    The present work aims to improve externalities assessment using simplified methodologies by obtaining depletion velocities for the primary pollutants SO2, NOx and TSP (Total Suspended Particles) and for sulfate and nitrate aerosols, the secondary pollutants created from the primary ones. The main goal was to estimate these values for different cases in order to obtain an ensemble of values for the geographic area, from which the most representative could be selected for use in future studies that rely on a simplified methodology for regional dispersion assessment, given the requirements of data, qualified manpower and time of a detailed approach. The results were obtained using detailed studies of regional dispersion that were conducted for six power facilities, three from Cuba (at the localities of Mariel, Santa Cruz and Tallapiedra) and three from Mexico (at the localities of Tuxpan, Tula and Manzanillo). The depletion velocity for SO2 was similar for all cases. Results obtained for Tallapiedra, Santa Cruz, Mariel and Manzanillo were similar. For Tula and Tuxpan a high uncertainty was found.

  9. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    Science.gov (United States)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Ret = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models in predicting both the mean concentration and the plume structure. Since algebraic flux models do not increase the computational effort substantially, the results indicate that the use of a tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.
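
    For reference, the two families of scalar flux closures compared above can be written (in their standard textbook forms, which may differ in detail from the specific models used in the study) as the simple eddy-diffusivity hypothesis and the generalized gradient diffusion (Daly-Harlow) algebraic closure:

        \[
        \overline{u_i' c'} = -\frac{\nu_t}{Sc_t}\,\frac{\partial \bar C}{\partial x_i}
        \qquad\text{versus}\qquad
        \overline{u_i' c'} = -C_\theta\,\frac{k}{\varepsilon}\,\overline{u_i' u_j'}\,\frac{\partial \bar C}{\partial x_j},
        \]

    where the second form lets the anisotropic Reynolds stresses, available from the Reynolds stress transport model, shape the turbulent scalar fluxes.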

  10. Development of a 2-D Simplified P3 FEM Solver for Arbitrary Geometry Applications

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Eun Hyun; Joo, Han Gyu [Seoul National University, Seoul (Korea, Republic of)

    2010-10-15

    In the calculation of power distributions and multiplication factors in a nuclear reactor, the Finite Difference Method (FDM) and nodal methods are primarily used. These methods are, however, limited to particular geometries and lack general applicability to arbitrary geometries. The Finite Element Method (FEM) can be employed for arbitrary geometries, and there are numerous FEM codes that solve the neutron diffusion equation or the Sn transport equation. The diffusion-based FEM codes have the drawback of inferior accuracy, while the Sn-based ones require considerable computing time. This work seeks a compromise between the two by employing the simplified P3 (SP3) method for arbitrary geometry applications. Sufficient accuracy with affordable computing time and resources would be achieved with this choice of approximate transport solution when compared to full FEM-based Pn or Sn solutions. For now, only a 2-D solver is considered
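
    For context, a commonly quoted one-group form of the SP3 equations with isotropic scattering and isotropic source Q is given below; it couples the scalar flux φ0 and the second angular moment φ2 through two diffusion-like equations, which is what makes a diffusion-style FEM discretization directly reusable. The multigroup form actually implemented in a solver may differ in detail.

```latex
% One-group SP3 equations, isotropic scattering (illustrative form):
-\nabla\cdot\frac{1}{3\Sigma_t}\nabla\!\left(\phi_0 + 2\phi_2\right) + \Sigma_a\,\phi_0 = Q
-\nabla\cdot\frac{9}{35\Sigma_t}\nabla\phi_2 + \Sigma_t\,\phi_2 = \frac{2}{5}\left(\Sigma_a\,\phi_0 - Q\right)
```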

  11. Economic impact of simplified de Gramont regimen in first-line therapy in metastatic colorectal cancer.

    Science.gov (United States)

    Limat, Samuel; Bracco-Nolin, Claire-Hélène; Legat-Fagnoni, Christine; Chaigneau, Loic; Stein, Ulrich; Huchet, Bernard; Pivot, Xavier; Woronoff-Lemsi, Marie-Christine

    2006-06-01

    The cost of chemotherapy has dramatically increased in advanced colorectal cancer patients, and the schedule of fluorouracil administration appears to be a determining factor. This retrospective study compared direct medical costs related to two different de Gramont schedules (standard vs. simplified) given in first-line chemotherapy with oxaliplatin or irinotecan. This cost-minimization analysis was performed from the French Health System perspective. Consecutive unselected patients treated in first-line therapy with the LV5FU2 de Gramont regimen plus oxaliplatin (Folfox regimen) or irinotecan (Folfiri regimen) were enrolled. Hospital and outpatient resources related to chemotherapy and adverse events were collected from 1999 to 2004 in 87 patients. Overall cost was reduced with the simplified regimen. The major factor explaining the cost saving was the reduced need for hospital admissions for chemotherapy. The amount of cost saving depended on the method used for assessing hospital stays. In patients treated with the Folfox regimen, the per diem and DRG methods found cost savings of Euro 1,997 and Euro 5,982, respectively; in patients treated with the Folfiri regimen, cost savings of Euro 4,773 and Euro 7,274 were observed, respectively. In addition, travel costs were also reduced by the simplified regimen. The robustness of our results was shown by one-way sensitivity analyses. These findings demonstrate that the simplified de Gramont schedule reduces the costs of current first-line chemotherapy in advanced colorectal cancer. Interestingly, our study showed several differences in costs between the two costing approaches for hospital stays: average per diem and DRG costs. These results suggest that the standard regimen may be considered a profitable strategy from the hospital perspective. The opposition between the health system perspective and the hospital perspective is worth examining and may affect daily practice. In conclusion, our study shows that the simplified de Gramont schedule in combination with

  12. Simplified Swarm Optimization-Based Function Module Detection in Protein–Protein Interaction Networks

    Directory of Open Access Journals (Sweden)

    Xianghan Zheng

    2017-04-01

    Proteomics research has become one of the most important topics in the fields of life science and natural science. At present, research on protein–protein interaction networks (PPIN) mainly focuses on detecting protein complexes or function modules. However, existing approaches are either ineffective or incomplete. In this paper, we investigate detection mechanisms for functional modules in PPIN, including open databases, existing detection algorithms, and recent solutions. After that, we describe the proposed approach based on the simplified swarm optimization (SSO) algorithm and knowledge from Gene Ontology (GO). The proposed solution implements the SSO algorithm for clustering proteins with similar function, and imports biological gene ontology knowledge to further identify function complexes and improve detection accuracy. Furthermore, we use four different species datasets for the experiments: fruitfly, mouse, scere, and human. The testing and analysis results show that the proposed solution is feasible and efficient, and achieves higher prediction accuracy than existing approaches.
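
    To illustrate the search mechanism, a minimal sketch of a Yeh-style simplified swarm optimization update is given below. The cluster encoding, the fitness measure and the GO-based refinement used in the paper are not reproduced; the fitness function, the probabilities Cg, Cp, Cw and the assumption that variables live in [0, 1] are all illustrative.

```python
import random

# Minimal sketch of the simplified swarm optimization (SSO) update rule.
# Cg < Cp < Cw are cumulative probabilities; 'fitness' is a placeholder
# standing in for a cluster-quality measure (higher is better).
def sso_step(particles, pbest, gbest, fitness, Cg=0.3, Cp=0.6, Cw=0.9,
             random_value=random.random):
    for i, x in enumerate(particles):
        for j in range(len(x)):
            rho = random_value()
            if rho < Cg:
                x[j] = gbest[j]            # move towards the global best
            elif rho < Cp:
                x[j] = pbest[i][j]         # move towards the particle's own best
            elif rho < Cw:
                pass                       # keep the current value
            else:
                x[j] = random_value()      # random restart (variables assumed in [0, 1])
        # update personal and global bests (maximization)
        if fitness(x) > fitness(pbest[i]):
            pbest[i] = list(x)
        if fitness(x) > fitness(gbest):
            gbest[:] = x
    return particles, pbest, gbest
```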

  13. Accurate Simulation of Parametrically Excited Micromirrors via Direct Computation of the Electrostatic Stiffness

    Directory of Open Access Journals (Sweden)

    Attilio Frangi

    2017-04-01

    Electrostatically actuated torsional micromirrors are key elements in Micro-Opto-Electro-Mechanical-Systems. When forced by means of in-plane comb-fingers, the dynamics of the main torsional response is known to be strongly non-linear and governed by parametric resonance. Here, in order to also trace unstable branches of the mirror response, we implement a simplified continuation method with arc-length control and propose an innovative technique based on Finite Elements and the concept of the material derivative in order to compute the electrostatic stiffness, i.e., the derivative of the torque with respect to the torsional angle, as required by the continuation approach.

  14. Accurate Simulation of Parametrically Excited Micromirrors via Direct Computation of the Electrostatic Stiffness.

    Science.gov (United States)

    Frangi, Attilio; Guerrieri, Andrea; Boni, Nicoló

    2017-04-06

    Electrostatically actuated torsional micromirrors are key elements in Micro-Opto-Electro-Mechanical-Systems. When forced by means of in-plane comb-fingers, the dynamics of the main torsional response is known to be strongly non-linear and governed by parametric resonance. Here, in order to also trace unstable branches of the mirror response, we implement a simplified continuation method with arc-length control and propose an innovative technique based on Finite Elements and the concepts of material derivative in order to compute the electrostatic stiffness; i.e., the derivative of the torque with respect to the torsional angle, as required by the continuation approach.
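
    A worked form of the quantity being computed helps fix ideas: for a capacitive actuator held at voltage V with a lumped capacitance C(θ), the electrostatic torque and the electrostatic stiffness (the derivative traced by the continuation method) are, under that standard lumped assumption,

```latex
T_{\mathrm{el}}(\theta) = \tfrac{1}{2}\,V^{2}\,\frac{dC}{d\theta},
\qquad
k_{\mathrm{el}}(\theta) = \frac{\partial T_{\mathrm{el}}}{\partial \theta}
                        = \tfrac{1}{2}\,V^{2}\,\frac{d^{2}C}{d\theta^{2}}
```

    The point of the technique summarized above is to evaluate this derivative directly from the finite element field solution, via material derivatives, rather than by numerically differentiating a fitted C(θ) curve.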

  15. Practical modeling approaches for geological storage of carbon dioxide.

    Science.gov (United States)

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to modelling geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  16. How people learn while playing serious games: A computational modelling approach

    NARCIS (Netherlands)

    Westera, Wim

    2017-01-01

    This paper proposes a computational modelling approach for investigating the interplay of learning and playing in serious games. A formal model is introduced that allows for studying the details of playing a serious game under diverse conditions. The dynamics of player action and motivation is based

  17. Significance, progress and prospects for research in simplified cultivation technologies for rice in China.

    Science.gov (United States)

    Huang, M; Ibrahim, Md; Xia, B; Zou, Y

    2011-08-01

    Simplified cultivation technologies for rice have become increasingly attractive in recent years in China because of their social, economic and environmental benefits. To date, several simplified cultivation technologies, such as conventional tillage and seedling throwing (CTST), conventional tillage and direct seeding (CTDS), no-tillage and seedling throwing (NTST), no-tillage and direct seeding (NTDS) and no-tillage and transplanting (NTTP), have been developed in China. Most studies have shown that rice grown under each of these simplified cultivation technologies can produce a grain yield equal to or higher than traditional cultivation (conventional tillage and transplanting). Studies that have described the influences of agronomic practices on yield formation of rice under simplified cultivation have demonstrated that optimizing agronomic practices would increase the efficiencies of simplified cultivation systems. Further research is needed to optimize the management strategies for CTST, CTDS and NTST rice, which have developed quickly in recent years, to strengthen basic research on those simplified cultivation technologies that are rarely used at present (such as NTTP and NTDS), to select and breed cultivars suitable for simplified cultivation, and to compare the practicability and effectiveness of different simplified cultivation technologies in different rice production regions.

  18. DIRProt: a computational approach for discriminating insecticide resistant proteins from non-resistant proteins.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Banchariya, Anjali; Rao, Atmakuri Ramakrishna

    2017-03-24

    Insecticide resistance is a major challenge for insect pest control programs in the fields of crop protection and human and animal health. Resistance to different insecticides is conferred by proteins encoded by certain classes of insect genes. To date, no computational tool has been available to distinguish insecticide resistant proteins from non-resistant proteins. Thus, the development of such a computational tool will be helpful in predicting insecticide resistant proteins, which can be targeted for developing appropriate insecticides. Five different feature sets, viz. amino acid composition (AAC), di-peptide composition (DPC), pseudo amino acid composition (PAAC), composition-transition-distribution (CTD) and auto-correlation function (ACF), were used to map the protein sequences into numeric feature vectors. The encoded numeric vectors were then used as input to a support vector machine (SVM) for classification of insecticide resistant and non-resistant proteins. Higher accuracies were obtained under the RBF kernel than under other kernels. Further, accuracies were observed to be higher for the DPC feature set than for the others. The proposed approach achieved an overall accuracy of >90% in discriminating resistant from non-resistant proteins. Further, the two classes of resistant proteins, i.e., detoxification-based and target-based, were discriminated from non-resistant proteins with >95% accuracy. Besides, >95% accuracy was also observed for discrimination between proteins involved in detoxification- and target-based resistance mechanisms. The proposed approach not only outperformed the Blastp, PSI-Blast and Delta-Blast algorithms, but also achieved >92% accuracy when assessed using an independent dataset of 75 insecticide resistant proteins. This paper presents the first computational approach for discriminating insecticide resistant proteins from non-resistant proteins. Based on the proposed approach, an online prediction server DIRProt has
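
    As a sketch of the general pipeline described above (sequence → composition features → RBF-kernel SVM), the following minimal example computes amino acid and dipeptide composition and fits an SVM with scikit-learn. The training data, parameter tuning, the other feature sets (PAAC, CTD, ACF) and the DIRProt server itself are not reproduced; all parameter values are illustrative.

```python
from itertools import product
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]

def aac(seq):
    """Amino acid composition: 20 relative frequencies."""
    n = max(len(seq), 1)
    return [seq.count(a) / n for a in AMINO_ACIDS]

def dpc(seq):
    """Dipeptide composition: 400 relative frequencies of overlapping pairs."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    n = max(len(pairs), 1)
    return [pairs.count(d) / n for d in DIPEPTIDES]

def train_classifier(sequences, labels):
    """Fit an RBF-kernel SVM on DPC features (labels: 1 = resistant, 0 = non-resistant)."""
    X = [dpc(s) for s in sequences]
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # illustrative hyperparameters
    clf.fit(X, labels)
    return clf
```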

  19. Simplified response monitoring criteria for multiple myeloma in patients undergoing therapy with novel agents using computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Schabel, Christoph; Horger, Marius; Kum, Sara [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany); Weisel, Katja [Department of Internal Medicine II – Hematology & Oncology, Eberhard-Karls-University Tuebingen, Otfried-Müller-Str. 5, 72076 Tuebingen (Germany); Fritz, Jan [Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, 600 N Wolfe St., Baltimore, MD 21287 (United States); Ioanoviciu, Sorin D. [Department of Internal Medicine, Clinical Municipal Hospital Timisoara, Gheorghe Dima Str. 5, 300079 Timisoara (Romania); Bier, Georg, E-mail: georg.bier@med.uni-tuebingen.de [Department of Neuroradiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany)

    2016-12-15

    Highlights: • A simplified method for response monitoring of multiple myeloma is proposed. • Medullary bone lesions of all limbs were included and analysed. • Diameters of ≥2 medullary bone lesions are sufficient for therapy monitoring. - Abstract: Introduction: Multiple myeloma is a malignant hematological disorder of mature B-cell lymphocytes originating in the bone marrow. While therapy monitoring is still mainly based on laboratory biomarkers, the additional use of imaging has been advocated owing to inaccuracies of serological biomarkers or in a-secretory myelomas. Non-enhanced CT and MRI have similar sensitivities for lesions in yellow marrow-rich bone marrow cavities, with a favourable risk and cost-effectiveness profile for CT. Nevertheless, these methods are still limited by the frequently high numbers of medullary lesions and the time required for proper evaluation. Objective: To establish simplified response criteria by correlating size and CT attenuation changes of medullary multiple myeloma lesions in the appendicular skeleton with the course of lytic bone lesions in the entire skeleton, and furthermore to evaluate these criteria with respect to established hematological myeloma-specific parameters for the prediction of treatment response to bortezomib or lenalidomide. Materials and methods: Non-enhanced reduced-dose whole-body CT examinations of 78 consecutive patients (43 male, 35 female, mean age 63.69 ± 9.2 years) with stage III multiple myeloma were retrospectively re-evaluated. On a per patient basis, size and mean CT attenuation of 2–4 representative lesions in the limbs were measured at baseline and at follow-up after a mean of 8 months. Results were compared with the course of lytic bone lesions as well as with that of specific hematological biomarkers. Myeloma response was assessed according to the International Myeloma Working Group (IMWG) uniform response criteria. Testing for correlation between response of medullary lesions (Resp

  20. SMOG@ctbp: simplified deployment of structure-based models in GROMACS.

    Science.gov (United States)

    Noel, Jeffrey K; Whitford, Paul C; Sanbonmatsu, Karissa Y; Onuchic, José N

    2010-07-01

    Molecular dynamics simulations with coarse-grained and/or simplified Hamiltonians are an effective means of capturing the functionally important long-time and large-length scale motions of proteins and RNAs. Structure-based Hamiltonians, simplified models developed from the energy landscape theory of protein folding, have become a standard tool for investigating biomolecular dynamics. SMOG@ctbp is an effort to simplify the use of structure-based models. The purpose of the web server is twofold. First, the web tool simplifies the process of implementing a well-characterized structure-based model on a state-of-the-art, open source, molecular dynamics package, GROMACS. Second, the tutorial-like format helps speed the learning curve of those unfamiliar with molecular dynamics. A web tool user is able to upload any multi-chain biomolecular system consisting of standard RNA, DNA and amino acids in PDB format and receive as output all files necessary to implement the model in GROMACS. Both C(alpha) and all-atom versions of the model are available. SMOG@ctbp resides at http://smog.ucsd.edu.

  1. Integrability and solvability of the simplified two-qubit Rabi model

    International Nuclear Information System (INIS)

    Peng Jie; Ren Zhongzhou; Guo Guangjie; Ju Guoxing

    2012-01-01

    The simplified two-qubit Rabi model is proposed and its analytical solution is presented. There are no level crossings in the spectral graph of the model, which indicates that it is not integrable. The criterion of integrability for the Rabi model proposed by Braak (2011 Phys. Rev. Lett. 107 100401) is also applied to the simplified two-qubit Rabi model and the same conclusion, consistent with what the spectral graph shows, can be drawn, which indicates that the criterion remains valid when applied to the two-qubit case. The simplified two-qubit Rabi model is thus another example, besides the generalized Rabi model, of a non-integrable but exactly solvable system. (paper)
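
    For orientation, the two-qubit generalization of the quantum Rabi model (a single bosonic mode coupled to two two-level systems) is commonly written as below, with ħ = 1; the simplification introduced in the paper places additional restrictions on these parameters that are not reproduced here.

```latex
H = \omega\, a^{\dagger} a
    + \sum_{j=1}^{2} \left[ \Delta_{j}\,\sigma_{z}^{(j)}
    + g_{j}\,\sigma_{x}^{(j)}\!\left(a^{\dagger} + a\right) \right]
```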

  2. Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler

    International Nuclear Information System (INIS)

    Zhou Hao; Zheng Ligang; Cen Kefa

    2010-01-01

    The current work presented a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work included NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function aiming at estimating the NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air velocities (PA) and six levels of secondary air velocities (SA) were regulated by using particle swarm optimization (PSO) so as to achieve low-NOx combustion. To reduce the time demand, a more flexible stopping condition was used to improve the computational efficiency without loss of quality in the optimization results. The results showed that the proposed approach provided an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which was much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in earlier work. The main advantage of PSO was that the computational cost, typically less than 25 s on a PC, is much less than that required for ACO. This means the proposed approach would be more applicable to online and real-time applications for NOx emissions minimization in actual power plant boilers.
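
    A minimal sketch of the surrogate-plus-optimizer idea follows: a support vector regression model maps operating parameters to NOx emissions, and a small hand-rolled particle swarm searches the adjustable settings for a minimum of the surrogate prediction. The bounds, PSO constants and the way the trained model is wrapped into a predict function are illustrative only.

```python
import numpy as np
from sklearn.svm import SVR

def build_surrogate(X_ops, y_nox):
    """Fit an SVR surrogate: operating parameters -> NOx emissions (ppm)."""
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # illustrative hyperparameters
    model.fit(X_ops, y_nox)
    return model

def pso_minimize(predict, n_dim, lo, hi, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO minimizing predict(x) over the box [lo, hi]^n_dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, n_dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([predict(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([predict(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Usage sketch (hypothetical wiring): predict the NOx for candidate air settings
# appended to fixed operating parameters, then search that subspace.
# predict = lambda u: surrogate.predict([np.concatenate([fixed_ops, u])])[0]
# best_settings, best_nox = pso_minimize(predict, n_dim=10, lo=0.0, hi=1.0)
```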

  3. Communicative Approaches To Teaching English in Namibia: The Issue of Transfer of Western Approaches To Developing Countries.

    Science.gov (United States)

    O'Sullivan, Margo C.

    2001-01-01

    Examines Namibia's communicative approach to teaching English speaking and listening skills by exploring the extent to which this approach is appropriate to the Namibian context. Raises the issue of transfer, specifically that communicative approaches are transferable to the Namibian context if they are simplified and adequate prescriptive…

  4. A model predictive speed tracking control approach for autonomous ground vehicles

    Science.gov (United States)

    Zhu, Min; Chen, Huiyan; Xiong, Guangming

    2017-03-01

    This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine the drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of the engine brake torque under various driving conditions and avoid high-frequency oscillations automatically. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
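
    In generic terms, the quadratic program solved at each sampling instant in such a speed-tracking MPC has the structure sketched below; the actual cost weights, horizon, constraints and the paper's simplified QP solver are not reproduced.

```latex
\min_{u_0,\dots,u_{N-1}}\;
\sum_{k=0}^{N-1}\left[\, q\,\bigl(v_{k} - v_{k}^{\mathrm{ref}}\bigr)^{2}
                      + r\,\Delta u_{k}^{2} \right]
\quad \text{s.t.} \quad
v_{k+1} = f(v_{k}, u_{k}),\qquad
u_{\min} \le u_{k} \le u_{\max}
```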

  5. Implementation of a Simplified State Estimator for Wind Turbine Monitoring on an Embedded System

    DEFF Research Database (Denmark)

    Rasmussen, Theis Bo; Yang, Guangya; Nielsen, Arne Hejde

    2017-01-01

    The transition towards a cyber-physical energy system (CPES) entails an increased dependency on valid data. Simultaneously, an increasing implementation of renewable generation leads to possible control actions at individual distributed energy resources (DERs). A state estimation covering the whole system, including individual DERs, is time consuming and numerically challenging. This paper presents the approach and results of implementing a simplified state estimator onto an embedded system for improving DER monitoring. The implemented state estimator is based on numerically robust orthogonal ...
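
    The kind of numerically robust, orthogonal-factorization-based estimator referred to above can be sketched, for a linearized measurement model, as a weighted least-squares step solved with a QR decomposition instead of the normal equations. The measurement model, weights and any embedded-platform specifics are placeholders.

```python
import numpy as np

def wls_state_estimate(H, z, weights):
    """
    One weighted least-squares step of a (linearized) state estimator,
    solved with an orthogonal QR factorization instead of forming the
    gain matrix H^T W H (better numerical conditioning).
    H: (m, n) measurement Jacobian, z: (m,) measurements,
    weights: (m,) measurement weights (1 / sigma^2), all illustrative.
    """
    w_sqrt = np.sqrt(weights)
    Hw = H * w_sqrt[:, None]          # row-scale by the square root of the weights
    zw = z * w_sqrt
    Q, R = np.linalg.qr(Hw)           # orthogonal factorization
    x_hat = np.linalg.solve(R, Q.T @ zw)
    return x_hat
```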

  6. Extension of a simplified computer program for analysis of solid-propellant rocket motors

    Science.gov (United States)

    Sforzini, R. H.

    1973-01-01

    A research project to develop a computer program for the preliminary design and performance analysis of solid propellant rocket engines is discussed. The following capabilities are included as computer program options: (1) treatment of wagon wheel cross sectional propellant configurations alone or in combination with circular perforated grains, (2) calculation of ignition transients with the igniter treated as a small rocket engine, (3) representation of spherical circular perforated grain ends as an alternative to the conical end surface approximation used in the original program, and (4) graphical presentation of program results using a digital plotter.

  7. Simplifying massive planar subdivisions

    DEFF Research Database (Denmark)

    Arge, Lars; Truelsen, Jakob; Yang, Jungwoo

    2014-01-01

    We present the first I/O- and practically-efficient algorithm for simplifying a planar subdivision, such that no point is moved more than a given distance εxy and such that neighbor relations between faces (homotopy) are preserved. Under some practically realistic assumptions, our algorithm uses ....... For example, for the contour map simplification problem it is significantly faster than the previous algorithm, while obtaining approximately the same simplification factor. Read More: http://epubs.siam.org/doi/abs/10.1137/1.9781611973198.3...

  8. UTILITY OF SIMPLIFIED LABANOTATION

    Directory of Open Access Journals (Sweden)

    Maria del Pilar Naranjo

    2016-02-01

    After using simplified Labanotation as a didactic tool for some years, the author can conclude that it accomplishes at least three main functions: efficiency of rehearsing time, social recognition and broadening of the choreographic consciousness of the dancer. The doubts of the dancing community about the issue of ‘to write or not to write’ are highly determined by the contexts and their own choreographic evolution, but the utility of Labanotation, as a tool for knowledge, is undeniable.

  9. Simplified computational methods for elastic and elastic-plastic fracture problems

    Science.gov (United States)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  10. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    Science.gov (United States)

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis.The system can be accessed either through a

  11. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Directory of Open Access Journals (Sweden)

    Abouelhoda Mohamed

    2012-05-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-)workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and

  12. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Science.gov (United States)

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system

  13. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting this multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to realize the proposed concept. The simulation results show that the software framework can increase the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulation of multi-scale structural analysis.
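
    The cluster-to-cluster coordination described above can be pictured with a much-simplified dispatch sketch: results from the simplified global model are passed as boundary data to several detailed component-model jobs that run concurrently. The solver functions, the data shapes and the use of a local process pool in place of the grid layer are purely illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_global(model):
    """Placeholder for the simplified global structural model."""
    return {"boundary_forces": [1.0, 2.0, 3.0]}    # illustrative output

def solve_component(args):
    """Placeholder for one detailed component model driven by global results."""
    component, boundary = args
    return component, sum(boundary)                 # illustrative result

def run_multiscale(global_model, components):
    global_result = solve_global(global_model)
    jobs = [(c, global_result["boundary_forces"]) for c in components]
    with ProcessPoolExecutor() as pool:             # stands in for the grid/cluster layer
        return dict(pool.map(solve_component, jobs))
```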

  14. Simplified physical approach of the control by eddy current

    International Nuclear Information System (INIS)

    Zergoug, M.

    1986-01-01

    The aim of this study is to calculate the variation of the resistance of a coil surrounding a non-ferromagnetic cylindrical core in the presence of a flaw. The flaw is a longitudinal notch of infinite length and rectangular section. The impedance variation is to be calculated from the geometric repartition of the flow lines in the core; this repartition is a function of the flaw, and the impedance variation is that produced by the presence, in the non-ferromagnetic conducting core, of an emerging axial flaw. It is therefore possible to obtain in real time, on a computer screen, the image of a long standard flaw which may produce the observed impedance variation

  15. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply these simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular prediction of cancer and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
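
    The flavour of the "few informative genes, simple decision rules" idea can be sketched as follows: rank genes by a simple class-separation score and build a one-gene threshold rule. The rough-set-based screening and the gene-pair models of the study are not reproduced; the score and rule below are generic illustrations.

```python
import numpy as np

def separation_score(expr, labels):
    """Score each gene by |mean difference| / (std0 + std1) between the two classes."""
    expr, labels = np.asarray(expr), np.asarray(labels)
    a, b = expr[labels == 0], expr[labels == 1]
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / (a.std(axis=0) + b.std(axis=0) + 1e-9)

def one_gene_rule(expr, labels, gene):
    """Build a simple threshold rule on a single gene (midpoint of the class means)."""
    expr, labels = np.asarray(expr), np.asarray(labels)
    m0 = expr[labels == 0, gene].mean()
    m1 = expr[labels == 1, gene].mean()
    threshold = 0.5 * (m0 + m1)
    positive_above = m1 > m0
    def predict(sample):
        return int((sample[gene] > threshold) == positive_above)
    return predict

# Usage sketch: pick the top-scoring gene and classify with its rule.
# scores = separation_score(X_train, y_train)
# rule = one_gene_rule(X_train, y_train, int(np.argmax(scores)))
```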

  16. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    Science.gov (United States)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
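
    A toy simulation of the master-worker reinforcement scheme described above might look as follows; the payoffs, learning rate and audit policy are illustrative stand-ins and not the parameters of the paper's model.

```python
import random

def simulate(n_workers=5, rounds=2000, audit_p=0.3, seed=1):
    """Toy master-worker crowd-computing game with reinforcement learning.
    Each worker learns a propensity to compute honestly; all payoff values
    are illustrative placeholders."""
    rng = random.Random(seed)
    p_truthful = [0.5] * n_workers        # propensity of each worker to compute honestly
    lr = 0.05
    for _ in range(rounds):
        audited = rng.random() < audit_p
        for i in range(n_workers):
            honest = rng.random() < p_truthful[i]
            # illustrative payoffs: honesty costs effort, cheating risks a fine if audited
            if honest:
                reward = 1.0 - 0.4                   # payment minus computing cost
            else:
                reward = 1.0 if not audited else -2.0
            # reinforce the action actually taken, in proportion to its reward
            target = 1.0 if honest else 0.0
            p_truthful[i] += lr * reward * (target - p_truthful[i])
            p_truthful[i] = min(1.0, max(0.0, p_truthful[i]))
    return p_truthful
```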

  17. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  18. Adaptive tree multigrids and simplified spherical harmonics approximation in deterministic neutral and charged particle transport

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2007-05-01

    A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, the testing and development of the MultiTrans code has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been indicated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications. (orig.)

  19. Computer-Aided Approaches for Targeting HIVgp41

    Directory of Open Access Journals (Sweden)

    William J. Allen

    2012-08-01

    Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV) delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41), which advances through four distinct conformational states: (i) native, (ii) pre-hairpin intermediate, (iii) fusion active (fusogenic), and (iv) post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that the gp41 N-terminal heptad repeat (NHR) and C-terminal heptad repeat (CHR) domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA-approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.

  20. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning.

    Directory of Open Access Journals (Sweden)

    Kristoffer Carl Aberg

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedback, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depends on the associated reward values and internal motivational drives, possibly determined by personality traits.
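
    The computational mechanism alluded to above (different learning speeds after positive and negative outcomes) is commonly formalized as a reinforcement learning update with two learning rates, sketched below; the task structure and the fitted parameter values of the study are not reproduced.

```python
def update_q(q_value, reward, alpha_gain=0.3, alpha_loss=0.1):
    """Q-learning update with separate learning rates for positive and
    negative prediction errors, as commonly used to model approach vs.
    avoidance learning (parameter values are illustrative)."""
    prediction_error = reward - q_value
    alpha = alpha_gain if prediction_error >= 0 else alpha_loss
    return q_value + alpha * prediction_error

# Usage sketch: a larger alpha_gain than alpha_loss makes the agent learn
# faster from rewards than from punishments (an "approach" learning style).
```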