Heterogeneous Computing in Economics: A Simplified Approach
Dziubinski, Matt P.; Grassi, Stefano
This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of C++ Accelerated Massive Parallelism (C++ AMP), recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a ...
A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets
Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele
2016-12-01
Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach, developed in an attempt to optimize the oxidizer injector design, is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out with a commercial solver to investigate several injection configurations, with the aim of increasing the fuel regression rate and minimizing consumption unevenness while still favoring the establishment of flow recirculation at the motor head end; this recirculation, which is generated with an axial nozzle injector, has been demonstrated to promote combustion stability as well as larger efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples with typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and a uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of the flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allowed validation of the employed design approach. Finally, an optimized injector design is proposed.
A simplified approach for the computation of steady two-phase flow in inverted siphons.
Diogo, A Freire; Oliveira, Maria C
2016-01-15
Hydraulic, sanitary, and sulfide control conditions of inverted siphons, particularly in large wastewater systems, can be substantially improved by continuous air injection in the base of the inclined rising branch. This paper presents a simplified approach that was developed for the two-phase flow of the rising branch using the energy equation for a steady pipe flow, based on the average fluid fraction, observed slippage between phases, and isothermal assumption. As in a conventional siphon design, open channel steady uniform flow is assumed in inlet and outlet chambers, corresponding to the wastewater hydraulic characteristics in the upstream and downstream sewers, and the descending branch operates in steady uniform single-phase pipe flow. The proposed approach is tested and compared with data obtained in an experimental siphon setup with two plastic barrels of different diameters operating separately as in a single-barrel siphon. Although the formulations developed are very simple, the results show a good adjustment for the set of the parameters used and conditions tested and are promising mainly for sanitary siphons with relatively moderate heights of the ascending branch. Copyright © 2015 Elsevier Ltd. All rights reserved.
A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation
Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)
2012-06-15
Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers, using the third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within
Jafarian, Yaser; Ghorbani, Ali; Ahmadi, Omid
2014-09-01
Lateral deformation of liquefiable soil is a cause of much damage during earthquakes, reportedly more than other forms of liquefaction-induced ground failures. Researchers have presented studies in which the liquefied soil is considered as viscous fluid. In this manner, the liquefied soil behaves as non-Newtonian fluid, whose viscosity decreases as the shear strain rate increases. The current study incorporates computational fluid dynamics to propose a simplified dynamic analysis for the liquefaction-induced lateral deformation of earth slopes. The numerical procedure involves a quasi-linear elastic model for small to moderate strains and a Bingham fluid model for large strain states during liquefaction. An iterative procedure is considered to estimate the strain-compatible shear stiffness of soil. The post-liquefaction residual strength of soil is considered as the initial Bingham viscosity. Performance of the numerical procedure is examined by using the results of centrifuge model and shaking table tests together with some field observations of lateral ground deformation. The results demonstrate that the proposed procedure predicts the time history of lateral ground deformation with a reasonable degree of precision.
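The shear-thinning behaviour described above can be sketched with the standard Bingham-plastic relation for apparent viscosity. This is the textbook formula, not code from the paper, and the parameter values in the test below are purely illustrative:

```python
def bingham_apparent_viscosity(shear_rate, yield_stress, plastic_viscosity):
    """Apparent viscosity of a Bingham fluid.

    The shear stress is tau = yield_stress + plastic_viscosity * shear_rate,
    so the apparent viscosity tau / shear_rate decreases as the shear strain
    rate increases, matching the non-Newtonian behaviour of liquefied soil
    described in the abstract above.
    """
    return plastic_viscosity + yield_stress / shear_rate
```

In the paper's procedure the post-liquefaction residual strength plays the role of the initial Bingham parameter; here the inputs are generic.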
Cloud computing can simplify HIT infrastructure management.
Glaser, John
2011-08-01
Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.
PSHED: a simplified approach to developing parallel programs
Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.
1992-01-01
This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on the BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package, called PSHED, provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs
Utilization of handheld computing to simplify compliance
Galvin, G.; Rasmussen, J.; Haines, A.
2008-01-01
Monitoring job site performance and building a continually improving organization is an ongoing challenge for operators of process and power generation facilities. Stakeholders need to accurately capture records of quality and safety compliance, job progress, and operational experiences (OPEX). This paper explores the use of technology-enabled processes as a means of simplifying compliance with quality, safety, administrative, maintenance and operations activities. The discussion explores a number of emerging technologies and their application to simplifying task execution and process compliance. The paper also discusses methodologies to refine processes by trending improvements in compliance and continually optimizing and simplifying through the use of technology. (author)
Quantitative whole body scintigraphy - a simplified approach
Marienhagen, J.; Maenner, P.; Bock, E.; Schoenberger, J.; Eilles, C.
1996-01-01
In this paper we present investigations of a simplified method of quantitative whole body scintigraphy using a dual-head LFOV gamma camera and a calibration algorithm without the need for additional attenuation or scatter correction. Validation of this approach with an anthropomorphic phantom as well as in patient studies showed high accuracy in the quantification of whole body activity (102.8% and 97.72%, respectively); by contrast, organ activities were recovered with an error range of up to 12%. The described method can easily be performed using commercially available software packages and is recommendable especially for quantitative whole body scintigraphy in a clinical setting. (orig.)
A simplified approach to evaluating severe accident source term for PWR
Huang, Gaofeng; Tong, Lili; Cao, Xuewu
2014-01-01
Highlights: • Traditional source term evaluation approaches have been studied. • A simplified approach of source term evaluation for 600 MW PWR is studied. • Five release categories are established. - Abstract: For early design of NPPs, no specific severe accident source term evaluation was considered. Some general source terms have been used for some NPPs. In order to implement a best estimate, a special source term evaluation should be implemented for an NPP. Traditional source term evaluation approaches (mechanism approach and parametric approach) have some difficulties associated with their implementation. The traditional approaches are not consistent with cost-benefit assessment. A simplified approach for evaluating severe accident source term for PWR is studied. For the simplified approach, a simplified containment event tree is established. According to representative cases selection, weighted coefficient evaluation, computation of representative source term cases and weighted computation, five containment release categories are established, including containment bypass, containment isolation failure, containment early failure, containment late failure and intact containment
A simplified approach for simulation of wake meandering
Thomsen, Kenneth; Aagaard Madsen, H.; Larsen, Gunner; Juul Larsen, T.
2006-03-15
This fact-sheet describes a simplified approach for one part of the recently developed dynamic wake model for aeroelastic simulations of wind turbines operating in wake. The part described in this fact-sheet concerns the meandering process only; the other part of the simplified approach, the wake deficit profile, is outside the scope of the present fact-sheet. Work on simplified models for the wake deficit profile is ongoing. (au)
Computer programs simplify optical system analysis
1965-01-01
The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.
The simplified models approach to constraining supersymmetry
Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)
2015-07-01
The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and to confront the theoretical predictions against the experimental results. The achievements, as well as the strengths and limitations, of this approach are presented for the NSUSY scenario.
Delayed ripple counter simplifies square-root computation
Cliff, R.
1965-01-01
Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
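The subtraction scheme described above can be sketched in a few lines. The abstract does not spell out the exact sequence subtracted; the classic formulation of this technique subtracts successive odd integers, since 1 + 3 + ... + (2k−1) = k², so the count of completed subtractions is the integer square root:

```python
def isqrt_by_subtraction(n: int) -> int:
    """Integer square root via repeated subtraction of successive odd numbers.

    The register starts at n; odd numbers 1, 3, 5, ... are subtracted until
    the next subtraction would go negative. The number of subtractions
    performed is floor(sqrt(n)), because the sum of the first k odd numbers
    is k**2.
    """
    count = 0
    odd = 1
    while n >= odd:
        n -= odd
        odd += 2
        count += 1
    return count
```

In hardware this maps naturally onto a ripple counter, as the abstract describes; the Python above only illustrates the arithmetic.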
The Harmonic Oscillator–A Simplified Approach
L. R. Ganesan
2008-01-01
Among the early problems in quantum chemistry, the one-dimensional harmonic oscillator problem is an important one, providing a valuable exercise in the study of quantum mechanical methods. There are several approaches to this problem: the time-honoured infinite series method, the ladder operator method, etc. A method which is much shorter and mathematically simpler is presented here.
Simplified Approach to Predicting Rough Surface Transition
Boyle, Robert J.; Stripf, Matthias
2009-01-01
Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.
A simplified approach to design for assembly
Moultrie, James; Maier, Anja
2014-01-01
The basic principles of design for assembly (DfA) are well established. This paper presents a short review of the development of DfA approaches before presenting a new tool in which these principles are packaged for use in teams, both in an industrial and an educational context. The fundamental consideration in the design of this tool is to encourage wide team participation from across an organisation; the tool is thus physical rather than software-based. This tool builds on the process developed by Appleton whilst at the University of Cambridge. In addition to the traditional analysis of component fitting...
A simplified computational memory model from information processing
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-01-01
This paper proposes a computational model for memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neurons or brain cortices based on biology and graph theories, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from an information processing view. PMID:27876847
A simplified approach for the coupling of excitation energy transfer
Shi Bo [Hefei National Laboratory for Physical Science at Microscale, University of Science and Technology of China, Hefei 230026 (China); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Gao Fang, E-mail: gaofang@iim.ac.cn [Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031 (China); State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); Liang Wanzhen [Hefei National Laboratory for Physical Science at Microscale, University of Science and Technology of China, Hefei 230026 (China); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China)
2012-02-06
Highlights: • We propose a simple method to calculate the coupling of singlet-to-singlet and triplet-to-triplet energy transfer. • The Coulomb term is the major contribution to the coupling of singlet-to-singlet energy transfer. • The effect of the intermolecular charge-transfer states dominates in triplet-to-triplet energy transfer. • This method can be extended by including correlated wavefunctions. - Abstract: A simplified approach for computing the electronic coupling of nonradiative excitation-energy transfer is proposed by following Scholes et al.'s construction of the initial and final states [G.D. Scholes, R.D. Harcourt, K.P. Ghiggino, J. Chem. Phys. 102 (1995) 9574]. The simplification is realized through defining a set of orthogonalized localized MOs, which include the polarization effect of the charge densities. The method allows calculating the coupling of both singlet-to-singlet and triplet-to-triplet energy transfer. Numerical tests are performed for a few dimers with different intermolecular orientations, and the results demonstrate that the Coulomb term is the major contribution to the coupling of singlet-to-singlet energy transfer, whereas in the case of triplet-to-triplet energy transfer the dominant effect arises from the intermolecular charge-transfer states. The present application is at the Hartree-Fock level. However, correlated wavefunctions, which are normally expanded in terms of determinant wavefunctions, can be employed in a similar way.
A simplified approach to detect undervoltage tripping of wind generators
Sigrist, Lukas; Rouco, Luis [Universidad Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica
2012-07-01
This paper proposes a simplified but fast approach based on a Norton equivalent of wind generators to detect undervoltage tripping of wind generators. This approach is successfully applied to a real wind farm. The relevant grid code requires the wind farm to withstand a voltage dip of 0% retained voltage. The ability of the wind generators to raise the voltage supplying reactive current and to avoid undervoltage tripping is investigated. The obtained results are also compared with the results obtained from detailed dynamic simulations, which make use of wind generator models complying with the relevant grid code. (orig.)
Donato Hernández Fusilier
2012-02-01
The Multiple Document Interface (MDI) is a Microsoft Windows specification that allows managing multiple documents using a single graphic interface application. An MDI application allows opening several documents simultaneously; only one document is active at a particular time. MDI applications can be deployed using Win32 or the Microsoft Foundation Classes (MFC). Programs developed using Win32 are faster than those using MFC. However, Win32 applications are difficult to implement and prone to errors. It should be mentioned that learning how to properly use MFC to deploy MDI applications is not simple, and performance is typically worse than that of Win32 applications. A method to simplify the development of MDI applications using Object-Oriented Programming (OOP) is proposed. It is shown that this method generates compact code that is easier to read and maintain than other methods (i.e., MFC). Finally, it is demonstrated that the proposed method allows the rapid development of MDI applications without sacrificing application performance.
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
Simplified approach for estimating large early release frequency
Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.
1998-04-01
The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB
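At its core, the simplified estimate described above weights the core damage frequency by the conditional probabilities of the containment release categories that produce a large early release. A minimal sketch, where the category names and all numerical values are purely illustrative assumptions, not figures from the paper:

```python
def lerf(cdf, conditional_probs):
    """Large Early Release Frequency as the core damage frequency (per
    reactor-year) times the summed conditional probabilities of the
    containment failure modes leading to a large early release
    (e.g. bypass, isolation failure, early structural failure)."""
    return cdf * sum(conditional_probs.values())

# Illustrative numbers only:
example = lerf(1e-5, {"bypass": 0.02,
                      "isolation_failure": 0.01,
                      "early_failure": 0.03})
```

Late containment failure and intact containment contribute to other risk metrics but, by definition, not to LERF.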
Numerical Simulation of Incremental Sheet Forming by Simplified Approach
Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.
2011-01-01
The Incremental Sheet Forming (ISF) is a process that can transform a flat metal sheet into a 3D complex part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.
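The imposed-displacement idea can be illustrated geometrically: any node found inside the hemispherical tool is simply assigned a position on the tool surface instead of being resolved by a contact algorithm. A minimal sketch; the radial-projection rule below is our assumption for illustration, as the abstract does not specify how the contact position is computed:

```python
import math

def impose_tool_contact(nodes, tool_center, radius):
    """Prescribe displacements for nodes penetrating a spherical tool.

    Each node found strictly inside the sphere of the given radius centred
    at tool_center is projected radially back onto the sphere surface;
    all other nodes are returned unchanged.
    """
    cx, cy, cz = tool_center
    out = []
    for (x, y, z) in nodes:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if 0.0 < d < radius:
            s = radius / d  # scale factor pushing the node to the surface
            out.append((cx + dx * s, cy + dy * s, cz + dz * s))
        else:
            out.append((x, y, z))
    return out
```

In a real ISF simulation these prescribed positions would feed the finite element solver as displacement boundary conditions at each tool increment.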
The Computational Properties of a Simplified Cortical Column Model.
Cain, Nicholas; Iyer, Ramakrishnan; Koch, Christof; Mihalas, Stefan
2016-09-01
The mammalian neocortex has a repetitious, laminar structure and performs functions integral to higher cognitive processes, including sensory perception, memory, and coordinated motor output. What computations does this circuitry subserve that link these unique structural elements to their function? Potjans and Diesmann (2014) parameterized a four-layer, two cell type (i.e. excitatory and inhibitory) model of a cortical column with homogeneous populations and cell type dependent connection probabilities. We implement a version of their model using a displacement integro-partial differential equation (DiPDE) population density model. This approach, exact in the limit of large homogeneous populations, provides a fast numerical method to solve equations describing the full probability density distribution of neuronal membrane potentials. It lends itself to quickly analyzing the mean response properties of population-scale firing rate dynamics. We use this strategy to examine the input-output relationship of the Potjans and Diesmann cortical column model to understand its computational properties. When inputs are constrained to jointly and equally target excitatory and inhibitory neurons, we find a large linear regime where the effect of a multi-layer input signal can be reduced to a linear combination of component signals. One of these, a simple subtractive operation, can act as an error signal passed between hierarchical processing stages.
Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry
Salari, K.; McWherter-Payne, M.
2003-01-01
For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces
Barthelet, B.; Ardillon, E.
1997-01-01
The flaw acceptance rules in nuclear components rely on deterministic criteria supposed to ensure the safe operation of plants. The interest of having a reliable method of evaluating the safety margins and the integrity of components led Electricite de France to launch a study linking safety factors with the requested reliability. A simplified analytical probabilistic approach is developed to analyse the failure risk in Fracture Mechanics. Assuming lognormal distributions of the main random variables, it is possible, considering a simple Linear Elastic Fracture Mechanics model, to determine the failure probability as a function of mean values and logarithmic standard deviations. The 'design' failure point can be calculated analytically. Partial safety factors on the main variables (stress, crack size, material toughness) are obtained in relation to reliability target values. The approach is generalized to elastic-plastic Fracture Mechanics (piping) by fitting J as a power-law function of stress, crack size and yield strength. The simplified approach is validated by detailed probabilistic computations with the PROBAN computer program. Assuming reasonable coefficients of variation (logarithmic standard deviations), the method helps to calibrate safety factors for different components taking into account reliability target values in normal, emergency and faulted conditions. Statistical data for the mechanical properties of the main basic materials complement the study. The work involves laboratory results and manufacturing data. The results of this study are discussed within a working group of the French in-service inspection code RSE-M. (authors)
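The analytic step described above can be sketched as follows, assuming the common LEFM criterion K = Y·σ·√(πa) compared against the toughness K_Ic (the exact model in the paper may differ). With lognormal stress, crack size and toughness, ln K is normal, so the log safety margin is normal and the failure probability follows in closed form:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability(m_sigma, s_sigma, m_a, s_a, m_kic, s_kic, Y=1.0):
    """Failure probability for the LEFM criterion K = Y*sigma*sqrt(pi*a) > K_Ic
    with lognormal stress sigma, crack size a and toughness K_Ic.

    m_* are means of the logarithms, s_* are logarithmic standard deviations.
    ln K is normal with mean mu_K = ln Y + m_sigma + 0.5*(ln(pi) + m_a) and
    standard deviation sqrt(s_sigma**2 + 0.25*s_a**2); the failure probability
    is Phi(-beta) where beta is the standardized mean log margin.
    """
    mu_k = math.log(Y) + m_sigma + 0.5 * (math.log(math.pi) + m_a)
    sd_k = math.sqrt(s_sigma ** 2 + 0.25 * s_a ** 2)
    beta = (m_kic - mu_k) / math.sqrt(sd_k ** 2 + s_kic ** 2)
    return phi(-beta)
```

When the mean toughness equals the mean applied stress intensity (zero mean log margin), the failure probability is 0.5, and it decreases monotonically as the margin grows, which is the behaviour the partial safety factors are calibrated against.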
MAT-FLX: a simplified code for computing material balances in fuel cycle
Pierantoni, F.; Piacentini, F.
1983-01-01
This work illustrates a calculation code designed to provide a materials balance for the electronuclear fuel cycle. The calculation method is simplified but relatively precise and employs a progressive tabulated data approach.
Simplified method of computation for fatigue crack growth
Stahlberg, R.
1978-01-01
A procedure is described for drastically reducing the computation time in calculating crack growth for variable-amplitude fatigue loading when the loading sequence is periodic. By the proposed procedure, the crack growth, r, per loading is approximated as a smooth function and its reciprocal is integrated, rather than summing crack growth cycle by cycle. The savings in computation time result because only a few pointwise values of r must be computed to generate an accurate interpolation function for numerical integration. Further time savings can be achieved by selecting the stress intensity coefficient (stress intensity divided by load) as the argument of r. Once r has been obtained as a function of stress intensity coefficient for a given material, environment, and loading sequence, it applies to any configuration of cracked structure. (orig.)
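The proposed shortcut is easy to illustrate: instead of summing crack growth loading by loading, approximate the number of load applications as the integral of 1/r(a) built from a few pointwise evaluations of r. The Paris-type growth law and all constants below are illustrative stand-ins, not the report's data.

```python
from math import pi, sqrt

# Hypothetical Paris-type growth per load application; constants are
# illustrative only and not taken from the report.
C, M_EXP, DS = 1e-12, 3.0, 120.0

def r(a):
    """Crack growth increment per loading at crack length a."""
    return C * (DS * sqrt(pi * a)) ** M_EXP

def blocks_cycle_by_cycle(a0, af):
    """Reference method: apply the loading one cycle at a time."""
    a, n = a0, 0
    while a < af:
        a += r(a)
        n += 1
    return n

def blocks_by_integration(a0, af, samples=17):
    """Proposed shortcut: N ~ integral of 1/r(a) da, built from a few
    pointwise values of r via the composite Simpson rule (samples odd)."""
    h = (af - a0) / (samples - 1)
    ys = [1.0 / r(a0 + i * h) for i in range(samples)]
    return (ys[0] + ys[-1] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-2:2])) * h / 3
```

For this smooth growth law, 17 evaluations of r reproduce a sum over hundreds of thousands of individual loadings to well under one percent.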
A simplified approach to WWER-440 fuel assembly head benchmark
Muehlbauer, P.
2010-01-01
The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step in validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of results provided by other participants and results of measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to this simulation, as illustrated by case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organisers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (HP Z800 workstation with an Intel Xeon four-core CPU at 3.2 GHz and 32 GB of RAM), and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. Both parts of the geometry were connected via a boundary profile file generated at a cross-section where the effect of the spacer grid is still felt but the effect of the outflow boundary condition used in the computations of the inlet part of the geometry is negligible. Computation proceeded in several steps: it started with a basic mesh, the standard k-ε model of turbulence with standard wall functions, and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10^-3) and local adaptation of near-wall meshes where needed, the realizable k-ε model of turbulence was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperatures at the thermocouples and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of spacers was neglected. After results of measurements are available, standard validation
Simplified approach to dynamic process modelling. Annex 4
Danilytchev, A.; Elistratov, D.; Stogov, V.
2010-01-01
This document presents the OKBM contribution to the analysis of a benchmark of the BN-600 reactor hybrid core with simultaneous loading of uranium fuel and MOX, within the framework of the international IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. In accordance with Action 12 defined during the second RCM, a simplified transient analysis was carried out on the basis of the sets of reactivity coefficients presented by all CRP participants. The purpose of the present comparison is to evaluate the spread in the basic transient parameters arising from the spread in the reactivity coefficients used. The initial stage of a ULOF accident was calculated on the simplified model using the SAS4A code.
Steam generator transient studies using a simplified two-fluid computer code
Munshi, P.; Bhatnagar, R.; Ram, K.S.
1985-01-01
A simplified two-fluid computer code has been used to simulate reactor-side (or primary-side) transients in a PWR steam generator. The disturbances are modelled as ramp inputs for pressure, internal energy and mass flow-rate for the primary fluid. The CPU time for a transient duration of 4 s is approx. 10 min on a DEC-1090 computer system. The results are thermodynamically consistent and encouraging for further studies. (author)
A simplified model for computing equation of state of argon plasma
Wang Caixia; Tian Yangmeng
2006-01-01
The paper presents a new simplified model for computing the equation of state and ionization degree of argon plasma, based on the Thomas-Fermi (TF) statistical model: the authors fitted the numerical results for the ionization potential calculated with the TF statistical model and obtained an analytical function of the potential versus the degree of ionization, then calculated the ionization potential and the average degree of ionization of argon versus temperature and density in the local thermal equilibrium case at 10-1000 eV. The results of this simplified model are basically in agreement with several sets of theoretical and experimental data. The simplified model can be used to calculate the equation of state of plasma mixtures and is expected to find wider use in the field of EML technology involving strongly ionized plasmas. (authors)
Simplified approach for quantitative calculations of optical pumping
Atoneche, Fred; Kastberg, Anders
2017-01-01
We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks. (paper)
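A minimal sketch of such linearised population rate equations, with populations driven by fixed transfer rates between sublevels, can be integrated with forward Euler on any modest computer; the three-level system and all rates below are invented for illustration.

```python
def evolve(pops, transfer, dt, steps):
    """Forward-Euler integration of the linearised rate equations.
    transfer[i][j] is the pumping rate from sublevel j into sublevel i;
    each transfer conserves total population by construction."""
    n = len(pops)
    for _ in range(steps):
        new = pops[:]
        for i in range(n):
            for j in range(n):
                flow = transfer[i][j] * pops[j] * dt   # population moved j -> i
                new[i] += flow
                new[j] -= flow
        pops = new
    return pops

# Three sublevels; light pumps 0 -> 1 -> 2 and level 2 is dark, so all
# population is eventually pumped into it (rates are invented).
T = [[0.0, 0.0, 0.0],
     [0.3, 0.0, 0.0],
     [0.0, 0.5, 0.0]]
final = evolve([1.0, 0.0, 0.0], T, dt=0.01, steps=20000)
```

Complications such as level degeneracy, impure polarisation, or a magnetic field only change the entries of the transfer matrix, not the integration scheme, which is what makes the approach attractive for teaching.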
Simplified physical approach of the control by eddy current
Zergoug, M.
1986-01-01
The aim of this study is to calculate the variation of the resistance of a coil surrounding a non-ferromagnetic cylindrical core in the presence of a flaw. The flaw is a longitudinal notch of infinite length and rectangular section. The impedance variation is calculated from the geometric repartition of the flux lines in the core; this repartition is a function of the flaw, the variation being produced by the presence, in the non-ferromagnetic conducting core, of an emerging axial flaw. It is therefore possible to obtain in real time, on a computer screen, the image of a long standard flaw which may produce the observed impedance variation.
A simplified approach for the molecular classification of glioblastomas.
Marie Le Mercier
Glioblastoma (GBM) is the most common malignant primary brain tumor in adults and exhibits striking aggressiveness. Although GBMs constitute a single histological entity, they exhibit considerable variability in biological behavior, resulting in significant differences in terms of prognosis and response to treatment. In an attempt to better understand the biology of GBM, many groups have performed high-scale profiling studies based on gene or protein expression. These studies have revealed the existence of several GBM subtypes. Although there is not yet a clear consensus, two to four major subtypes have been identified. Interestingly, these different subtypes are associated with both differential prognoses and responses to therapy. In the present study, we investigated an alternative immunohistochemistry (IHC)-based approach to achieve a molecular classification for GBM. For this purpose, a cohort of 100 surgical GBM samples was retrospectively evaluated by immunohistochemical analysis of EGFR, PDGFRA and p53. The quantitative analysis of these immunostainings allowed us to identify the following two GBM subtypes: the "Classical-like" (CL) subtype, characterized by EGFR-positive and p53- and PDGFRA-negative staining, and the "Proneural-like" (PNL) subtype, characterized by p53- and/or PDGFRA-positive staining. This classification represents an independent prognostic factor in terms of overall survival compared to age, extent of resection and adjuvant treatment, with a significantly longer survival associated with the PNL subtype. Moreover, these two GBM subtypes exhibited different responses to chemotherapy. The addition of temozolomide to conventional radiotherapy significantly improved the survival of patients belonging to the CL subtype, but it did not affect the survival of patients belonging to the PNL subtype. We have thus shown that it is possible to differentiate between clinically relevant subtypes of GBM by using IHC.
Ebersole, M. M.; Lecoq, P. E.
1968-01-01
This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process together with a description of the computer algorithms used to develop the program mechanization are presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and program user operating procedures are included as appendixes.
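The core of the Paull and Unger process, finding the compatible state pairs of an incompletely specified machine, can be sketched directly: start from all pairs without a direct output conflict, then iteratively drop pairs whose implied next-state pair is already incompatible. The toy machine below is our own example, not one from the report.

```python
from itertools import combinations

# Incompletely specified machine (a toy example, not from the report):
# delta[state][inp] = (next_state, output), with None where unspecified.
delta = {
    "A": {0: ("B", 0),  1: ("C", None)},
    "B": {0: ("A", 0),  1: ("C", 1)},
    "C": {0: (None, 1), 1: ("A", 1)},
    "D": {0: ("C", 0),  1: ("B", None)},
}

def compatible_pairs(delta):
    states, inputs = list(delta), [0, 1]
    # Step 1: rule out pairs with a direct output conflict.
    comp = set()
    for p, q in combinations(states, 2):
        if all(delta[p][i][1] is None or delta[q][i][1] is None
               or delta[p][i][1] == delta[q][i][1] for i in inputs):
            comp.add(frozenset((p, q)))
    # Step 2: iterate to a fixpoint, dropping pairs whose implied
    # next-state pair is already known to be incompatible.
    changed = True
    while changed:
        changed = False
        for pair in list(comp):
            p, q = sorted(pair)
            for i in inputs:
                np_, nq = delta[p][i][0], delta[q][i][0]
                if np_ and nq and np_ != nq and frozenset((np_, nq)) not in comp:
                    comp.discard(pair)
                    changed = True
                    break
    return comp
```

Here A and D agree on every specified output, but the pair is discarded in step 2 because input 0 sends them to the incompatible pair (B, C); only {A, B} survives. The surviving compatibles are what the full process then covers with maximal compatibility classes.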
McCloud, Peter L.
2010-01-01
Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.
Utilizing of computational tools on the modelling of a simplified problem of neutron shielding
Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico]. E-mails: fsrlessa@gmail.com; gmplatt@iprj.uerj.br; halves@iprj.uerj.br
2007-07-01
At the current technology level, many problems are investigated through computational simulations, whose results are generally satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are carried out using computational tools, such as computers of high capacity and their application systems, to perform complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and space that computational modelling provides, there is a financial economy for scientists. Computational modelling is a modern methodology of investigation that requires the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of such phenomena, the generation of a numerical algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of these results (editors of Cartesian graphs, for instance). In this work, we used several computational tools, implementing numerical methods and a deterministic model, in the study and analysis of a well-known, simplified problem of nuclear engineering (neutron transport), simulating a theoretical neutron shielding problem with hypothetical physical-material parameters, with the neutron flux in each spatial node programmed in Scilab version 4.0. (author)
CRUSH1: a simplified computer program for impact analysis of radioactive material transport casks
Ikushima, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-07-01
In drop impact analyses for radioactive material transport casks, it has become possible to perform detailed calculations using interaction-evaluation computer programs such as DYNA2D, DYNA3D, PISCES and HONDO. However, considerable cost and computer time are required to perform analyses with these programs. To meet the above requirements, a simplified computer program, CRUSH1, has been developed. CRUSH1 is a static calculation program capable of evaluating the maximum acceleration of cask bodies and the maximum deformation of shock absorbers using a Uniaxial Displacement Method (UDM). CRUSH1 is a revised version of CRUSH. The main revisions of the program are as follows: (1) not only mainframe computers but also workstations (OS UNIX) and personal computers (OS Windows 3.1 or Windows NT) can be used to run CRUSH1, and (2) the input data set is revised. In the paper, a brief illustration of the calculation method using UDM is presented. The second section presents comparisons between UDM and the detailed method. The third section provides a user's guide for CRUSH1. (author)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
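The computational saving over the Monte Carlo SBM comes from replacing random sampling of contact angles with a direct quadrature over the Gaussian contact-angle distribution. A sketch under simplified, dimensionless CNT-style assumptions (the lumped parameters `dg` and `ajt` and all numbers in the test are illustrative, not fitted values):

```python
from math import cos, exp, pi, sqrt

def frozen_fraction(mu_theta, sigma_theta, dg, ajt, n_quad=200):
    """Ensemble frozen fraction: integrate the per-site freezing probability
    1 - exp(-ajt * exp(-f(theta)*dg)) over a Gaussian contact-angle
    distribution (midpoint rule) instead of Monte Carlo sampling.
    dg (energy-barrier scale) and ajt (rate*area*time) are dimensionless
    lumped parameters, chosen for illustration."""
    f = lambda th: (2 + cos(th)) * (1 - cos(th)) ** 2 / 4.0   # CNT shape factor
    norm = 1.0 / (sigma_theta * sqrt(2 * pi))
    lo = max(0.0, mu_theta - 5 * sigma_theta)
    hi = min(pi, mu_theta + 5 * sigma_theta)
    h = (hi - lo) / n_quad
    total = 0.0
    for k in range(n_quad):
        th = lo + (k + 0.5) * h
        weight = norm * exp(-0.5 * ((th - mu_theta) / sigma_theta) ** 2)
        p_frozen = 1.0 - exp(-ajt * exp(-f(th) * dg))
        total += p_frozen * weight * h
    return total
```

A few hundred quadrature points replace thousands of Monte Carlo particles, which is what makes the scheme cheap enough for cloud parcel models.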
A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach
Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent
2016-01-01
The stochastic mean-field (SMF) approach allows correlations beyond the mean field to be treated using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders while retaining only specific terms of the general BBGKY hierarchy. The use of the simplified BBGKY hierarchy is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak-coupling regime, to gain physical insight into effects beyond the mean field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak- and strong-coupling regimes. In the strong-coupling regime, it can only be used for short-time evolution. In that case, it gives information on the evolution time-scale close to a saddle point associated with a quantum phase transition. For long-time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and only the full SMF with initial sampling leads to reasonable results. (orig.)
BrainSignals Revisited: Simplifying a Computational Model of Cerebral Physiology.
Matthew Caldwell
Multimodal monitoring of brain state is important both for the investigation of healthy cerebral physiology and to inform clinical decision making in conditions of injury and disease. Near-infrared spectroscopy is an instrument modality that allows non-invasive measurement of several physiological variables of clinical interest, notably haemoglobin oxygenation and the redox state of the metabolic enzyme cytochrome c oxidase. Interpreting such measurements requires the integration of multiple signals from different sources to try to understand the physiological states giving rise to them. We have previously published several computational models to assist with such interpretation. Like many models in the realm of Systems Biology, these are complex and dependent on many parameters that can be difficult or impossible to measure precisely. Taking one such model, BrainSignals, as a starting point, we have developed several variant models in which specific regions of complexity are substituted with much simpler linear approximations. We demonstrate that model behaviour can be maintained whilst achieving a significant reduction in complexity, provided that the linearity assumptions hold. The simplified models have been tested for applicability with simulated data and experimental data from healthy adults undergoing a hypercapnia challenge, but relevance to different physiological and pathophysiological conditions will require specific testing. In conditions where the simplified models are applicable, their greater efficiency has potential to allow their use at the bedside to help interpret clinical data in near real-time.
Chang, P.Y.
1978-02-01
A simplified version of the input instructions for the computer program ANSYS is presented for the non-linear elastoplastic analysis of a ship collision protection barrier structure. All essential information necessary for the grillage model is summarized, while the instructions for other types of problems are eliminated. A benchmark example is given for checking the computer program.
Kim, Chang Hyun
1997-02-01
A simplified computational scheme for thermal analysis of LWR spent fuel dry storage and transportation casks has been developed using a two-step thermal analysis method incorporating an effective thermal conductivity model for the homogenized spent fuel assembly. Although many computer codes and analytical models have been developed for thermal analysis of dry storage and/or transportation casks, some difficulties arise from the complexity of the geometry, including the rod bundles of spent fuel, and from the heat transfer phenomena in the cavity of the cask. Particularly, if disk-type structures such as fuel baskets and aluminium heat transfer fins are included, the thermal analysis problems in the cavity are very complex. To overcome these difficulties, a cylindrical coordinate system is adopted to calculate the temperature profile of a cylindrical cask body using the multiple-cylinder model in the step-1 analysis of the present study. In the step-2 analysis, a Cartesian coordinate system is adopted to calculate the temperature distributions of the disk-type structures such as the fuel basket and aluminium heat transfer fins using a three-dimensional conduction analysis model. The effective thermal conductivity for the homogenized spent fuel assembly, based on the Manteufel and Todreas model, is incorporated in the step-2 analysis to predict the maximum fuel temperature. The presented two-step computational scheme has been performed using the existing HEATING 7.2 code, and the effective thermal conductivity for the homogenized spent fuel assembly has been calculated by additional numerical analyses. Sample analyses of five cases, including a normal transportation condition, are performed for NAC-STC to examine the applicability of the presented simplified computational scheme to thermal analysis of large LWR spent fuel dry storage and transportation casks and the heat transfer characteristics in the cavity of the cask with the disk-type structures.
Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography
Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka
1989-01-01
Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and a final image is displayed on the film. In the angiograms performed with CR, the spatial frequency components can be enhanced for the easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity and a latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiograms using CR and a mobile X-ray unit were taken after the manual injection of a small amount of a contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good, compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method is found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of a contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)
A programming approach to computability
Kfoury, A J; Arbib, Michael A
1982-01-01
Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...
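The while-program idea is small enough to sketch as an interpreter: natural-number variables, increment, a monus decrement, and a while-not-zero loop are already Turing-complete. The program encoding below is our own illustrative choice, not the book's concrete syntax.

```python
def run(prog, env):
    """Interpret a while-program: statements are ("inc", x), ("dec", x)
    (monus: never below zero) and ("while", x, body), over natural-number
    variables held in the dict env."""
    for stmt in prog:
        op = stmt[0]
        if op == "inc":
            env[stmt[1]] = env.get(stmt[1], 0) + 1
        elif op == "dec":
            env[stmt[1]] = max(0, env.get(stmt[1], 0) - 1)
        elif op == "while":
            while env.get(stmt[1], 0) != 0:
                run(stmt[2], env)
    return env

# Addition y := y + x, x := 0, written with the three constructs alone.
add = [("while", "x", [("dec", "x"), ("inc", "y")])]
```

Nothing in the interpreter bounds the loop, so the Halting Problem for this toy language is every bit as unsolvable as for Turing machines, which is the pedagogical point.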
Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H
2018-03-01
To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between simplified and corresponding comprehensive scores (P idiopathic pulmonary fibrosis. The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
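The saving has a simple source: texture statistics that depend only on the grey-level sum or difference can be computed from 1-D histograms without ever forming the 2-D co-occurrence matrix. A sketch for the contrast feature, which by construction is identical in both formulations:

```python
from collections import Counter

def glcm_contrast(img, dx, dy):
    """Reference: contrast from the full grey-level co-occurrence matrix."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            pairs[(img[y][x], img[y + dy][x + dx])] += 1
    n = sum(pairs.values())
    return sum(c * (i - j) ** 2 for (i, j), c in pairs.items()) / n

def sadh_contrast(img, dx, dy):
    """Same statistic from the 1-D difference histogram alone: contrast
    depends only on i - j, so the G x G matrix is never stored."""
    h, w = len(img), len(img[0])
    diffs = Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            diffs[img[y][x] - img[y + dy][x + dx]] += 1
    n = sum(diffs.values())
    return sum(c * d ** 2 for d, c in diffs.items()) / n
```

With G grey levels the co-occurrence matrix needs O(G^2) storage, while the sum and difference histograms need O(G), which is where the reported storage savings come from.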
Marko Mladineo
2016-12-01
In the last 20 years, priority setting in mine actions, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the aspect of mine action, where the stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, or similar, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended using a module for developing custom priority setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify the multi-criteria analysis based on the PROMETHEE method. Therefore, a simplified PROMETHEE method based on statistical analysis for automated suggestions of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations is presented in this paper. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported using modules for providing spatial data based on a geographic information system (GIS). In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and subsequently, certain proposals are given for further research.
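A minimal sketch of such a simplified PROMETHEE II computation, with preference-function thresholds auto-suggested from the spread of the scores (using the per-criterion standard deviation here is our illustrative choice, not necessarily the TIRAMISU rule):

```python
from statistics import pstdev

def promethee_net_flows(scores, weights, maximize):
    """PROMETHEE II net outranking flows with a linear preference function
    whose threshold is auto-set per criterion from the score spread.
    scores[a][j] is the evaluation of alternative a on criterion j."""
    n, m = len(scores), len(weights)
    thresholds = [pstdev([row[j] for row in scores]) or 1.0 for j in range(m)]

    def pref(a, b, j):
        d = scores[a][j] - scores[b][j]
        if not maximize[j]:
            d = -d
        return min(max(d / thresholds[j], 0.0), 1.0)   # linear, capped at 1

    flows = []
    for a in range(n):
        phi = 0.0
        for b in range(n):
            if a != b:
                phi += sum(weights[j] * (pref(a, b, j) - pref(b, a, j))
                           for j in range(m))
        flows.append(phi / (n - 1))
    return flows
```

Ranking alternatives by net flow gives the priority list; auto-suggesting the thresholds removes the one input that non-specialist stakeholders find hardest to provide, which is the simplification the paper argues for.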
Computer architecture a quantitative approach
Hennessy, John L
2019-01-01
Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.
Computational approaches to energy materials
Catlow, Richard; Walsh, Aron
2013-01-01
The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process. Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the
A comprehensive approach to dark matter studies: exploration of simplified top-philic models
Arina, Chiara; Backović, Mihailo [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Conte, Eric [Groupe de Recherche de Physique des Hautes Énergies (GRPHE), Université de Haute-Alsace,IUT Colmar, F-68008 Colmar Cedex (France); Fuks, Benjamin [Sorbonne Universités, UPMC University Paris 06, UMR 7589, LPTHE, F-75005, Paris (France); CNRS, UMR 7589, LPTHE, F-75005, Paris (France); Guo, Jun [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,Chinese Academy of Sciences, Beijing 100190 (China); Institut Pluridisciplinaire Hubert Curien/Département Recherches Subatomiques,Université de Strasbourg/CNRS-IN2P3, F-67037 Strasbourg (France); Heisig, Jan [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, D-52056 Aachen (Germany); Hespel, Benoît [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Krämer, Michael [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 
16, D-52056 Aachen (Germany); Maltoni, Fabio; Martini, Antony [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium); Mawatari, Kentarou [Laboratoire de Physique Subatomique et de Cosmologie, Université Grenoble-Alpes,CNRS/IN2P3, 53 Avenue des Martyrs, F-38026 Grenoble (France); Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel andInternational Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Pellen, Mathieu [Universität Würzburg, Institut für Theoretische Physik und Astrophysik,Emil-Hilb-Weg 22, 97074 Würzburg (Germany); Vryonidou, Eleni [Centre for Cosmology, Particle Physics and Phenomenology (CP3),Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve (Belgium)
2016-11-21
Studies of dark matter lie at the interface of collider physics, astrophysics and cosmology. Constraining models featuring dark matter candidates entails the capability to provide accurate predictions for large sets of observables and compare them to a wide spectrum of data. We present a framework which, starting from a model Lagrangian, allows one to consistently and systematically make predictions, as well as to confront those predictions with a multitude of experimental results. As an application, we consider a class of simplified dark matter models where a scalar mediator couples only to the top quark and a fermionic dark sector (i.e. the simplified top-philic dark matter model). We study in detail the complementarity of relic density, direct/indirect detection and collider searches in constraining the multi-dimensional model parameter space, and efficiently identify regions where individual approaches to dark matter detection provide the most stringent bounds. In the context of collider studies of dark matter, we point out the complementarity of LHC searches in probing different regions of the model parameter space with final states involving top quarks, photons, jets and/or missing energy. Our study of dark matter production at the LHC goes beyond the tree-level approximation and we show examples of how higher-order corrections to dark matter production processes can affect the interpretation of the experimental results.
A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.
Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M
2015-11-13
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C(-1) on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
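The core bookkeeping of the approach (no decomposition while frozen; fixed first-order trajectories as a function of soil temperature once thawed) can be sketched as follows. The pool fractions, base rates, reference temperature, and Q10 value below are illustrative assumptions, not the fitted PInc-PanTher parameters.

```python
import numpy as np

def thawed_c_loss(c_stock, fractions, base_k, years, soil_temp_c, q10=2.5):
    """Cumulative C loss from a three-pool first-order decomposition model.

    c_stock: total soil C (e.g. Pg C); fractions: pool partitioning;
    base_k: decomposition rates (1/yr) at a 5 degC reference (assumed);
    soil_temp_c: annual soil temperature series from a thermal model.
    Frozen years (temperature <= 0 degC) contribute no decomposition.
    """
    pools = c_stock * np.asarray(fractions, float)
    k = np.asarray(base_k, float)
    lost = 0.0
    for t in range(years):
        if soil_temp_c[t] <= 0.0:
            continue  # frozen: C stocks do not decompose at all
        rate = k * q10 ** ((soil_temp_c[t] - 5.0) / 10.0)
        dec = pools * (1.0 - np.exp(-rate))  # annual first-order loss per pool
        pools -= dec
        lost += dec.sum()
    return lost
```

In the paper's framework the temperature series would come from the soil thermal modules of the ecosystem model simulations, and litterfall inputs (held constant here by omission) close the steady-state balance.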
Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander
2017-04-01
In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices and harvested N output, as well as the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered as a giant lysimeter system, and the N concentration at drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows 62% of export during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modelling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to perform water and N loss modelling. The hydrological simulation was calibrated with observation data at different sub-catchments. We performed a hydrograph separation validated on thermal and isotopic tracer studies and general knowledge of the behavior of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of calibrated PWNP values with the results of a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that
A simplified modelling approach for quantifying tillage effects on soil carbon stocks
Chatskikh, Dmitri; Hansen, Søren; Olesen, Jørgen E.
2009-01-01
Soil tillage has been shown to affect long-term changes in soil organic carbon (SOC) content in a number of field experiments. This paper presents a simplified approach for including effects of tillage in models of soil C turnover in the tilled-soil layer. We used an existing soil organic matter (SOM) model (CN-SIM) with standard SOC data for a homogeneous tilled layer from four long-term field experiments with conventionally tilled (CT) and no-till (NT) treatments. The SOM model was tested on data from long-term (>10 years) field trials differing in climatic conditions, soil properties, residue management and crop rotations in Australia, Brazil, the USA and Switzerland. The C input for the treatments was estimated using data on crop rotation and residue management. The SOM model was applied for both CT and NT trials without recalibration, but incorporated a 'tillage factor' (TF) to scale...
The large break LOCA evaluation method with the simplified statistic approach
Kamata, Shinya; Kubo, Kazuo
2004-01-01
The USNRC published the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology for large break LOCA, which supported the revised rule for Emergency Core Cooling System performance in 1989. USNRC Regulatory Guide 1.157 requires that the peak cladding temperature (PCT) not exceed 2200 °F with high probability at the 95th percentile. In recent years, overseas countries have developed statistical methodologies and best estimate codes with models that provide more realistic simulation of the phenomena, based on the CSAU evaluation methodology. To calculate the PCT probability distribution by Monte Carlo trials, there are approaches such as the response surface technique using polynomials, the order statistics method, etc. For the purpose of performing a rational statistical analysis, Mitsubishi Heavy Industries, Ltd (MHI) developed a statistical LOCA method using the best estimate LOCA code MCOBRA/TRAC and the simplified code HOTSPOT. HOTSPOT is a Monte Carlo heat conduction solver used to evaluate the uncertainties of the significant fuel parameters at the PCT positions of the hot rod. Direct uncertainty sensitivity studies can be performed without a response surface because the Monte Carlo simulation for key parameters can be performed in a short time using HOTSPOT. With regard to the parameter uncertainties, MHI established the treatment in which bounding conditions are given for the LOCA boundary and plant initial conditions, and the Monte Carlo simulation using HOTSPOT is applied to the significant fuel parameters. The paper describes the large break LOCA evaluation method with the simplified statistical approach and the results of the application of the method to a representative four-loop nuclear power plant. (author)
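The order statistics method mentioned above rests on Wilks' formula: the maximum of n Monte Carlo runs is a one-sided 95/95 tolerance bound on PCT once 1 - 0.95**n >= 0.95, i.e. n >= 59. A minimal sketch (the function names are illustrative, not from the paper):

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest number of Monte Carlo runs whose sample maximum is a
    one-sided coverage/confidence tolerance bound (first-order Wilks
    formula): find minimal n with 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def pct_95_95(pct_samples):
    """95/95 PCT estimate via order statistics: the sample maximum,
    valid only when enough runs were performed."""
    if len(pct_samples) < wilks_sample_size():
        raise ValueError("need at least 59 runs for a first-order 95/95 bound")
    return max(pct_samples)
```

This is why such methods avoid the response surface entirely: the acceptance criterion is checked directly against the maximum of the (fast) HOTSPOT-style Monte Carlo sample.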
Tongkun Lan
2018-01-01
AC (alternating current) system backup protection setting calculation is an important basis for ensuring the safe operation of power grids. With the increasing integration of modular multilevel converter based high voltage direct current (MMC-HVDC) into power grids, the AC system backup protection setting calculation has become a big challenge, as MMC-HVDC lacks fault self-clearance capability under pole-to-pole faults. This paper focuses on pole-to-pole fault analysis for the AC system backup protection setting calculation. The principles of pole-to-pole fault analysis are discussed first, according to the standard for AC system protection setting calculation. Then, the influence of fault resistance on the fault process is investigated. A simplified analytic approach to pole-to-pole faults in MMC-HVDC for the AC system backup protection setting calculation is proposed. In the proposed approach, the derived expressions of fundamental-frequency current are applicable under arbitrary fault resistance. The accuracy of the proposed approach was demonstrated by PSCAD/EMTDC (Power Systems Computer-Aided Design/Electromagnetic Transients including DC) simulations.
A simplified approach for slope stability analysis of uncontrolled waste dumps.
Turer, Dilek; Turer, Ahmet
2011-02-01
Slope stability analysis of municipal solid waste has always been problematic because of the heterogeneous nature of the waste materials. The requirement for large testing equipment in order to obtain representative samples has identified the need for simplified approaches to obtain the unit weight and shear strength parameters of the waste. In the present study, two of the most recently published approaches for determining the unit weight and shear strength parameters of the waste have been incorporated into a slope stability analysis using the Bishop method to prepare slope stability charts. The slope stability charts were prepared for uncontrolled waste dumps having no liner and leachate collection systems with pore pressure ratios of 0, 0.1, 0.2, 0.3, 0.4 and 0.5, considering the most critical slip surface passing through the toe of the slope. As the proposed slope stability charts were prepared by considering the change in unit weight as a function of height, they reflect field conditions better than accepting a constant unit weight approach in the stability analysis. They also streamline the selection of slope or height as a function of the desired factor of safety.
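The Bishop method used above can be sketched as a fixed-point iteration, since the factor of safety appears on both sides of Bishop's simplified equation. This is a generic textbook implementation, not the authors' charts; in particular the height-dependent unit weight enters only through the per-slice weights supplied by the caller, and the pore term uses the approximation u·b ≈ ru·W.

```python
import math

def bishop_fs(slices, c, phi_deg, ru=0.0, tol=1e-6):
    """Bishop's simplified factor of safety for a circular slip surface.

    slices: iterable of (W, alpha, l) per slice: weight, base inclination
    in radians, and base length. c, phi_deg: shear strength parameters;
    ru: pore pressure ratio (0 to 0.5 in the paper's charts).
    """
    phi = math.radians(phi_deg)
    fs = 1.0
    for _ in range(100):
        num, den = 0.0, 0.0
        for w, alpha, l in slices:
            b = l * math.cos(alpha)                       # slice width
            m_a = math.cos(alpha) + math.sin(alpha) * math.tan(phi) / fs
            num += (c * b + (w - ru * w) * math.tan(phi)) / m_a
            den += w * math.sin(alpha)
        fs_new = num / den
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs
```

Repeating the calculation over trial toe circles and taking the minimum FS is what a chart like those in the paper condenses into a single lookup.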
Zhou, Yanting; Gao, Jing; Zhu, Hongwen; Xu, Jingjing; He, Han; Gu, Lei; Wang, Hui; Chen, Jie; Ma, Danjun; Zhou, Hu; Zheng, Jing
2018-02-20
Membrane proteins may act as transporters, receptors, enzymes, and adhesion-anchors, accounting for nearly 70% of pharmaceutical drug targets. Difficulties in efficient enrichment, extraction, and solubilization still exist because of their relatively low abundance and poor solubility. A simplified membrane protein extraction approach with advantages of user-friendly sample processing procedures, good repeatability and significant effectiveness was developed in the current research for enhancing enrichment and identification of membrane proteins. This approach combining centrifugation and detergent along with LC-MS/MS successfully identified higher proportion of membrane proteins, integral proteins and transmembrane proteins in membrane fraction (76.6%, 48.1%, and 40.6%) than in total cell lysate (41.6%, 16.4%, and 13.5%), respectively. Moreover, our method tended to capture membrane proteins with high degree of hydrophobicity and number of transmembrane domains as 486 out of 2106 (23.0%) had GRAVY > 0 in membrane fraction, 488 out of 2106 (23.1%) had TMs ≥ 2. It also provided for improved identification of membrane proteins as more than 60.6% of the commonly identified membrane proteins in two cell samples were better identified in membrane fraction with higher sequence coverage. Data are available via ProteomeXchange with identifier PXD008456.
Dimitrov BD
2015-04-01
Borislav D Dimitrov,1,2 Nicola Motterlini,2,† Tom Fahey2 1Academic Unit of Primary Care and Population Sciences, University of Southampton, Southampton, United Kingdom; 2HRB Centre for Primary Care Research, Department of General Medicine, Division of Population Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland †Nicola Motterlini passed away on November 11, 2012 Objective: Estimating calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible, or are insufficient, and no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: (a) the ABCD2 rule for prediction of 7-day stroke; and (b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed by CPR distribution patterns ("derivation model"). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of "predicted:observed" risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases with miscalibration, the under
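The pooling step of the approach above can be sketched as an inverse-variance (fixed-effect) meta-analysis of log predicted:observed ratios. This is a generic sketch, not the authors' code: the variance approximation for the log ratio used here is an assumption, and the random-effects and intercept-adjustment variants are omitted.

```python
import math

def pooled_predicted_observed_rr(studies):
    """Fixed-effect pooling of predicted:observed risk ratios across
    validation studies.

    studies: iterable of (predicted_events, observed_events, n) per study.
    var(log ratio) is approximated as 1/observed - 1/n (an assumption;
    the paper's exact estimator may differ). Returns the pooled ratio
    and an approximate 95% CI.
    """
    sw, swlog = 0.0, 0.0
    for pred, obs, n in studies:
        log_rr = math.log(pred / obs)
        w = 1.0 / (1.0 / obs - 1.0 / n)   # inverse-variance weight
        sw += w
        swlog += w * log_rr
    se = math.sqrt(1.0 / sw)
    centre = swlog / sw
    return math.exp(centre), (math.exp(centre - 1.96 * se),
                              math.exp(centre + 1.96 * se))
```

A pooled ratio near 1 with a CI containing 1 indicates good calibration in aggregate; ratios far from 1 flag the miscalibrated studies discussed in the Results.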
Radioiodine treatment of hyperthyroidism with a simplified dosimetric approach. Clinical results
Giovanella, L.; De Palma, D.; Ceriani, L.; Garancini, S.; Vanoli, P.; Tordiglione, M.; Tarolo, G. L.
2000-01-01
This article evaluates the clinical effectiveness of a simplified dosimetric approach to iodine-131 treatment of hyperthyroidism due to Graves' disease or uninodular and multinodular toxic goiter. 189 patients with biochemically confirmed hyperthyroidism were enrolled; thyroid ultrasonography and scintigraphy yielded a diagnosis of Graves' disease in 43 patients, uninodular toxic goiter in 57 patients and multinodular toxic goiter in 89 patients. Cold thyroid nodules were found in 28 patients, in whom fine-needle aspiration was performed, with cytology negative for thyroid malignancy in all cases. Antithyroid drugs were stopped 5 days before radioiodine administration and, if necessary, restored 15 days after the treatment. A radioiodine uptake test was performed in all patients and the therapeutic activity calculated to obtain a minimal activity of 185 MBq in the thyroid 24 hours after administration. The minimal activity was adjusted on the basis of clinical, biochemical and imaging data to obtain a maximal activity of 370 MBq after 24 hours. Biochemical and clinical tests were scheduled at 3 and 12 months post-treatment, and thyroxine treatment was started when hypothyroidism occurred. In Graves' disease patients a mean activity of 370 MBq (range 259-555 MBq) was administered. Three months after treatment, and at least 15 days after methimazole discontinuation, 32 of 43 (74%) patients were hypothyroid, 5 of 43 (11%) euthyroid and 6 of 43 (15%) hyperthyroid. Three of the latter were immediately submitted to a new radioiodine administration, while 32 hypothyroid patients received thyroxine treatment. One year after the radioiodine treatment no patient had hyperthyroidism; 38 of 43 (89%) were on replacement treatment while 5 (11%) remained euthyroid. In uni- and multinodular toxic goiter a mean activity of 444 MBq (range 259-555 MBq) was administered. Three months post-treatment 134 of 146 (92%) patients were euthyroid and
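The simplified dosimetric rule in the abstract (administer the activity that leaves a target 185-370 MBq in the thyroid at 24 hours, given the measured 24-hour uptake fraction) reduces to a one-line calculation. This sketch is illustrative only; the clinical adjustment between the minimal and maximal retained activity is collapsed into an optional override, which is an assumption.

```python
def administered_activity(uptake_24h, target_retained=185.0, max_retained=370.0,
                          retained=None):
    """Administered 131-I activity (MBq) so that the thyroid retains the
    target activity 24 h after administration.

    uptake_24h: measured 24-h thyroid uptake as a fraction of the
    administered activity. `retained` optionally raises the retained
    target (clinical adjustment), capped at max_retained.
    """
    if not 0.0 < uptake_24h <= 1.0:
        raise ValueError("uptake_24h must be a fraction in (0, 1]")
    goal = target_retained if retained is None else min(retained, max_retained)
    return goal / uptake_24h  # MBq to administer
```

For example, a patient with 50% uptake at 24 hours would receive 370 MBq to retain the minimal 185 MBq, consistent with the mean administered activities reported above.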
MOBILE CLOUD COMPUTING APPLIED TO HEALTHCARE APPROACH
Omar AlSheikSalem
2016-01-01
In the past few years it has become clear that mobile cloud computing was established by integrating mobile computing and cloud computing, gaining advantages in both storage space and processing speed. Integrating healthcare applications and services is one of the large-scale data approaches that can be adapted to mobile cloud computing. This work proposes a framework for global healthcare computing that combines mobile computing and cloud computing. This approach leads to integrating all of ...
On matrix-model approach to simplified Khovanov-Rozansky calculus
Morozov, A.; Morozov, And.; Popolitov, A.
2015-10-01
Wilson-loop averages in Chern-Simons theory (HOMFLY polynomials) can be evaluated in different ways - the most difficult, but most interesting of them is the hypercube calculus, the only one applicable to virtual knots and used also for categorification (higher-dimensional extension) of the theory. We continue the study of quantum dimensions, associated with hypercube vertices, in the drastically simplified version of this approach to knot polynomials. At q = 1 the problem is reformulated in terms of fat (ribbon) graphs, where Seifert cycles play the role of vertices. Ward identities in associated matrix model provide a set of recursions between classical dimensions. For q ≠ 1 most of these relations are broken (i.e. deformed in a still uncontrollable way), and only few are protected by Reidemeister invariance of Chern-Simons theory. Still they are helpful for systematic evaluation of entire series of quantum dimensions, including negative ones, which are relevant for virtual link diagrams. To illustrate the effectiveness of developed formalism we derive explicit expressions for the 2-cabled HOMFLY of virtual trefoil and virtual 3.2 knot, which involve respectively 12 and 14 intersections - far beyond any dreams with alternative methods. As a more conceptual application, we describe a relation between the genus of fat graph and Turaev genus of original link diagram, which is currently the most effective tool for the search of thin knots.
Lee, Won-Kang; Bae, Jung-Hee; Hu, Kyung-Seok; Kato, Takafumi; Kim, Seong-Taek
2017-03-01
The objective of this study was to simplify the anatomically safe and reproducible approach for BoNT injection and to generate a detailed topographic map of the important anatomical structures of the temporal region by dividing the temporalis into nine equally sized compartments. Nineteen sides of temporalis muscle were used. The topographies of the superficial temporal artery, middle temporal vein, temporalis tendon, and the temporalis muscle were evaluated. Also evaluated was the postural relations among the foregoing anatomical structures in the temporalis muscle, pivoted upon a total of nine compartments. The temporalis above the zygomatic arch exhibited an oblique quadrangular shape with rounded upper right and left corners. The distance between the anterior and posterior margins of the temporalis muscle was equal to the width of the temporalis rectangle, and the distance between the reference line and the superior temporalis margin was equal to its height. The mean ratio of width to height was 5:4. We recommend compartments Am, Mu, and Pm (coordinates of the rectangular outline) as areas in the temporal region for BoNT injection, because using these sites will avoid large blood vessels and tendons, thus improving the safety and reproducibility of the injection.
Computer Networks A Systems Approach
Peterson, Larry L
2011-01-01
This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur
Classification of methods of conducting computer forensic examinations using a graph theory approach
Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov
2016-06-01
A classification of methods of conducting computer forensic examinations based on a graph theory approach is proposed. Using this classification, it is possible to accelerate and simplify the search for such methods and to automate the process.
Computational approach to Riemann surfaces
Klein, Christian
2011-01-01
This volume offers a well-structured overview of existent computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition, examples for solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...
Fuzzy multiple linear regression: A computational approach
Juang, C. H.; Huang, X. H.; Fleming, J. W.
1992-01-01
This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.
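One way the 'computational' treatment of fuzzy input can be sketched is to carry symmetric triangular fuzzy numbers as (center, spread) pairs, fit the centers with conventional least squares, and propagate the spreads through the fitted coefficients. This is an illustrative reading of the approach, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_linear_regression(x_centers, x_spreads, y_centers):
    """Fuzzy simple linear regression via a 'computational' treatment.

    Inputs are symmetric triangular fuzzy numbers represented by their
    centers and spreads. Centers are fitted by ordinary least squares;
    output spreads are propagated as |slope| * x_spread (an assumption
    standing in for the paper's scheme).
    """
    X = np.column_stack([np.ones(len(x_centers)), x_centers])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y_centers, float), rcond=None)
    fitted_centers = X @ coef
    fitted_spreads = np.abs(coef[1]) * np.asarray(x_spreads, float)
    return coef, fitted_centers, fitted_spreads
```

The contrast with Bardossy-style symbolic approaches is that every step here is a standard numeric operation on the (center, spread) representation.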
Simplified approaches for the numerical simulation of welding processes with filler material
Carmignani, B.; Toselli, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy)
2001-07-01
Due to the very high computation times required by the methodologies developed during studies carried out at ENEA-Bologna on the numerical simulation of welds with filler material in steel pieces of high thickness (studies also presented at the 12th and 13th International ABAQUS Users' Conferences), new simplified methodologies have been proposed and applied to an experimental model of significant dimensions. (These studies are of interest in the nuclear field for the construction of the toroidal field coil case, TFCC, for the International Thermonuclear Experimental Reactor, the ITER machine.) In this paper these new methodologies are presented together with the results obtained, which have been compared, successfully, with those obtained using the previous numerical methodologies and with the corresponding experimental measurements. These new calculation techniques are being applied to the simulation of welds of pieces constituting a real component of the ITER TF coil case.
Simplified computational simulation of liquid metal behaviour in turbulent flow with heat transfer
Costa, E.B. da.
1992-09-01
The present work selected equations and empirical relationships from the available literature to develop a computer code that obtains the turbulent velocity and temperature profiles in liquid metal tube flow with heat generation. The computer code is applied to a standard problem and the results are considered satisfactory, at least from the viewpoint of qualitative behaviour. (author). 50 refs, 21 figs, 3 tabs
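The abstract does not name the specific empirical relationships used, but a minimal sketch of this class of calculation is the Skupinski correlation for liquid-metal tube flow with uniform wall heat flux, Nu = 4.82 + 0.0185 Pe^0.827; the Reynolds and Prandtl values below are illustrative, not from the paper.

```python
# Illustrative sketch, assuming the Skupinski liquid-metal correlation
# (Nu = 4.82 + 0.0185 * Pe^0.827, uniform wall heat flux in a tube).
def peclet(re, pr):
    """Peclet number Pe = Re * Pr."""
    return re * pr

def nusselt_skupinski(pe):
    """Skupinski et al. correlation for liquid metals in tube flow."""
    return 4.82 + 0.0185 * pe ** 0.827

# Example: sodium-like flow with Re = 1e5 and Pr = 0.005, so Pe = 500.
pe = peclet(1.0e5, 0.005)
nu = nusselt_skupinski(pe)
print(round(pe, 1), round(nu, 2))
```

Liquid metals have very low Prandtl numbers, which is why correlations in this regime are expressed in terms of the Peclet number rather than Re and Pr separately.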
Sforzini, R. H.
1972-01-01
An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards, including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time, so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN IV and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.
Simplifying the parallelization of scientific codes by a function-centric approach in Python
Nilsen, Jon K; Cai Xing; Langtangen, Hans Petter; Hoeyland, Bjoern
2010-01-01
The purpose of this paper is to show how existing scientific software can be parallelized using a separate thin layer of Python code where all parallelization-specific tasks are implemented. We provide specific examples of such a Python code layer, which can act as templates for parallelizing a wide set of serial scientific codes. The use of Python for parallelization is motivated by the fact that the language is well suited for reusing existing serial codes programmed in other languages. The extreme flexibility of Python with regard to handling functions makes it very easy to wrap up decomposed computational tasks of a serial scientific application as Python functions. Many parallelization-specific components can be implemented as generic Python functions, which may take as input those wrapped functions that perform concrete computational tasks. The overall programming effort needed by this parallelization approach is limited, and the resulting parallel Python scripts have a compact and clean structure. The usefulness of the parallelization approach is exemplified by three different classes of application in natural and social sciences.
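The function-centric pattern described above can be sketched in a few lines: the serial kernel is wrapped as a plain Python function, and a generic parallelization layer takes that function as input. The names and the thread-based executor here are illustrative, not the paper's actual code layer.

```python
# A minimal sketch of the function-centric parallelization pattern:
# a generic layer accepts any wrapped serial kernel as a function argument.
from concurrent.futures import ThreadPoolExecutor

def serial_kernel(chunk):
    """Stand-in for an existing serial computation (e.g. wrapped C/Fortran)."""
    return sum(x * x for x in chunk)

def parallel_map_reduce(func, chunks, max_workers=4):
    """Generic layer: apply a wrapped kernel to decomposed subtasks."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        partials = list(pool.map(func, chunks))
    return sum(partials)

data = list(range(8))
chunks = [data[i::4] for i in range(4)]   # simple task decomposition
total = parallel_map_reduce(serial_kernel, chunks)
print(total)   # same result as the serial sum of squares: 140
```

Because the parallel layer only sees a function and a list of subtasks, the same driver can be reused across very different serial applications, which is the reuse argument the paper makes.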
Savoca, Mark E.; Senay, Gabriel B.; Maupin, Molly A.; Kenny, Joan F.; Perry, Charles A.
2013-01-01
Remote-sensing technology and surface-energy-balance methods can provide accurate and repeatable estimates of actual evapotranspiration (ETa) when used in combination with local weather datasets over irrigated lands. Estimates of ETa may be used to provide a consistent, accurate, and efficient approach for estimating regional water withdrawals for irrigation and associated consumptive use (CU), especially in arid cropland areas that require supplemental water due to insufficient natural supplies from rainfall, soil moisture, or groundwater. ETa in these areas is considered equivalent to CU, and represents the part of applied irrigation water that is evaporated and/or transpired, and is not available for immediate reuse. A recent U.S. Geological Survey study demonstrated the application of the remote-sensing-based Simplified Surface Energy Balance (SSEB) model to estimate 10-year average ETa at 1-kilometer resolution on national and regional scales, and compared those ETa values to the U.S. Geological Survey’s National Water-Use Information Program’s 1995 county estimates of CU. The operational version of the operational SSEB (SSEBop) method is now used to construct monthly, county-level ETa maps of the conterminous United States for the years 2000, 2005, and 2010. The performance of the SSEBop was evaluated using eddy covariance flux tower datasets compiled from 2005 datasets, and the results showed a strong linear relationship in different land cover types across diverse ecosystems in the conterminous United States (correlation coefficient [r] ranging from 0.75 to 0.95). For example, r for woody savannas (0.75), grassland (0.75), forest (0.82), cropland (0.84), shrub land (0.89), and urban (0.95). A comparison of the remote-sensing SSEBop method for estimating ETa and the Hamon temperature method for estimating potential ET (ETp) also was conducted, using regressions of all available county averages of ETa for 2005 and 2010, and yielded correlations of r = 0
Extension of a simplified computer program for analysis of solid-propellant rocket motors
Sforzini, R. H.
1973-01-01
A research project to develop a computer program for the preliminary design and performance analysis of solid propellant rocket engines is discussed. The following capabilities are included as computer program options: (1) treatment of wagon wheel cross sectional propellant configurations alone or in combination with circular perforated grains, (2) calculation of ignition transients with the igniter treated as a small rocket engine, (3) representation of spherical circular perforated grain ends as an alternative to the conical end surface approximation used in the original program, and (4) graphical presentation of program results using a digital plotter.
Simulation of the space debris environment in LEO using a simplified approach
Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico
2017-01-01
Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach, so in each scenario different objects are fragmented and contribute to a different realization of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
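The source-and-sink idea with explicit Euler integration can be sketched for a single altitude shell; the rate constants below are invented for illustration and are not the IRAS model's parameters.

```python
# Illustrative single-shell source/sink sketch (not the IRAS implementation):
# yearly launches act as a source, atmospheric decay as a sink, and the
# population is advanced with explicit Euler steps.
def euler_debris(n0, launches_per_year, decay_rate, years, dt=1.0):
    """Integrate dN/dt = L - k*N with explicit Euler steps of size dt."""
    n = n0
    for _ in range(int(years / dt)):
        n += dt * (launches_per_year - decay_rate * n)
    return n

# The steady state is L/k = 80/0.02 = 4000 objects; Euler relaxes toward it.
n_final = euler_debris(n0=1000.0, launches_per_year=80.0,
                       decay_rate=0.02, years=100, dt=1.0)
print(round(n_final))
```

Replacing thousands of propagated objects with a handful of such binned rate equations is exactly where the paper's large cost reduction comes from.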
Computer Architecture A Quantitative Approach
Hennessy, John L
2011-01-01
The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im
Plansky, L.E.; Seitz, R.R.
1994-02-01
This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These instructions are intended to be used when the subroutines are incorporated into a larger computer code.
A simplified computing method of pile group to seismic loads using thin layer element
Masao, T.; Hama, I.
1995-01-01
In the calculation of pile groups, the thin layer method is said to give the correct solution for isotropic, homogeneous soil material in each layer; on the other hand, the procedure requires a huge computing time. The dynamic stiffness matrix of the thin layer method is obtained by inverting the flexibility matrix between pile-i and pile-j. This flexibility matrix is a full matrix whose size increases in proportion to the number of piles and thin layers, and the greater part of the run time is taken up by its inversion against point loading. We propose a method for decreasing the computing time by reducing the flexibility matrix to a banded matrix. (author)
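The run-time saving from band structure can be illustrated generically (this is not the authors' code): the Thomas algorithm solves a tridiagonal system, the narrowest banded case, in O(n) operations instead of the O(n^3) of a full inversion.

```python
# Generic illustration of why banded structure is cheap: Thomas algorithm
# for a tridiagonal system A x = d, with a = sub-, b = main, c = super-diagonal.
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n); a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminate the sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back-substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution [1,1,1].
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
                 [1.0, 0.0, 1.0])
print([round(v, 6) for v in x])
```

A band of width w generalizes this to O(n w^2) work, which is the kind of reduction the paper exploits for the pile-group flexibility matrix.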
FISPRO: a simplified computer program for general fission product formation and decay calculations
Jiacoletti, R.J.; Bailey, P.G.
1979-08-01
This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations
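The kind of formation-and-decay calculation FISPRO performs has a closed form for short chains: the Bateman solution. A hedged sketch for a parent/daughter pair follows; the decay constants and atom count are arbitrary illustration values.

```python
# Illustrative Bateman solution for a two-member chain A -> B -> (stable),
# the kind of closed-form step a fission-product code evaluates per time step.
import math

def bateman_parent(n0, lam1, t):
    """Parent atoms remaining after time t: N_A = N0 * exp(-lam1*t)."""
    return n0 * math.exp(-lam1 * t)

def bateman_daughter(n0, lam1, lam2, t):
    """Daughter grown in from a pure parent sample (N_B(0) = 0)."""
    return n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                        - math.exp(-lam2 * t))

n0, lam1, lam2, t = 1.0e6, 0.1, 0.5, 2.0   # arbitrary units
na = bateman_parent(n0, lam1, t)
nb = bateman_daughter(n0, lam1, lam2, t)
print(round(na), round(nb))
```

Chaining such steps over the user-supplied operational history, with fission yields feeding the head of each chain, reproduces the structure of the calculation the abstract describes.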
Simplified computational methods for elastic and elastic-plastic fracture problems
Atluri, Satya N.
1992-01-01
An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.
A simplified approach for evaluating secondary stresses in elevated temperature design
Becht, C.
1983-01-01
Control of secondary stresses is important for long-term reliability of components, particularly at elevated temperatures where substantial creep damage can occur and result in cracking. When secondary stresses are considered in the design of elevated temperature components, they are often addressed by the criteria contained in Nuclear Code Case N-47 for use with elastic or inelastic analysis. The elastic rules are very conservative as they bound a large range of complex phenomena; because of this conservatism, only components in relatively mild services can be designed in accordance with these rules. The inelastic rules, although more accurate, require complex and costly nonlinear analysis. Elevated temperature shakedown is a recognized phenomenon that has been considered in developing Code rules and simplified methods. This paper develops and examines the implications of using a criterion that specifically limits stresses to the shakedown regime. Creep, fatigue, and strain accumulation are considered. The effect of elastic follow-up on the conservatism of the criterion is quantified by means of a simplified method. The level of conservatism is found to fall between the elastic and inelastic rules of N-47 and, in fact, the incentives for performing complex inelastic analyses appear to be low except in the low cycle regime. The criterion has immediate applicability to non-code components such as vessel internals in the chemical, petroleum, and synfuels industries. It is suggested that such a criterion be considered in future code rule development
A Simplified Approach to Risk Assessment Based on System Dynamics: An Industrial Case Study.
Garbolino, Emmanuel; Chery, Jean-Pierre; Guarnieri, Franck
2016-01-01
Seveso plants are complex sociotechnical systems, which makes it appropriate to support any risk assessment with a model of the system. However, more often than not, this step is only partially addressed, simplified, or avoided in safety reports. At the same time, investigations have shown that the complexity of industrial systems is frequently a factor in accidents, due to interactions between their technical, human, and organizational dimensions. In order to handle both this complexity and changes in the system over time, this article proposes an original and simplified qualitative risk evaluation method based on the system dynamics theory developed by Forrester in the early 1960s. The methodology supports the development of a dynamic risk assessment framework dedicated to industrial activities. It consists of 10 complementary steps grouped into two main activities: system dynamics modeling of the sociotechnical system and risk analysis. This system dynamics risk analysis is applied to a case study of a chemical plant and provides a way to assess the technological and organizational components of safety. © 2016 Society for Risk Analysis.
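A stock-and-flow model in the Forrester style underlies the methodology described above. The sketch below is not the authors' plant model; it is a minimal invented example in which a "deviation" stock accumulates via an incident inflow and is drained by corrective action proportional to the stock.

```python
# Minimal Forrester-style stock-and-flow sketch (illustrative, not the
# paper's model): Euler-integrate d(stock)/dt = inflow - drain_fraction*stock.
def simulate_stock(inflow, drain_fraction, steps, dt=1.0, stock0=0.0):
    """Return the stock trajectory over the given number of Euler steps."""
    stock = stock0
    history = [stock]
    for _ in range(steps):
        stock += dt * (inflow - drain_fraction * stock)
        history.append(stock)
    return history

h = simulate_stock(inflow=5.0, drain_fraction=0.25, steps=40)
# The stock approaches its equilibrium, inflow / drain_fraction = 20.
print(round(h[-1], 2))
```

Coupling several such stocks (technical, human, organizational) is what lets a system-dynamics risk model capture the interactions between dimensions that the abstract identifies as accident factors.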
What is computation : An epistemic approach
Wiedermann, Jiří; van Leeuwen, Jan
2015-01-01
Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too
Integrative approaches to computational biomedicine
Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco
2013-01-01
The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.
Richards, Elizabeth H.; Schindel, Kay (City of Madison, WI); Bosiljevac, Tom; Dwyer, Stephen F.; Lindau, William (Lindau Companies, Inc., Hudson, WI); Harper, Alan (City of Madison, WI)
2011-12-01
Structural Considerations for Solar Installers provides a comprehensive outline of structural considerations associated with simplified solar installations and recommends a set of best practices installers can follow when assessing such considerations. Information in the manual comes from engineering and solar experts as well as case studies. The objectives of the manual are to ensure safety and structural durability for rooftop solar installations and to potentially accelerate the permitting process by identifying and remedying structural issues prior to installation. The purpose of this document is to provide tools and guidelines for installers to help ensure that residential photovoltaic (PV) power systems are properly specified and installed with respect to the continuing structural integrity of the building.
Brown, Tulanda
2003-01-01
At the Fernald Closure Project (FCP) near Cincinnati, Ohio, environmental restoration activities are supported by Documented Safety Analyses (DSAs) that combine the required project-specific Health and Safety Plans, Safety Basis Requirements (SBRs), and Process Requirements (PRs) into single Integrated Health and Safety Plans (I-HASPs). By isolating any remediation activities that deal with Enriched Restricted Materials, the SBRs and PRs assure that the hazard categories of former nuclear facilities undergoing remediation remain less than Nuclear. These integrated DSAs employ Integrated Safety Management methodology in support of simplified restoration and remediation activities that, so far, have resulted in the decontamination and demolition (D and D) of over 150 structures, including six major nuclear production plants. This paper presents the FCP method for maintaining safety basis documentation, using the D and D I-HASP as an example
Simplified approach to MR image quantification of the rheumatoid wrist: a pilot study
Kamishima, Tamotsu; Terae, Satoshi; Shirato, Hiroki; Tanimura, Kazuhide; Aoki, Yuko; Shimizu, Masato; Matsuhashi, Megumi; Fukae, Jun; Kosaka, Naoki; Kon, Yujiro
2011-01-01
To determine an optimal threshold in a simplified 3D-based volumetry of abnormal signals in rheumatoid wrists utilizing contrast and non-contrast MR data, and investigate the feasibility and reliability of this method. MR images of bilateral hands of 15 active rheumatoid patients were assessed before and 5 months after the initiation of tocilizumab infusion protocol. The volumes of abnormal signals were measured on STIR and post-contrast fat-suppressed T1-weighted images. Three-dimensional volume rendering of the images was used for segmentation of the wrist by an MR technologist and a radiologist. Volumetric data were obtained with variable thresholding (1, 1.25, 1.5, 1.75, and 2 times the muscle signal), and were compared to clinical data and semiquantitative MR scoring (RAMRIS) of the wrist. Intra- and interobserver variability and time needed for volumetry measurements were assessed. The volumetric data correlated favorably with clinical parameters almost throughout the pre-determined thresholds. Interval differences in volumetric data correlated favorably with those of RAMRIS when the threshold was set at more than 1.5 times the muscle signal. The repeatability index was lower than the average of the interval differences in volumetric data when the threshold was set at 1.5-1.75 for STIR data. Intra- and interobserver variability for volumetry was 0.79-0.84. The time required for volumetry was shorter than that for RAMRIS. These results suggest that a simplified MR volumetric data acquisition may provide gross estimates of disease activity when the threshold is set properly. Such estimation can be achieved quickly by non-imaging specialists and without contrast administration. (orig.)
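The thresholding step of the volumetry can be sketched directly from the abstract: voxels whose signal exceeds k times the muscle signal are counted as abnormal and converted to a volume. The intensity values and voxel size below are invented for illustration.

```python
# Hypothetical sketch of threshold-based volumetry: count voxels brighter
# than k times the mean muscle signal and convert the count to a volume.
def abnormal_volume(voxels, muscle_signal, k=1.5, voxel_mm3=0.5):
    """Return the volume (mm^3) of voxels with signal > k * muscle_signal."""
    threshold = k * muscle_signal
    return sum(1 for v in voxels if v > threshold) * voxel_mm3

signals = [80, 120, 190, 310, 260, 150, 400, 95]   # invented intensities
vol = abnormal_volume(signals, muscle_signal=160.0, k=1.5)
print(vol)   # threshold 240 -> voxels 310, 260, 400 -> 3 * 0.5 = 1.5 mm^3
```

The study's finding is essentially about choosing k: at k >= 1.5 the interval changes in this volume tracked the RAMRIS interval changes.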
Infinitesimal symmetries: a computational approach
Kersten, P.H.M.
1985-01-01
This thesis is concerned with computational aspects in the determination of infinitesimal symmetries and Lie-Baecklund transformations of differential equations. Moreover some problems are calculated explicitly. A brief introduction to some concepts in the theory of symmetries and Lie-Baecklund transformations, relevant for this thesis, is given. The mathematical formalism is shortly reviewed. The jet bundle formulation is chosen, in which, by its algebraic nature, objects can be described very precisely. Consequently it is appropriate for implementation. A number of procedures are discussed which make it possible to carry out these computations, which are very extensive in practice, with the help of a computer. The Lie algebras of infinitesimal symmetries of a number of differential equations in Mathematical Physics are established and some of their applications are discussed, i.e., Maxwell equations, nonlinear diffusion equation, nonlinear Schroedinger equation, nonlinear Dirac equations and self dual SU(2) Yang-Mills equations. Lie-Baecklund transformations of Burgers' equation, Classical Boussinesq equation and the Massive Thirring Model are determined. Furthermore, nonlocal Lie-Baecklund transformations of the last equation are derived. (orig.)
Computational approach in zeolite science
Pidko, E.A.; Santen, van R.A.; Chester, A.W.; Derouane, E.G.
2009-01-01
This chapter presents an overview of different computational methods and their application to various fields of zeolite chemistry. We will discuss static lattice methods based on interatomic potentials to predict zeolite structures and topologies, Monte Carlo simulations for the investigation of
Zargar, Homayoun; Krishnan, Jayram; Autorino, Riccardo; Akca, Oktay; Brandao, Luis Felipe; Laydner, Humberto; Samarasekera, Dinesh; Ko, Oliver; Haber, Georges-Pascal; Kaouk, Jihad H; Stein, Robert J
2014-10-01
Robotic technology is increasingly adopted in urologic surgery and a variety of techniques has been described for minimally invasive treatment of upper tract urothelial cancer (UTUC). To describe a simplified surgical technique of robot-assisted nephroureterectomy (RANU) and to report our single-center surgical outcomes. Patients with history of UTUC treated with this modality between April 2010 and August 2013 were included in the analysis. Institutional review board approval was obtained. Informed consent was signed by all patients. A simplified single-step RANU not requiring repositioning or robot redocking. Lymph node dissection was performed selectively. Descriptive analysis of patients' characteristics, perioperative outcomes, histopathology, and short-term follow-up data was performed. The analysis included 31 patients (mean age: 72.4±10.6 yr; mean body mass index: 26.6±5.1kg/m(2)). Twenty-six of 30 tumors (86%) were high grade. Mean tumor size was 3.1±1.8cm. Of the 31 patients, 13 (42%) had pT3 stage disease. One periureteric positive margin was noted in a patient with bulky T3 disease. The mean number of lymph nodes removed was 9.4 (standard deviation: 5.6; range: 3-21). Two of 14 patients (14%) had positive lymph nodes on final histology. No patients required a blood transfusion. Six patients experienced complications postoperatively, with only one being a high grade (Clavien 3b) complication. Median hospital stay was 5 d. Within the follow-up period, seven patients experienced bladder recurrences and four patients developed metastatic disease. Our RANU technique eliminates the need for patient repositioning or robot redocking. This technique can be safely reproduced, with surgical outcomes comparable to other established techniques. We describe a surgical technique using the da Vinci robot for a minimally invasive treatment of patients presenting with upper tract urothelial cancer. This technique can be safely implemented with good surgical outcomes
Computer Architecture A Quantitative Approach
Hennessy, John L
2007-01-01
The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis
Ruivo, C.R.; Vaz, D.C.
2015-01-01
Highlights: • The transient thermal behaviour of external multilayer walls of buildings is studied. • Reference results for four representative walls, obtained with a numerical model, are provided. • Shortcomings of approaches based on the Mackey-and-Wright method are identified. • Handling full-feature excitations with Fourier series decomposition improves accuracy. • A simpler, yet accurate, promising novel approach to predict heat gain is proposed. - Abstract: Nowadays, simulation tools are available for calculating the thermal loads of multiple rooms of buildings, for given inputs. However, due to inaccuracies or uncertainties in some of the input data (e.g., thermal properties, air infiltration flow rates, building occupancy), the evaluated thermal load may represent no more than just an estimate of the actual thermal load of the spaces. Accordingly, in certain practical situations, simplified methods may offer a more reasonable trade-off between effort and results accuracy than advanced software. Hence, despite the advances in computing power over the last decades, simplified methods for the evaluation of thermal loads are still of great interest nowadays, for both the practicing engineer and the graduating student, since these can be readily implemented or developed in common computational tools, like a spreadsheet. The method of Mackey and Wright (M&W) is a simplified method that, based on values of the decrement factor and time lag of a wall (or roof), estimates the instantaneous rate of heat transfer through its indoor surface. It assumes cyclic behaviour and shows good accuracy when the excitation and response have matching shapes, but it involves a non-negligible error otherwise, for example, in the case of walls of high thermal inertia. The aim of this study is to develop a simplified procedure that considerably improves the accuracy of the M&W method, particularly for excitations that noticeably depart from the sinusoidal shape, while not
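The M&W estimate discussed above can be sketched for a sinusoidal sol-air excitation; the wall properties (U-value, decrement factor lam, time lag phi) and temperatures below are illustrative values, not the paper's reference walls.

```python
# Illustrative M&W-style estimate: indoor-surface heat flux from a sinusoidal
# sol-air temperature cycle, using a decrement factor lam and time lag phi.
import math

TSA_MEAN = 30.0   # assumed daily-mean sol-air temperature (deg C)

def sol_air(t_hours, amp=10.0, peak_hour=15.0):
    """Idealized sinusoidal sol-air temperature over a 24 h cycle (deg C)."""
    return TSA_MEAN + amp * math.cos(2 * math.pi * (t_hours - peak_hour) / 24.0)

def mackey_wright_flux(t_hours, u=1.2, t_in=24.0, lam=0.4, phi=6.0):
    """q = U*[(Tsa_mean - Ti) + lam*(Tsa(t - phi) - Tsa_mean)], in W/m2."""
    return u * ((TSA_MEAN - t_in) + lam * (sol_air(t_hours - phi) - TSA_MEAN))

q = mackey_wright_flux(21.0)   # 21:00, lagged 6 h behind the 15:00 peak
print(round(q, 2))
```

The study's point is visible in this form: the method damps and shifts a single sinusoid, so excitations that depart from a sinusoidal shape (and high-inertia walls) need the Fourier decomposition or the improved procedure the abstract proposes.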
Learning and geometry computational approaches
Smith, Carl
1996-01-01
The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...
Kastler, Adrian; Onana, Yannick; Comte, Alexandre; Attyé, Arnaud; Lajoie, Jean-Louis; Kastler, Bruno
2015-08-01
To evaluate the efficacy of a simplified CT-guided greater occipital nerve (GON) infiltration approach in the management of occipital neuralgia (ON). Local IRB approval was obtained and written informed consent was waived. Thirty three patients suffering from severe refractory ON who underwent a total of 37 CT-guided GON infiltrations were included between 2012 and 2014. GON infiltration was performed at the first bend of the GON, between the obliquus capitis inferior and semispinalis capitis muscles, with local anaesthetics and cortivazol. Pain was evaluated via VAS scores. Clinical success was defined by pain relief greater than or equal to 50 % lasting for at least 3 months. The pre-procedure mean pain score was 8/10. Patients suffered from left GON neuralgia in 13 cases, right GON neuralgia in 16 cases and bilateral GON neuralgia in 4 cases. The clinical success rate was 86 %. In case of clinical success, the mean pain relief duration following the procedure was 9.16 months. Simplified CT-guided infiltration appears to be effective in managing refractory ON. With this technique, infiltration of the GON appears to be faster, technically easier and, therefore, safer compared with other previously described techniques. • Occipital neuralgia is a very painful and debilitating condition • GON infiltrations have been successful in the treatment of occipital neuralgia • This simplified technique presents a high efficacy rate with long-lasting pain relief • This infiltration technique does not require contrast media injection for pre-planning • GON infiltration at the first bend appears easier and safer.
Quantum Computing: a Quantum Group Approach
Wang, Zhenghan
2013-01-01
There is compelling theoretical evidence that quantum physics will change the face of information science. Exciting progress has been made during the last two decades towards the building of a large scale quantum computer. A quantum group approach stands out as a promising route to this holy grail, and provides hope that we may have quantum computers in our future.
Cloud computing methods and practical approaches
Mahmood, Zaigham
2013-01-01
This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the `dislocation-based strain-hardening method' was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the `rule of mixtures' to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
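The two-step scheme described above can be sketched in a few lines. The hardening law below is a generic Hollomon-type stand-in for the paper's dislocation-based strain-hardening method, and every parameter value is an illustrative assumption, not a fitted value from the study:

```python
def hollomon(strain, K, n):
    """Hollomon-type hardening law, sigma = K * eps**n.
    A stand-in for the dislocation-based strain-hardening method."""
    return K * strain ** n

def rule_of_mixtures(strain_points, f_martensite, ferrite_params, martensite_params):
    """Step 2: combine the two single-phase flow curves, weighted by the
    martensite volume fraction, into the composite dual-phase curve."""
    curve = []
    for eps in strain_points:
        sigma_f = hollomon(eps, *ferrite_params)      # ferrite flow stress
        sigma_m = hollomon(eps, *martensite_params)   # martensite flow stress
        curve.append(f_martensite * sigma_m + (1.0 - f_martensite) * sigma_f)
    return curve

# Illustrative parameters only (K in MPa): soft ferrite, hard martensite.
strain = [0.01 * i for i in range(1, 16)]  # true strain 0.01 .. 0.15
sigma_dp = rule_of_mixtures(strain, 0.3, (800.0, 0.25), (2200.0, 0.10))
```

By construction the composite curve lies between the two phase curves and inherits the hardening of both, which is the essential behaviour the two-step approach exploits.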
A simplified approach to control system specification and design using domain modelling and mapping
Ludgate, G.A.
1992-01-01
Recent developments in the field of accelerator-domain and computer-domain modelling have led to a better understanding of the 'art' of control system specification and design. It now appears possible to 'compile' a control system specification to produce the architectural design. The information required by the 'compiler' is discussed and one hardware optimization algorithm is presented. The desired characteristics of the hardware and software components of a distributed control system architecture are discussed, along with the shortcomings of some commercial products. (author)
Cognitive Approaches for Medicine in Cloud Computing.
Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia
2018-03-03
This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning-based approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing are aimed at supporting the processes of protecting data against unauthorised takeover, and they serve to enhance data management processes. The accomplishment of the proposed tasks will be the definition of algorithms for the execution of meaning-based data interpretation processes in safe Cloud Computing. • We propose cognitive methods for data description. • We propose techniques for securing data in Cloud Computing. • The application of cognitive approaches to medicine is described.
VEGF-A mRNA measurement in meningiomas using a new simplified approach
Dyrbye, Henrik; Nassehi, Damoun; Sørensen, Lars Peter
2016-01-01
of mRNA-concentration, they were expected to be comparable. The aim of the present study was to compare Lumistar to the traditional RT-qPCR approach in a routine laboratory setting, where there is emphasis on rapid analysis response. Meningioma (n = 10) and control brain tissue (n = 5) samples were...
A simplified approach for the simulation of water-in-oil emulsions in gravity separators
Lakehal, D.; Narayanan, C. [ASCOMP GmbH, Zurich (Switzerland); Vilagines, R.; Akhras, A.R. [Saudi Aramco, Dhahran (Saudi Arabia). Research and Development Center
2009-07-01
A new method of simulating 3-phase flow separation processes in a crude oil product was presented. The aim of the study was to increase the liquid capacity of the vessels and develop methods of testing variable flow entry procedures. The simulated system was based on gravity separation. Oil well streams were injected into large tanks where gas, oil and water were separated under the action of inertia and gravity. An interface tracking technique was combined with a Euler-Euler model developed as part of a computational fluid dynamics (CFD) program. Emulsion physics were modelled by interface tracking between the gas and oil-in-water liquid mixture. Additional scalar transport equations were solved in order to account for the diffusive process between the oil and water. Various settling velocity models were used to consider the settling of the dispersed water phase in oil. Changes in viscosity and non-Newtonian emulsion behaviour were also considered. The study showed that the interface tracking technique accurately predicted flow when combined with an emulsion model designed to account for the settling of water in the oil phase. Further research is now being conducted to validate computational results against in situ measurements. 13 refs., 1 tab., 8 figs.
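As a minimal illustration of the settling-velocity models mentioned above, the terminal velocity of a dispersed water droplet in oil can be estimated in the Stokes regime. All fluid properties below are illustrative assumptions, not values from the study:

```python
def stokes_settling_velocity(d_droplet, rho_water, rho_oil, mu_oil, g=9.81):
    """Terminal settling velocity (m/s) of a water droplet in oil,
    Stokes regime: v = g * d^2 * (rho_d - rho_c) / (18 * mu_c).
    A minimal stand-in for the settling-velocity models in the paper."""
    return g * d_droplet ** 2 * (rho_water - rho_oil) / (18.0 * mu_oil)

# Illustrative properties: 100-micron droplet, light crude, 50 mPa.s viscosity.
v = stokes_settling_velocity(d_droplet=100e-6, rho_water=1000.0,
                             rho_oil=850.0, mu_oil=0.05)
```

The quadratic dependence on droplet diameter is why emulsion droplet size, rather than bulk density difference alone, dominates separator sizing.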
Toward exascale computing through neuromorphic approaches.
James, Conrad D.
2010-09-01
While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.
Pisani, Antonio; Riccio, Eleonora; Bellizzi, Vincenzo; Caputo, Donatella Luciana; Mozzillo, Giusi; Amato, Marco; Andreucci, Michele; Cianciaruso, Bruno; Sabbatini, Massimo
2016-06-01
The beneficial effects of dietary restriction of proteins in chronic kidney disease are widely recognized; however, poor compliance with prescribed low-protein diets (LPD) may limit their effectiveness. To help patients adhere to the dietary prescriptions, interventions such as education programmes and dietary counselling are critical, but it is also important to develop simple and attractive approaches to the LPD, especially when dietitians are not available. Therefore, we elaborated a simplified and easy-to-manage dietary approach consisting of 6 tips (6-tip diet, 6-TD) which could replace the standard, non-individualized LPD in Nephrology Units where dietary counselling is not available; hence, our working hypothesis was to evaluate the effects of such a diet vs a standard moderately protein-restricted diet on metabolic parameters and patients' adherence. In this randomized trial, 57 CKD patients stage 3b-5 were randomly assigned (1:1) to receive the 6-TD (Group 6-TD) or an LPD containing 0.8 g/kg/day of proteins (Group LPD) for 6 months. The primary endpoint was to evaluate the effects of the two different diets on the main "metabolic" parameters and on patients' adherence (registration number NCT01865526). Both dietary regimens were associated with a progressive reduction in protein intake and urinary urea excretion compared to baseline, although the decrease was more pronounced in Group 6-TD. Effects on serum levels of urea nitrogen and urinary phosphate excretion were greater in Group 6-TD. Plasma levels of phosphate, bicarbonate and PTH, and urinary NaCl excretion remained stable in both groups throughout the study. 44 % of LPD patients were adherent to the dietary prescription vs 70 % of Group 6-TD. A simplified diet, consisting of 6 clear points easily managed by CKD patients, produced beneficial effects both on the metabolic profile of renal disease and on patients' adherence to the dietary plan, when compared to a standard LPD.
A simplified approach for ratcheting analysis in structures with elastic follow-up
Berton, M.N.; Cabrillat, M.T.
1991-01-01
In the framework of an elastic analysis, the RCC-MR design code uses the concept of the efficiency diagram to assess the behaviour of a structure with respect to ratcheting. This diagram was obtained from a large body of experimental results and makes it possible to cover many reactor situations. However, this approach requires classifying stresses as primary or secondary, and in a few cases, in particular for structures with significant elastic follow-up, this classification is not obvious. After recalling the definition of elastic follow-up and a few considerations on how to evaluate it, an approach is proposed to take it into account in an elastic analysis verifying the avoidance of ratcheting. An experimental program has been developed to study this interaction between elastic follow-up and ratcheting. The first results are presented together with interpretations based on the proposed method. (author)
Pope, R.B.; Shappert, L.B.; Michelhaugh, R.D.; Boyle, R.W.; Cook, J.C.
1998-02-01
The US Department of Transportation (DOT) and the US Nuclear Regulatory Commission (NRC) have jointly prepared a comprehensive set of draft guidance for consignors and inspectors to use when applying the newly imposed regulatory requirements for low specific activity (LSA) material and surface contaminated objects (SCOs). The guidance is being developed to facilitate compliance with the new LSA material and SCO requirements, not to impose additional requirements. These new requirements represent, in some areas, significant departures from the manner in which packaging and transportation of these materials and objects were previously controlled. On occasion, it may be appropriate to use conservative approaches to demonstrate compliance with some of the requirements, ensuring that personnel are not exposed to radiation at unnecessary levels, so that exposures are kept as low as reasonably achievable (ALARA). In the draft guidance, one such approach would assist consignors preparing a shipment of a large number of SCOs in demonstrating compliance without unnecessarily exposing personnel. In applying this approach, users need to demonstrate that four conditions are met. These four conditions are used to categorize non-activated, contaminated objects as SCO-2. It is expected that, by applying this approach, it will be possible to categorize a large number of small contaminated objects as SCO-2 without the need for detailed, quantitative measurements of fixed, accessible contamination, or of total (fixed and non-fixed) contamination on inaccessible surfaces. The method, which is based upon reasoned argument coupled with limited measurements and the application of a sum-of-fractions rule, is described, and examples of its use are provided.
Beresford, N.A.; Wood, M.D.
2014-01-01
A major source of uncertainty in the estimation of radiation dose to wildlife is the prediction of internal radionuclide activity concentrations. Allometric (mass-dependent) relationships describing the biological half-life (T1/2b) of radionuclides in organisms can be used to predict organism activity concentrations. The establishment of allometric expressions requires experimental data which are often lacking. An approach to predict T1/2b in homeothermic vertebrates has recently been proposed. In this paper we have adapted this approach to be applicable to reptiles. For Cs, Ra and Sr, over a mass range of 0.02–1.5 kg, the resulting predictions were generally within a factor of 6 of reported values, demonstrating that the approach can be used when measured T1/2b data are lacking. However, the effect of mass on reptilian radionuclide T1/2b is minimal. If sufficient measured data are available for a given radionuclide then it is likely that these would give a reasonable estimate of T1/2b in any reptile species. - Highlights: • An allometric approach to predict radionuclide T1/2b values in reptiles is derived. • Predictions are generally within a factor of six of measured values. • Radionuclide biological half-life is in effect mass independent
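The allometric prediction itself is a one-line power law, T1/2b = a * M^b. The sketch below uses hypothetical coefficients (a and b are not the paper's fitted values); setting b close to zero reproduces the finding that mass has little effect on reptilian half-lives:

```python
def biological_half_life(mass_kg, a, b):
    """Allometric prediction T1/2b = a * M**b (days).
    Coefficients a and b are illustrative, not the paper's fitted values."""
    return a * mass_kg ** b

# With b near zero, predictions barely change across the 0.02-1.5 kg range,
# mirroring the near mass-independence reported for reptiles.
t_small = biological_half_life(0.02, a=20.0, b=0.05)
t_large = biological_half_life(1.5, a=20.0, b=0.05)
```

For a strongly mass-dependent radionuclide one would instead expect an exponent closer to the classic Kleiber-style 0.25, and the same one-liner applies.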
Bury, Yannick; Lucas, Matthieu; Bonnaud, Cyril; Joly, Laurent; ISAE Team; Airbus Team
2014-11-01
We study numerically and experimentally the vortices that develop past a model geometry of a wing equipped with a pylon-mounted engine at low-speed/moderate-incidence flight conditions. For such a configuration, the presence of the powerplant installation under the wing initiates a complex, unsteady vortical flow field at the nacelle/pylon/wing junctions. Its interaction with the upper wing boundary layer causes a drop in aircraft performance. In order to decipher the underlying physics, this study is initially conducted on a simplified geometry at a Reynolds number of 200,000, based on the wing chord and on the freestream velocity. Two configurations of angle of attack and side-slip angle are investigated. This work relies on unsteady Reynolds-Averaged Navier-Stokes computations, oil flow visualizations and stereoscopic Particle Image Velocimetry measurements. The resulting vortex dynamics is described in terms of vortex core position, intensity, size and turbulent intensity thanks to a vortex tracking approach. In addition, the analysis of the velocity flow fields obtained from PIV highlights the influence of the longitudinal vortex initiated at the pylon/wing junction on the separation process of the boundary layer near the upper wing leading edge.
Computational fluid dynamics a practical approach
Tu, Jiyuan; Liu, Chaoqun
2018-01-01
Computational Fluid Dynamics: A Practical Approach, Third Edition, is an introduction to CFD fundamentals and commercial CFD software to solve engineering problems. The book is designed for a wide variety of engineering students new to CFD, and for practicing engineers learning CFD for the first time. Combining an appropriate level of mathematical background, worked examples, computer screen shots, and step-by-step processes, this book walks the reader through modeling and computing, as well as interpreting CFD results. This new edition has been updated throughout, with new content and improved figures, examples and problems.
Computational neuropharmacology: dynamical approaches in drug discovery.
Aradi, Ildiko; Erdi, Péter
2006-05-01
Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.
De Rosa, Mattia; Bianco, Vincenzo; Scarpa, Federico; Tagliafico, Luca A.
2014-01-01
Highlights: • A dynamic model to estimate the energy performance of buildings is presented. • The model is validated against leading software packages, TRNSYS and Energy Plus. • Modified degree days are introduced to account for solar irradiation effects. - Abstract: Degree days represent a versatile climatic indicator which is commonly used in building energy performance analysis. In this context, the present paper proposes a simple dynamic model to simulate heating/cooling energy consumption in buildings. The model consists of several transient energy balance equations for external walls and internal air according to a lumped-capacitance approach and it has been implemented utilizing the Matlab/Simulink® platform. Results are validated by comparison to the outcomes of leading software packages, TRNSYS and Energy Plus. By using the above mentioned model, energy consumption for heating/cooling is analyzed in different locations, showing that for low degree days the inertia effect assumes a paramount importance, affecting the common linear behavior of the building consumption against the standard degree days, especially for cooling energy demand. Cooling energy demand at low cooling degree days (CDDs) is deeply analyzed, highlighting that in this situation other factors, such as solar irradiation, have an important role. To take into account these effects, a correction to CDD is proposed, demonstrating that by considering all the contributions the linear relationship between energy consumption and degree days is maintained
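Degree days, the climatic indicator at the core of the analysis above, are straightforward to compute from daily mean temperatures. A minimal sketch, using the common 18 °C base temperature as an assumption (the paper's locations and corrected CDD definition are not reproduced here):

```python
def degree_days(daily_mean_temps, base_temp=18.0, mode="heating"):
    """Sum of daily deviations from a base temperature.
    Heating degree days accumulate when days are colder than the base;
    cooling degree days accumulate when days are warmer."""
    if mode == "heating":
        return sum(max(base_temp - t, 0.0) for t in daily_mean_temps)
    return sum(max(t - base_temp, 0.0) for t in daily_mean_temps)

temps = [2.0, 5.0, 10.0, 19.0, 25.0]          # illustrative daily means, deg C
hdd = degree_days(temps, mode="heating")      # 16 + 13 + 8 + 0 + 0 = 37
cdd = degree_days(temps, mode="cooling")      # 0 + 0 + 0 + 1 + 7 = 8
```

The paper's point is precisely that a plain CDD sum like this loses its linear relationship with cooling consumption at low CDD, which motivates the solar-irradiation correction it proposes.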
Beresford, N.A. [Lancaster Environment Centre, NERC Centre for Ecology and Hydrology, Lancaster (United Kingdom); Vives i Batlle, J. [Belgian Nuclear Research Centre, Mol (Belgium)
2013-11-15
The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of organisms other than man. Allometry presents a method of addressing the lack of empirical data on radionuclide transfer and metabolism for the many radionuclide-species combinations which may need to be considered. However, sufficient data across a range of species with different masses are required to establish allometric relationships, and these are not always available. Here, an alternative allometric approach to predict the biological half-life of radionuclides in homoeothermic vertebrates, which does not require such data, is derived. Biological half-life values are predicted for four radionuclides and compared to available data for a range of species. All predictions were within a factor of five of the observed values when the model was parameterised appropriately for the feeding strategy of each species. This is an encouraging level of agreement given that the allometric models are intended to provide broad approximations rather than exact values. However, the reasons why some radionuclides deviate from what would be anticipated from Kleiber's law need to be determined to allow a more complete exploitation of the potential of allometric extrapolation within radioecological models. (orig.)
Mohr Brigitte
2003-01-01
Abstract Background The analysis of complex cytogenetic databases of distinct leukaemia entities may help to detect rare recurring chromosome aberrations, minimal common regions of gains and losses, and also hot spots of genomic rearrangements. The patterns of the karyotype alterations may provide insights into the genetic pathways of disease progression. Results We developed a simplified computer-readable cytogenetic notation (SCCN) by which chromosome findings are normalised at a resolution of 400 bands. Lost or gained chromosomes or chromosome segments are specified in detail, and ranges of chromosome breakpoint assignments are recorded. Software modules were written to summarise the recorded chromosome changes with regard to the respective chromosome involvement. To assess the degree of karyotype alterations, the ploidy levels and the numbers of numerical and structural changes were recorded separately, and summarised in a complex karyotype aberration score (CKAS). The SCCN and CKAS were used to analyse the extent and the spectrum of additional chromosome aberrations in 94 patients with Philadelphia-chromosome-positive (Ph-positive) acute lymphoblastic leukaemia (ALL) and secondary chromosome anomalies. Dosage changes of chromosomal material represented 92.1% of all additional events. Recurring regions of chromosome losses were identified. Structural rearrangements affecting (peri)centromeric chromosome regions were recorded in 24.6% of the cases. Conclusions SCCN and CKAS provide unifying elements between karyotypes and computer-processable data formats. They proved to be useful in the investigation of additional chromosome aberrations in Ph-positive ALL, and may represent a step towards full automation of the analysis of large and complex karyotype databases.
Tomić Miroljub V.
2008-01-01
In this paper a simplified procedure for the analysis of an internal combustion engine in-cylinder pressure record is presented. The method is very easy to program and provides quick evaluation of the gas temperature and the rate of combustion. It is based on the consideration proposed by Hohenberg and Killman, but enhances the approach by including the rate of heat transferred to the walls, which was omitted in the original approach. It enables the evaluation of the complete rate of heat released by combustion (often designated as “gross heat release rate” or “fuel chemical energy release rate”), not only the rate of heat transferred to the gas (which is often designated as “net heat release rate”). The accuracy of the method has also been analyzed, and it is shown that the errors caused by the simplifications in the model are very small, particularly if the crank angle step is also small. Several practical applications to recorded pressure diagrams taken from both spark-ignition and compression-ignition engines are presented as well.
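A single-zone first-law heat release balance of the kind described above can be sketched as follows; adding the wall heat transfer term converts the net release into the gross release, which is the enhancement the procedure introduces. The numerical values are illustrative, not taken from the paper:

```python
def gross_heat_release_rate(p, V, dp, dV, dQ_wall, gamma=1.35):
    """Single-zone first-law heat release over one crank-angle step (J):
        dQ_gross = gamma/(gamma-1) * p*dV + 1/(gamma-1) * V*dp + dQ_wall
    dp and dV are finite differences of pressure and cylinder volume over
    the step; dQ_wall is the heat transferred to the walls over the same
    step, the term the simplified procedure adds back to the net release."""
    dQ_net = gamma / (gamma - 1.0) * p * dV + 1.0 / (gamma - 1.0) * V * dp
    return dQ_net + dQ_wall

# Illustrative mid-combustion values: p [Pa], V [m^3], one-step differences.
dq = gross_heat_release_rate(p=5.0e6, V=5.0e-4, dp=1.0e5, dV=1.0e-6,
                             dQ_wall=2.0)
```

In a real analysis gamma would itself be updated from the estimated gas temperature at each step; a constant value keeps the sketch minimal.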
Computer networking a top-down approach
Kurose, James
2017-01-01
Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.
Hada, M.; Rhone, J.; Beitman, A.; Saganti, P.; Plante, I.; Ponomarev, A.; Slaba, T.; Patel, Z.
2018-01-01
The yield of chromosomal aberrations has been shown to increase in the lymphocytes of astronauts after long-duration missions of several months in space. Chromosome exchanges, especially translocations, are positively correlated with many cancers and are therefore a potential biomarker of cancer risk associated with radiation exposure. Although extensive studies have been carried out on the induction of chromosomal aberrations by low- and high-LET radiation in human lymphocytes, fibroblasts, and epithelial cells exposed in vitro, there is a lack of data on chromosome aberrations induced by low dose-rate chronic exposure and mixed field beams such as those expected in space. Chromosome aberration studies at NSRL will provide the biological validation needed to extend the computational models over a broader range of experimental conditions (more complicated mixed fields leading up to the galactic cosmic ray (GCR) simulator), helping to reduce uncertainties in radiation quality effects and dose-rate dependence in cancer risk models. These models can then be used to answer some of the open questions regarding requirements for a full GCR reference field, including particle type and number, energy, dose rate, and delivery order. In this study, we designed a simplified mixed field beam with a combination of proton, helium, oxygen, and iron ions with shielding, or proton, helium, oxygen, and titanium without shielding. Human fibroblast cells were irradiated with these mixed field beams as well as with each single beam at acute and chronic dose rates, and chromosome aberrations (CA) were measured with 3-color fluorescence in situ hybridization (FISH) chromosome painting methods. The frequency and types of CA induced at acute and chronic dose rates with single and mixed field beams will be discussed. A computational chromosome and radiation-induced DNA damage model, BDSTRACKS (Biological Damage by Stochastic Tracks), was updated to simulate various types of CA induced by
Hybrid soft computing approaches research and applications
Dutta, Paramartha; Chakraborty, Susanta
2016-01-01
The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis, (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.
Kansal, Rohit; Talwar, Sangeeta; Yadav, Seema; Chaudhary, Sarika; Nawal, Ruchika
2014-01-01
The preparation of the root canal system is essential for a successful outcome in root canal treatment. The development of rotary nickel titanium instruments is considered to be an important innovation in the field of endodontics. During few last years, several new instrument systems have been introduced but the quest for simplifying the endodontic instrumentation sequence has been ongoing for almost 20 years, resulting in more than 70 different engine-driven endodontic instrumentation system...
Kijanka, P; Radecki, R; Packo, P; Staszewski, W J; Uhl, T
2013-01-01
Temperature has a significant effect on Lamb wave propagation. It is important to compensate for this effect when the method is considered for structural damage detection. The paper explores a newly proposed, very efficient numerical simulation tool for Lamb wave propagation modelling in aluminum plates exposed to temperature changes. A local interaction approach implemented with a parallel computing architecture and graphics cards is used for these numerical simulations. The numerical results are compared with the experimental data. The results demonstrate that the proposed approach could be used efficiently to produce a large database required for the development of various temperature compensation procedures in structural health monitoring applications. (paper)
Verma, Mansi; Lal, Devi; Saxena, Anjali; Anand, Shailly; Kaur, Jasvinder; Kaur, Jaspreet; Lal, Rup
2013-12-01
Actinobacteria are known for their diverse metabolism and physiology. Some are dreadful human pathogens whereas others constitute the natural flora of the human gut. Therefore, the understanding of metabolic pathways is a key feature for targeting the pathogenic bacteria without disturbing the symbiotic ones. A big challenge faced today is multiple drug resistance in Mycobacterium and other pathogens that utilize alternative fluxes/effluxes. With the availability of genome sequences, it is now feasible to conduct comparative in silico analyses. Here we present a simplified approach to compare metabolic pathways so that species-specific enzymes may be traced and engineered for future therapeutics. The analyses of four key carbohydrate metabolic pathways, i.e., glycolysis, pyruvate metabolism, the tricarboxylic acid (TCA) cycle and the pentose phosphate pathway, suggest the presence of alternative fluxes. It was found that the upper pathway of glycolysis was highly variable in the actinobacterial genomes whereas the lower glycolytic pathway was highly conserved. Likewise, the pentose phosphate pathway was well conserved, in contrast to the TCA cycle, which was found to be incomplete in the majority of actinobacteria. The clustering based on the presence and absence of genes of these metabolic pathways clearly revealed that members of different genera shared identical pathways and, therefore, provided an easy method to identify the metabolic similarities/differences between pathogenic and symbiotic organisms. The analyses could identify isoenzymes and some key enzymes that were found to be missing in some pathogenic actinobacteria. The present work defines a simple approach to explore the effluxes in four metabolic pathways within the phylum Actinobacteria. The analysis clearly reflects that actinobacteria exhibit diverse routes for metabolizing substrates. The pathway comparison can help in finding enzymes that can be used as drug targets for pathogens without affecting symbiotic organisms.
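The clustering step rests on comparing gene presence/absence between organisms. A minimal sketch using Jaccard similarity on hypothetical gene lists (the gene names below are illustrative placeholders, not data from the study); genes present in the pathogen but absent in the symbiont are the natural drug-target candidates:

```python
def jaccard(genes_a, genes_b):
    """Similarity of two organisms' pathway gene complements:
    shared genes divided by all genes seen in either organism."""
    a, b = set(genes_a), set(genes_b)
    return len(a & b) / len(a | b)

# Hypothetical glycolysis gene complements, for illustration only.
pathogen = {"pgi", "pfkA", "fba", "gapA", "pyk"}
symbiont = {"pgi", "fba", "gapA", "pyk", "zwf"}

similarity = jaccard(pathogen, symbiont)   # 4 shared / 6 total genes
candidate_targets = pathogen - symbiont    # enzymes unique to the pathogen
```

A full analysis would build the pairwise similarity matrix across all genomes and feed it to a hierarchical clustering routine; the set difference is the part that flags species-specific enzymes.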
Advanced computational approaches to biomedical engineering
Saha, Punam K; Basu, Subhadip
2014-01-01
There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig
Computational Approaches to Nucleic Acid Origami.
Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo
2015-10-12
Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami, with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.
Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.
2015-12-01
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
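The pool-based accounting that PInc-PanTher performs can be sketched in a few lines. This is a minimal illustration of a 3-pool, first-order decomposition with Q10 temperature scaling; the pool fractions, base rates, and Q10 value below are invented placeholders, not the horizon-specific parameters fitted to the incubation data in the paper.

```python
import math

def carbon_released(c_stock, soil_temp_c, years,
                    fractions=(0.02, 0.18, 0.80),   # fast, slow, passive pools
                    base_rates=(0.5, 0.05, 0.001),  # 1/yr at the reference temp
                    q10=2.5, ref_temp_c=5.0):
    """Carbon lost from a thawed stock held for `years` at a fixed soil
    temperature: each pool decays first-order, with its rate scaled by
    a Q10 temperature response."""
    released = 0.0
    for frac, k_ref in zip(fractions, base_rates):
        k = k_ref * q10 ** ((soil_temp_c - ref_temp_c) / 10.0)  # Q10 scaling
        released += c_stock * frac * (1.0 - math.exp(-k * years))
    return released
```

Frozen stocks correspond to the `years = 0` limit (no loss); warming raises every pool's rate through the Q10 factor.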
Introducing Computational Approaches in Intermediate Mechanics
Cook, David M.
2006-12-01
In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large-amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment-of-inertia tensors, calculating gravitational potentials for various sources); finding eigenvalues and eigenvectors of matrices (diagonalizing the moment-of-inertia tensor, finding principal axes); and generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members, as appropriate, to assign computer-based exercises in subsequent courses. Students are also encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
Interacting electrons theory and computational approaches
Martin, Richard M; Ceperley, David M
2016-01-01
Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.
Computational approaches to analogical reasoning current trends
Richard, Gilles
2014-01-01
Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it was studied early in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends in computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...
Senay, G.B.; Budde, Michael; Verdin, J.P.; Melesse, Assefa M.
2007-01-01
Accurate crop performance monitoring and production estimation are critical for timely assessment of the food balance of several countries in the world. Since 2001, the Famine Early Warning Systems Network (FEWS NET) has been monitoring crop performance and relative production using satellite-derived data and simulation models in Africa, Central America, and Afghanistan where ground-based monitoring is limited because of a scarcity of weather stations. The commonly used crop monitoring models are based on a crop water-balance algorithm with inputs from satellite-derived rainfall estimates. These models are useful to monitor rainfed agriculture, but they are ineffective for irrigated areas. This study focused on Afghanistan, where over 80 percent of agricultural production comes from irrigated lands. We developed and implemented a Simplified Surface Energy Balance (SSEB) model to monitor and assess the performance of irrigated agriculture in Afghanistan using a combination of 1-km thermal data and 250m Normalized Difference Vegetation Index (NDVI) data, both from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. We estimated seasonal actual evapotranspiration (ETa) over a period of six years (2000-2005) for two major irrigated river basins in Afghanistan, the Kabul and the Helmand, by analyzing up to 19 cloud-free thermal and NDVI images from each year. These seasonal ETa estimates were used as relative indicators of year-to-year production magnitude differences. The temporal water-use pattern of the two irrigated basins was indicative of the cropping patterns specific to each region. Our results were comparable to field reports and to estimates based on watershed-wide crop water-balance model results. For example, both methods found that the 2003 seasonal ETa was the highest of all six years. The method also captured water management scenarios where a unique year-to-year variability was identified in addition to water-use differences between
Schabel, Christoph; Horger, Marius; Kum, Sara [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany); Weisel, Katja [Department of Internal Medicine II – Hematology & Oncology, Eberhard-Karls-University Tuebingen, Otfried-Müller-Str. 5, 72076 Tuebingen (Germany); Fritz, Jan [Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, 600 N Wolfe St., Baltimore, MD 21287 (United States); Ioanoviciu, Sorin D. [Department of Internal Medicine, Clinical Municipal Hospital Timisoara, Gheorghe Dima Str. 5, 300079 Timisoara (Romania); Bier, Georg, E-mail: georg.bier@med.uni-tuebingen.de [Department of Neuroradiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany)
2016-12-15
Highlights: • A simplified method for response monitoring of multiple myeloma is proposed. • Medullary bone lesions of all limbs were included and analysed. • Diameters of ≥2 medullary bone lesions are sufficient for therapy monitoring. - Abstract: Introduction: Multiple myeloma is a malignant hematological disorder of mature B-cell lymphocytes originating in the bone marrow. While therapy monitoring is still mainly based on laboratory biomarkers, the additional use of imaging has been advocated because of inaccuracies of serological biomarkers or in asecretory myelomas. Non-enhanced CT and MRI have similar sensitivities for lesions in yellow-marrow-rich bone marrow cavities, with a favourable risk and cost-effectiveness profile for CT. Nevertheless, these methods are still limited by the frequently high number of medullary lesions and the time required for proper evaluation. Objective: To establish simplified response criteria by correlating size and CT attenuation changes of medullary multiple myeloma lesions in the appendicular skeleton with the course of lytic bone lesions in the entire skeleton, and to evaluate these criteria against established hematological myeloma-specific parameters for the prediction of treatment response to bortezomib or lenalidomide. Materials and methods: Non-enhanced reduced-dose whole-body CT examinations of 78 consecutive patients (43 male, 35 female, mean age 63.69 ± 9.2 years) with stage III multiple myeloma were retrospectively re-evaluated. On a per-patient basis, size and mean CT attenuation of 2–4 representative lesions in the limbs were measured at baseline and at follow-up after a mean of 8 months. Results were compared with the course of lytic bone lesions as well as with that of specific hematological biomarkers. Myeloma response was assessed according to the International Myeloma Working Group (IMWG) uniform response criteria. Testing for correlation between response of medullary lesions (Resp
Novel computational approaches characterizing knee physiotherapy
Wangdo Kim
2014-01-01
A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.
Music Genre Classification Systems - A Computational Approach
Ahrendt, Peter
2006-01-01
Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered a cornerstone of the research area Music Information Retrieval (MIR) and is closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems here has been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...
A computational approach to animal breeding.
Berger-Wolf, Tanya Y; Moore, Cristopher; Saia, Jared
2007-02-07
We propose a computational model of mating strategies for controlled animal breeding programs. A mating strategy in a controlled breeding program is a heuristic with some optimization criteria as a goal. Thus, it is appropriate to use the computational tools available for analysis of optimization heuristics. In this paper, we propose the first discrete model of the controlled animal breeding problem and analyse heuristics for two possible objectives: (1) breeding for maximum diversity and (2) breeding a target individual. These two goals are representative of conservation biology and agricultural livestock management, respectively. We evaluate several mating strategies and provide upper and lower bounds for the expected number of matings. While the population parameters may vary and can change the actual number of matings for a particular strategy, the order of magnitude of the number of expected matings and the relative competitiveness of the mating heuristics remains the same. Thus, our simple discrete model of the animal breeding problem provides a novel viable and robust approach to designing and comparing breeding strategies in captive populations.
Computation within the auxiliary field approach
Baeurle, S.A.
2003-01-01
Recently, the classical auxiliary field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing fast statistical convergence and efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms: the single-move auxiliary field Metropolis Monte Carlo algorithm and two new classes of force-based algorithms, which enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme that permits each field degree of freedom to be treated individually with regard to the evolution through the auxiliary field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary field methodologies.
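The single-move Metropolis scheme named above is the textbook local-update algorithm; a minimal sketch follows. The quadratic "action" is a stand-in chosen for illustration, since the abstract does not specify the classical auxiliary-field action itself.

```python
import math
import random

def metropolis_sweep(field, delta_s, step=0.5, beta=1.0, rng=random):
    """One single-move sweep: propose a local shift at each site and
    accept it with the Metropolis probability min(1, exp(-beta * dS))."""
    accepted = 0
    for i in range(len(field)):
        d = rng.uniform(-step, step)
        ds = delta_s(field, i, d)
        if ds <= 0.0 or rng.random() < math.exp(-beta * ds):
            field[i] += d
            accepted += 1
    return accepted / len(field)   # acceptance rate of this sweep

# Stand-in quadratic action S = 0.5 * sum(x_i^2); its local change
# when site i is shifted by d:
def quad_delta_s(field, i, d):
    return 0.5 * ((field[i] + d) ** 2 - field[i] ** 2)
```

Tuning `step` trades acceptance rate against how far each move explores, which is exactly the sampling-efficiency concern the paper addresses with its force-based multiple-move alternatives.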
A computational approach to negative priming
Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael
2007-09-01
Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), and its findings typically vary with subtle parameter changes such as the response-stimulus interval. The sensitivity of the negative priming effect bears great potential for applications in research fields such as memory, selective attention, and ageing. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical-systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).
Inagaki, Suetsugu; Adachi, Haruhiko; Sugihara, Hiroki; Katsume, Hiroshi; Ijichi, Hamao; Okamoto, Kunio; Hosoba, Minoru
1984-12-01
Background (BKG) correction is important but debatable in the measurement of left ventricular ejection fraction (LVEF) with ECG-gated blood-pool scintigraphy. We devised a new simplified BKG processing method (fixed BKG method) that requires no BKG region-of-interest (ROI) assignment, and assessed its accuracy and reproducibility in 25 patients with various heart diseases and 5 normal subjects by comparison with LVEF obtained by contrast left ventriculography (LVG-EF). Four additional protocols for LVEF measurement with BKG-ROI assignment were also assessed for reference. LVEF calculated using a fixed BKG ratio of 0.64 (BKG count rate set to 64% of the end-diastolic count rate of the LV) with a fixed LV-ROI correlated best with LVG-EF (r = 0.936, p < 0.001) and approximated it most closely (fixed-BKG-ratio EF: 61.1 ± 20.1%, LVG-EF: 61.2 ± 20.4% (mean ± SD)) among the protocols. The general applicability of the fixed value of 0.64 was tested across various diseases, body sizes and end-diastolic volumes by LVG, and the results were little influenced by these factors. Furthermore, the fixed BKG method produced lower inter- and intra-observer variability than the protocols requiring BKG-ROI assignment, probably owing to its simplified processing. In conclusion, the fixed-BKG-ratio method simplifies the measurement of LVEF and is feasible for automated processing and single-probe systems.
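The abstract does not spell out the count equation, but the standard background-corrected form, with the background fixed at 0.64 of the end-diastolic counts rather than measured from a BKG ROI, would reduce to the following sketch (made under that assumption):

```python
def lvef_fixed_bkg(ed_counts, es_counts, bkg_ratio=0.64):
    """Ejection fraction from gated blood-pool counts using the standard
    background-corrected form EF = (ED - ES) / (ED - BKG), with the
    background fixed at bkg_ratio * ED instead of a measured BKG ROI.
    The 0.64 default is the fixed ratio reported in the abstract."""
    bkg = bkg_ratio * ed_counts
    return (ed_counts - es_counts) / (ed_counts - bkg)
```

With the fixed ratio the expression simplifies to (ED - ES) / (0.36 * ED), which is what makes the method attractive for automated, single-probe processing.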
Jacobson, R H; Downing, D R; Lynch, T J
1982-11-15
A computer-assisted enzyme-linked immunosorbent assay (ELISA) system, based on the kinetics of the reaction between substrate and enzyme molecules, was developed for testing large numbers of sera in laboratory applications. Systematic and random errors associated with the conventional ELISA technique were identified, leading to results formulated on a statistically validated, objective, and standardized basis. In a parallel development, an inexpensive system for field and veterinary office applications contained many of the qualities of the computer-assisted ELISA. This system uses a fluorogenic indicator (rather than the enzyme-substrate interaction) in a rapid test (15 to 20 minutes' duration) which promises broad application in serodiagnosis.
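A kinetics-based reading amounts to fitting the initial slope of the reaction curve rather than taking a single endpoint absorbance. A minimal least-squares slope illustrates the idea; the function and variable names are illustrative, not taken from the paper.

```python
def kinetic_rate(times_min, optical_density):
    """Least-squares slope of optical density versus time. A kinetics-based
    ELISA reading uses this initial reaction rate, which is less sensitive
    to plate-to-plate offsets than a single endpoint measurement."""
    n = len(times_min)
    mt = sum(times_min) / n
    md = sum(optical_density) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times_min, optical_density))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den
```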
Richolt, J A; Rittmeister, M E
2006-01-01
Computer-assisted navigation of the acetabular cup in THR requires reliable digitization of the bony landmarks defining the frontal pelvic plane by user-driven palpation. According to the system recommendations, the subcutaneous fat should be held aside during epicutaneous digitization. To improve intraoperative practicability this step is often neglected in the symphysis area; in these cases the fat is merely compressed rather than pushed aside. In this study, soft tissue thickness was assessed by ultrasound and pelvic geometry was measured in 72 patients to quantify the potential misinterpretation of cup anteversion caused by the simplified palpation. As reference we employed data from the same patients acquired by the recommended palpation. Anteversion misinterpretation averaged 8.2 degrees, with extremes from 2 to 24 degrees. There were no correlations between soft tissue thickness or misinterpretation and body weight, height or pelvic size. Anteversion misinterpretation was significantly worse compared with the reference data. In 31% of the patients the anteversion misinterpretation of a navigation system would have exceeded 10 degrees, and in 81% it would have exceeded 5 degrees. Therefore the simplified palpation should not be used; for epicutaneous digitization of the bony landmarks it is mandatory to push the subcutaneous fat aside.
Blueprinting Approach in Support of Cloud Computing
Willem-Jan van den Heuvel
2012-03-01
Current cloud service offerings, i.e., Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) offerings, are often provided as monolithic, one-size-fits-all solutions and give little or no room for customization. This limits the ability of Service-based Application (SBA) developers to configure and syndicate offerings from multiple SaaS, PaaS, and IaaS providers to address their application requirements. Furthermore, combining different independent cloud services necessitates a uniform description format that facilitates the design, customization, and composition. Cloud Blueprinting is a novel approach that allows SBA developers to easily design, configure and deploy virtual SBA payloads on virtual machines and resource pools on the cloud. We propose the Blueprint concept as a uniform abstract description for cloud service offerings that may cross different cloud computing layers, i.e., SaaS, PaaS and IaaS. To support developers with SBA design and development in the cloud, this paper introduces a formal Blueprint Template for unambiguously describing a blueprint, as well as a Blueprint Lifecycle that guides developers through the manipulation, composition and deployment of different blueprints for an SBA. Finally, the empirical evaluation of the blueprinting approach within an EC FP7 project is reported and an associated blueprint prototype implementation is presented.
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with the mobile phase composition. In the present study this preliminary experimental work has been simplified: the analyte pKa values are calculated through equations whose coefficients vary with the functional group. This new approach required further simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a substantial amount of time and resources.
Kydonieos, M; Folgueras, A; Florescu, L; Cybulski, T; Marinos, N; Thompson, G; Sayeed, A [Elekta Limited, Crawley, West Sussex (United Kingdom); Rozendaal, R; Olaciregui-Ruiz, I [Netherlands Cancer Institute - Antoni van Leeuwenhoek, Amsterdam, Noord-Holland (Netherlands); Subiel, A; Patallo, I Silvestre [National Physical Laboratory, London (United Kingdom)
2016-06-15
Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). This uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using a TCM commissioning approach with that corresponding to full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan. Conclusion: iViewDose produces similar results with the simplified and full commissioning
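The global gamma analysis used for the EPID-to-TPS comparison can be illustrated in one dimension. This is a generic sketch, not the iViewDose implementation; the 3%/3 mm criteria are common defaults and are assumptions here, not necessarily the values used in the study.

```python
import math

def gamma_1d(ref, eval_pts, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1D gamma index: for each evaluated (position_mm, dose) point,
    take the minimum over reference points of
    sqrt((dose diff / (dose_tol * Dmax))^2 + (distance / dist_tol)^2),
    where the dose difference is normalized to the reference maximum
    (the 'global' criterion)."""
    d_max = max(d for _, d in ref)            # global normalization dose
    gammas = []
    for xe, de in eval_pts:
        gammas.append(min(math.hypot((de - dr) / (dose_tol * d_max),
                                     (xe - xr) / dist_tol_mm)
                          for xr, dr in ref))
    return gammas

def pass_rate(gammas):
    """Fraction of evaluated points with gamma <= 1."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

A pass rate above a threshold such as the >97% reported in the Results is then a single summary number for the whole comparison.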
Stec, Sebastian; Śledź, Janusz; Mazij, Mariusz; Raś, Małgorzata; Ludwik, Bartosz; Chrabąszcz, Michał; Śledź, Arkadiusz; Banasik, Małgorzata; Bzymek, Magdalena; Młynarczyk, Krzysztof; Deutsch, Karol; Labus, Michał; Śpikowski, Jerzy; Szydłowski, Lesław
2014-08-01
Although the "near-zero-X-Ray" or "No-X-Ray" catheter ablation (CA) approach has been reported for treatment of various arrhythmias, few prospective studies have strictly used "No-X-Ray," simplified 2-catheter approaches for CA in patients with supraventricular tachycardia (SVT). We assessed the feasibility of a minimally invasive, nonfluoroscopic (MINI) CA approach in such patients. Data were obtained from a prospective multicenter CA registry of patients with regular SVTs. After femoral access, 2 catheters were used to create simple, 3D electroanatomic maps and to perform electrophysiologic studies. Medical staff did not use lead aprons after the first 10 MINI CA cases. A total of 188 patients (age, 45 ± 21 years; 17% 0.05), major complications (0% vs. 0%, P > 0.05) and acute (98% vs. 98%, P > 0.05) and long-term (93% vs. 94%, P > 0.05) success rates were similar in the "No-X-Ray" and control groups. Implementation of a strict "No-X-Ray, simplified 2-catheter" CA approach is safe and effective in majority of the patients with SVT. This modified approach for SVTs should be prospectively validated in a multicenter study. © 2014 Wiley Periodicals, Inc.
The simplified P3 approach on a trigonal geometry in the nodal reactor code DYN3D
Duerigen, S.; Fridman, E.
2011-01-01
DYN3D is a three-dimensional nodal diffusion code for steady-state and transient analyses of Light Water Reactors with square and hexagonal fuel assembly geometries. Currently, several versions of the DYN3D code are available, including a multi-group diffusion option and a simplified P3 (SP3) neutron transport option. In this work, a multi-group SP3 method based on trigonal-z geometry was developed. The method is applicable to the analysis of reactor cores with hexagonal fuel assemblies and allows flexible mesh refinement, which is of particular importance for WWER-type Pressurized Water Reactors as well as for innovative reactor concepts including block-type High-Temperature Reactors and Sodium Fast Reactors. In this paper, the theoretical background of the trigonal SP3 methodology is outlined and the results of a preliminary verification analysis are presented by means of a simplified WWER-440 core test example. The corresponding cross sections and reference solutions were produced by the Monte Carlo code SERPENT. The DYN3D results are in good agreement with the reference solutions; the average deviation in the nodal power distribution is about 1%.
Image-Based Edge Bundles : Simplified Visualization of Large Graphs
Telea, A.; Ersoy, O.
2010-01-01
We present a new approach aimed at understanding the structure of connections in edge-bundling layouts. We combine the advantages of edge bundles with a bundle-centric simplified visual representation of a graph's structure. For this, we first compute a hierarchical edge clustering of a given graph
1986-03-01
A study on radiation dose control in packages of radioactive waste from nuclear facilities, hospitals and industries, such as sources of Ra-226, Co-60, Ir-192 and Cs-137, is presented. The MAPA and MAPAM computer codes, based on point-kernel theory, were developed to calculate doses for several source-shielding configurations, with the aim of assuring safe transport conditions for these sources. The code was validated for point sources against the values provided by the NCRP for the thicknesses of lead and concrete shielding that limit the dose to 100 mrem/h at several distances from the source to the detector. Validation for non-point sources was carried out by experimentally measuring the radiation dose from packages developed by the Brazilian CNEN/S.P. for removing the sources.
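The point-kernel relation such codes are built on combines inverse-square geometry with exponential attenuation in the shield. The sketch below sets the buildup factor to 1 for simplicity, and the Co-60 gamma constant used in the check (roughly 1320 mR·m²/(h·Ci), treated here as a dose-rate constant) is an approximation, not a value from the paper.

```python
import math

def point_kernel_dose_rate(activity_ci, gamma_const, r_m,
                           mu_per_cm=0.0, shield_cm=0.0, buildup=1.0):
    """Point-kernel dose-rate estimate D = Gamma * A / r^2 * B * exp(-mu*t):
    inverse-square fall-off with distance r (metres) times exponential
    attenuation through a shield of thickness t (cm) with linear
    attenuation coefficient mu (1/cm). gamma_const is the specific
    gamma-ray constant in dose * m^2 / (h * Ci); the buildup factor B
    is taken as 1 here for simplicity."""
    return (gamma_const * activity_ci / r_m ** 2
            * buildup * math.exp(-mu_per_cm * shield_cm))
```

Inverting this relation for the shield thickness t that brings D down to a limit such as 100 mrem/h is exactly the validation exercise described above.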
May, Carl R
2011-09-30
Abstract Background Normalization Process Theory (NPT) can be used to explain implementation processes in health care relating to new technologies and complex interventions. This paper describes the processes by which we developed a simplified version of NPT for use by clinicians, managers, and policy makers, and which could be embedded in a web-enabled toolkit and on-line users manual. Methods Between 2006 and 2010 we undertook four tasks. (i) We presented NPT to potential and actual users in multiple workshops, seminars, and presentations. (ii) Using what we discovered from these meetings, we decided to create a simplified set of statements and explanations expressing core constructs of the theory. (iii) We circulated these statements to a criterion sample of 60 researchers, clinicians and others, using SurveyMonkey to collect qualitative textual data about their criticisms of the statements. (iv) We then reconstructed the statements and explanations to meet users' criticisms, embedded them in a web-enabled toolkit, and beta tested this 'in the wild'. Results On-line data collection was effective: over a four-week period 50/60 participants responded using SurveyMonkey (40/60) or direct phone and email contact (10/60). An additional nine responses were received from people who had been sent the SurveyMonkey form by other respondents. Beta testing of the web-enabled toolkit produced 13 responses, from 327 visits to http://www.normalizationprocess.org. Qualitative analysis of both sets of responses showed a high level of support for the statements but also showed that some statements poorly expressed their underlying constructs or overlapped with others. These were rewritten to take account of users' criticisms and then embedded in a web-enabled toolkit. As a result we were able to translate the core constructs into a simplified set of statements that could be utilized by non-experts. Conclusion Normalization Process Theory has been developed through
Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.
2011-01-01
Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance against the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output, exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used over a wide range of elevations (more successfully up to 2000 m) to detect anomalies in space and time for water resources management and monitoring, such as for drought early warning systems in data-scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET.
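The core of the SSEB approach is to scale each pixel's land-surface temperature between a "cold" (fully transpiring) anchor and a "hot" (dry) anchor to obtain an ET fraction, which is then multiplied by reference ET. A minimal sketch of that logic follows; the anchor temperatures, clipping bounds, and pixel values are illustrative, not taken from the study:

```python
import numpy as np

def sseb_et_fraction(ts, t_hot, t_cold):
    """ET fraction per pixel from land-surface temperature (K).

    t_hot/t_cold are scene anchor temperatures (hot = dry, cold = wet).
    """
    etf = (t_hot - ts) / (t_hot - t_cold)
    # clip to a physically plausible range (upper bound is illustrative)
    return np.clip(etf, 0.0, 1.05)

def sseb_et(ts, t_hot, t_cold, eto):
    """Actual ET (mm/day) = ET fraction * reference ET."""
    return sseb_et_fraction(ts, t_hot, t_cold) * eto

# toy scene: three pixels spanning the cold-to-hot range
ts = np.array([300.0, 310.0, 320.0])
etf = sseb_et_fraction(ts, t_hot=320.0, t_cold=300.0)
```

A pixel at the cold anchor gets the full reference ET, a pixel at the hot anchor gets none, and intermediate temperatures interpolate linearly.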
A Recursive Approach to Compute Normal Forms
HSU, L.; MIN, L. J.; FAVRETTO, L.
2001-06-01
Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.
McFedries, Paul
2012-01-01
The easiest way for visual learners to get started with Windows 8 The popular Simplified series makes visual learning easier than ever, and with more than 360,000 copies sold, previous Windows editions are among the bestselling Visual books. This guide goes straight to the point with easy-to-follow, two-page tutorials for each task. With full-color screen shots and step-by-step directions, it gets beginners up and running on the newest version of Windows right away. Learn to work with the new interface and improved Internet Explorer, manage files, share your computer, and much more. Perfect fo
Ahn, Kwang Il; Yang, Joon Eon; Ha, Jae Joo
2003-01-01
Expert judgment is frequently employed in the search for solutions to various engineering and decision-making problems where relevant data are not sufficient or where there is little consensus as to the correct models to apply. When expert judgments are required to solve the underlying problem, the main concern is how to formally elicit the experts' technical expertise and their personal degree of familiarity with the related questions. Formal methods for gathering judgments from experts and assessing the effects of the judgments on the results of the analysis have been developed in a variety of ways. The main aim of such methods is to establish the robustness of the expert knowledge on which the elicitation of judgments is based, and to keep as effective a trace of the elicitation process as possible. While the resultant expert judgments can remain to a large extent substantiated with formal elicitation methods, their applicability, however, is often limited by restrictions on available resources (e.g., time, budget, and the number of qualified experts) as well as the scope of the analysis. For this reason, many engineering and decision-making problems have not always been treated with a formal/structured pattern, but rather have relied on a pertinent transition from the formal process to a simplified approach. The purpose of this paper is (a) to address some insights into the balanced use of formally structured and simplified approaches for the explicit use of expert judgments under resource constraints and (b) to discuss related decision-theoretic issues
Computer networks ISE a systems approach
Peterson, Larry L
2007-01-01
Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p
CREATIVE APPROACHES TO COMPUTER SCIENCE EDUCATION
V. B. Raspopov
2010-04-01
Using the example of the PPS «Toolbox of multimedia lessons "For Children About Chopin"» we demonstrate the possibility of involving creative students in developing software packages for educational purposes. Similar projects can be assigned to school and college students studying computer sciences and informatics, and implemented under the teachers' supervision, as advanced assignments or thesis projects as a part of a high school course in IT or Computer Sciences, a college course of Applied Scientific Research, or as a part of preparation for students' participation in Computer Science or IT competitions of the Youth Academy of Sciences (MAN in Russian or in Ukrainian).
R. Archetti
2011-10-01
The operating conditions of urban drainage networks during storm events depend on the hydraulic conveying capacity of conduits and also on downstream boundary conditions. This is particularly true in coastal areas, where the level of the receiving water body is directly or indirectly affected by tidal or wave effects. In such cases, not just different rainfall conditions (varying intensity and duration), but also different sea levels and their effects on the network operation should be considered. This paper aims to study the behaviour of a seaside town storm sewer network, estimating the threshold condition for flooding and proposing a simplified method to assess the urban flooding severity as a function of climate variables. The case study is a portion of the drainage system of Rimini (Italy), implemented and numerically modelled by means of the InfoWorks CS code. The hydraulic simulation of the sewerage system identified the percentage of nodes of the drainage system where flooding is expected to occur. Combining these percentages with the values of both climate variables has led to the definition of charts representing the combined degree of risk "rainfall-sea level" for the drainage system under investigation. A final comparison between such charts and the results obtained from a one-year rainfall-sea level time series has demonstrated the reliability of the analysis.
Computer science approach to quantum control
Janzing, D.
2006-01-01
Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics has therefore two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable
Grimme, Stefan; Bannwarth, Christoph
2016-01-01
The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first
Computational and Experimental Approaches to Visual Aesthetics
Brachmann, Anselm; Redies, Christoph
2017-01-01
Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491
Thorne, Lawrence R.
2011-01-01
I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
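The null-space idea can be sketched numerically. The example below balances CH4 + O2 → CO2 + H2O by building the element-by-species composition matrix (products entered with negative sign) and extracting its one-dimensional null space; an SVD is used here as a stand-in for the calculator/spreadsheet matrix-inversion route the paper describes:

```python
import numpy as np

# rows: C, H, O; columns: CH4, O2, CO2, H2O (products negated)
A = np.array([
    [1, 0, -1,  0],   # carbon balance
    [4, 0,  0, -2],   # hydrogen balance
    [0, 2, -2, -1],   # oxygen balance
], dtype=float)

# The balancing coefficients x satisfy A @ x = 0, i.e. x spans null(A).
_, _, vt = np.linalg.svd(A)
x = vt[-1]                          # basis vector of the 1-D null space
x = x / np.min(np.abs(x[x != 0]))   # scale the smallest entry to 1
coeffs = np.round(x).astype(int)
if coeffs[0] < 0:                   # fix the arbitrary SVD sign
    coeffs = -coeffs
# coeffs gives the balanced equation CH4 + 2 O2 -> CO2 + 2 H2O
```

For reactions whose null space has dimension one (the common case), this yields the unique balanced equation up to overall scaling.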
Arbat, G.; Pujol, J.; Pelegri, M.; Puig-Bargues, J.; Duran-Ros, M.; Ramirez de Cartagena, F.
2013-05-01
The number of private gardens has increased in recent years, creating a more pleasant urban model, but not without having an environmental impact, including increased energy consumption, which is the focus of this study. The estimation of costs and energy consumption for the generic typology of private urban gardens is based on two simplifying assumptions: square geometry with surface areas from 25 to 500 m² and hydraulic design with a single pipe. In total, eight sprinkler models have been considered, along with their possible working pressures, and 31 pumping units grouped into 5 series that adequately cover the range of required flow rates and pressures, resulting in 495 hydraulic designs repeated for two climatically different locations in the Spanish Mediterranean area (Girona and Elche). Mean total irrigation costs for the locality with lower water needs (Girona) and greater needs (Elche) were €2,974 ha⁻¹ yr⁻¹ and €3,383 ha⁻¹ yr⁻¹, respectively. Energy costs accounted for 11.4% of the total cost for the first location, and 23.0% for the second. While a suitable choice of the hydraulic elements of the setup is essential, as it may provide average energy savings of 77%, due to the low energy cost in relation to the cost of installation, the potential energy savings do not constitute a significant incentive for the irrigation system design. The low efficiency of the pumping units used in this type of garden is the biggest obstacle and constraint to achieving a high quality energy solution. (Author) 32 refs.
Pizzol, Massimo; Bulle, Cécile; Thomsen, Marianne
2012-04-01
In order to estimate the total exposure to the lead emissions from a municipal waste combustion plant in Denmark, the indirect pathway via ingestion of lead deposited on soil has to be quantified. Multi-media fate models developed for both Risk Assessment (RA) and Life Cycle Assessment (LCA) can be used for this purpose, but present high uncertainties in the assessment of metal fate. More sophisticated and metal-specific geochemical models exist that could lower the uncertainties by, e.g., accounting for metal speciation, but they require a large amount of data and are impractical to combine broadly with other fate and dispersion models. In this study, a Simplified Fate & Speciation Model (SFSM) is presented that is based on the parsimony principle, "as simple as possible, as complex as needed", and that can be used for indirect human exposure assessment in different contexts such as RA and regionalized LCA. SFSM couples traditional multi-media mass balances with empirical speciation models in a tool that has a simple theoretical framework and that is not data-intensive. The model calculates total concentration, dissolved concentration, and free ion activity of Cd, Cu, Ni, Pb and Zn in different soil layers, after accounting for metal deposition and dispersion. The model is tested for these five metals by using data from peer-reviewed literature. Results show good accordance between measured and calculated values (within a factor of 3). The model is used to predict the human exposure via soil to lead initially emitted into air by the waste combustion plant, and both the lead cumulative exposure and intake fraction are calculated. Copyright © 2012 Elsevier B.V. All rights reserved.
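A minimal "parsimonious" sketch in the spirit described above: a single topsoil box accumulates yearly deposition and loses metal by a first-order process, and an empirical Kd partitioning closure converts total to dissolved concentration. The rates, the Kd value, and the partitioning formula are illustrative placeholders, not SFSM's actual parametrization:

```python
import numpy as np

def soil_metal_balance(dep_flux, k_loss, years, c0=0.0):
    """Yearly total metal concentration in a topsoil box.

    dep_flux: deposition input (mg per kg soil per year, illustrative)
    k_loss:   first-order loss rate (leaching/transfer to depth, 1/yr)
    """
    c, out = c0, []
    for _ in range(years):
        c = c + dep_flux - k_loss * c   # explicit yearly mass balance
        out.append(c)
    return np.array(out)

def dissolved_fraction(c_total, kd, theta=0.3, rho=1.3):
    """Simple solid-liquid Kd closure: dissolved conc. from total.

    kd (L/kg), theta (water content), rho (bulk density) are placeholders.
    """
    return c_total / (kd * rho / theta + 1.0)

# concentration approaches the steady state dep_flux / k_loss = 10
c = soil_metal_balance(dep_flux=0.5, k_loss=0.05, years=100)
```

The point of the sketch is the structure: a cheap mass balance supplies the total concentration, and a separate empirical speciation step refines it, without a data-hungry geochemical model.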
Computational Approaches to Chemical Hazard Assessment
Luechtefeld, Thomas; Hartung, Thomas
2018-01-01
Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration, Evaluation, Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows one to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Marmel, Elaine
2013-01-01
A basic introduction to learn Office 2013 quickly, easily, and in full color. Office 2013 has new features and tools to master, and whether you're upgrading from an earlier version or using the Office applications for the first time, you'll appreciate this simplified approach. Offering a clear, visual style of learning, this book provides you with concise, step-by-step instructions and full-color screen shots that walk you through the applications in the Microsoft Office 2013 suite: Word, Excel, PowerPoint, Outlook, and Publisher. Shows you how to tackle dozens of Office 2013
Approaching Engagement towards Human-Engaged Computing
Niksirat, Kavous Salehzadeh; Sarcar, Sayan; Sun, Huatong
2018-01-01
Debates regarding the nature and role of HCI research and practice have intensified in recent years, given the ever increasingly intertwined relations between humans and technologies. The framework of Human-Engaged Computing (HEC) was proposed and developed over a series of scholarly workshops to...
Computational and mathematical approaches to societal transitions
J.S. Timmermans (Jos); F. Squazzoni (Flaminio); J. de Haan (Hans)
2008-01-01
After an introduction of the theoretical framework and concepts of transition studies, this article gives an overview of how structural change in social systems has been studied from various disciplinary perspectives. This overview first leads to the conclusion that computational and
A Constructive Induction Approach to Computer Immunology
1999-03-01
[LVM98] Lamont, Gary B., David A. Van Veldhuizen, and Robert E. Marmelstein. A Distributed Architecture for a Self-Adaptive Computer Virus... Artificial Intelligence, Herndon, VA, 1995. [MVL98] Marmelstein, Robert E., David A. Van Veldhuizen, and Gary B. Lamont. Modeling & Analysis
Tayapiwatana Chatchai
2008-02-01
Abstract Background The recognition that human tumors stimulate the production of autoantibodies has initiated the use of this immune response as serological markers for the early diagnosis and management of cancer. The enzyme-linked immunosorbent assay (ELISA) is the most common method used in detecting autoantibodies, which involves coating the microtiter plate with the tumor-associated antigen (TAA) of interest and allowing serum antibodies to bind. The patient's sample is directly in contact with the coating antigen, so the protein used for coating must be pure to avoid non-specific binding. In this study, a simplified method to selectively and specifically immobilize TAAs onto microtiter plates in order to detect circulating autoantibodies in cancer patients without a prior purification process is described. Wild-type full-length p53 protein was produced in fusion with biotin carboxyl carrier peptide (BCCP) or hexahistidine [(His)6] using pAK400 and pET15b(+) vectors, respectively. The recombinant p53 fusion protein produced was then allowed to react with either a commercial p53 monoclonal antibody (mAb) or sera from lung cancer patients and healthy volunteers in an enzyme-linked immunosorbent assay (ELISA) format. Results Both of the immobilized p53 fusion proteins as well as the purified (His)6-p53 fusion protein had a similar dose response of detection to a commercial p53 mAb (DO7). When the biotinylated p53-BCCP fusion protein was used as an antigen to detect p53 autoantibodies in clinical samples, the results showed that human serum reacted strongly to the avidin-coated microwells even in the absence of the biotinylated p53-BCCP fusion protein, thus compromising its ability to differentiate weakly positive sera from those that were negative. In contrast, the (His)6-p53 protein immobilized directly onto the Ni+-coated microplate was able to identify the p53 autoantibody-positive serum. In addition, its reactivity to clinical serum samples highly correlated
Tisdell, Christopher C.
2017-07-01
Knowing an equation has a unique solution is important from both a modelling and theoretical point of view. For over 70 years, the approach to learning and teaching 'well posedness' of initial value problems (IVPs) for second- and higher-order ordinary differential equations has involved transforming the problem and its analysis to a first-order system of equations. We show that this excursion is unnecessary and present a direct approach regarding second- and higher-order problems that does not require an understanding of systems.
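A sketch of how such a direct uniqueness argument typically runs; the Lipschitz condition and Gronwall step below are a standard route under assumed constants $L_1, L_2$, not necessarily the paper's exact formulation:

```latex
\[
  y'' = f(t, y, y'), \qquad y(t_0) = A, \quad y'(t_0) = B,
\]
with $f$ Lipschitz in its last two arguments,
\[
  |f(t,u,v) - f(t,p,q)| \le L_1\,|u-p| + L_2\,|v-q|.
\]
If $y$ and $z$ both solve the IVP, set $r(t) = |y(t)-z(t)| + |y'(t)-z'(t)|$.
Integrating the equation and applying the bound yields, for some $C > 0$,
\[
  r(t) \le C \int_{t_0}^{t} r(s)\,\mathrm{d}s,
\]
so Gronwall's inequality gives $r \equiv 0$: the IVP has at most one solution.
```

The point of the direct approach is that this estimate is obtained on the second-order problem itself, without first rewriting it as a first-order system.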
Computational and Game-Theoretic Approaches for Modeling Bounded Rationality
L. Waltman (Ludo)
2011-01-01
This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic
Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools
Caspersen, Michael E.; Nowack, Palle
2014-01-01
Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...
Machine learning and computer vision approaches for phenotypic profiling.
Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J
2017-01-02
With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and the potential application of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
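The workload above is embarrassingly parallel: each airport pair's route can be optimized independently and mapped onto a worker pool. A minimal sketch follows, with great-circle distance standing in for the wind-optimal cost of one pair; the coordinates are approximate and the pool size is illustrative:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (a stand-in for the
    per-pair wind-optimal trajectory cost computed in the study)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_d = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(dlon))
    d = math.acos(max(-1.0, min(1.0, cos_d)))  # guard rounding
    return 3440.065 * d  # mean Earth radius in nautical miles

# approximate airport coordinates (lat, lon)
pairs = [
    ((37.62, -122.38), (40.64, -73.78)),   # SFO -> JFK
    ((51.47, -0.45), (35.55, 139.78)),     # LHR -> HND
]

# independent pairs map naturally onto a worker pool
with ThreadPoolExecutor(max_workers=4) as pool:
    dists = list(pool.map(lambda p: great_circle_nm(*p[0], *p[1]), pairs))
```

On a cluster, the same map would be distributed across machines instead of threads; the algorithmic structure is unchanged.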
Computational approach to large quantum dynamical problems
Friesner, R.A.; Brunet, J.P.; Wyatt, R.E.; Leforestier, C.; Binkley, S.
1987-01-01
The organizational structure is described for a new program that permits computations on a variety of quantum mechanical problems in chemical dynamics and spectroscopy. Particular attention is devoted to developing and using algorithms that exploit the capabilities of current vector supercomputers. A key component in this procedure is the recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix. An application to time-dependent laser-molecule energy transfer is presented, with emphasis on the rate of energy deposition in the multimode molecule under systematic variations of the intermode coupling parameters.
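The "recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix" is the defining step of a Lanczos-type recursion. A minimal generic sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def lanczos_tridiagonalize(H, v0, k):
    """Reduce a Hermitian matrix/operator H to a k x k tridiagonal T
    via the Lanczos recursion (no reorthogonalization). H only needs
    to support the matrix-vector product H @ v, so it may be sparse."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    w = H @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, k):
        beta[j - 1] = np.linalg.norm(w)
        v_prev, v = v, w / beta[j - 1]
        w = H @ v - beta[j - 1] * v_prev
        alpha[j] = v @ w
        w = w - alpha[j] * v
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

The eigenvalues of the small tridiagonal matrix approximate the extreme eigenvalues of H even for k much smaller than the dimension of H, which is what makes the approach attractive on vector hardware.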
A complex network approach to cloud computing
Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura
2016-01-01
Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate processing performance for cloud systems underlaid by Erdős–Rényi (ER) and Barabási–Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite holds for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we also verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are only weakly affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance. (Paper: interdisciplinary statistical mechanics.)
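The cost and balance measures described above are easy to reproduce on a small synthetic topology. A minimal sketch using only the standard library (the metric definitions here are a plausible reading of the abstract, not the paper's exact formulas):

```python
import random
from collections import deque

def er_graph(n, p, seed=0):
    """Erdős–Rényi G(n, p) random graph as an adjacency list."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def bfs_distances(adj, source):
    """Hop distances from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def cost_and_balance(adj, servers):
    """Mean distance from each client to its nearest server (cost) and
    the ratio of the lighter to the heavier server load (balance)."""
    dist = {s: bfs_distances(adj, s) for s in servers}
    costs, loads = [], {s: 0 for s in servers}
    for v in adj:
        if v in servers:
            continue
        best = min(servers, key=lambda s: dist[s].get(v, float("inf")))
        d = dist[best].get(v)
        if d is None:
            continue  # unreachable client (disconnected graph)
        costs.append(d)
        loads[best] += 1
    return sum(costs) / len(costs), min(loads.values()) / max(loads.values())
```

A balance of 1.0 means the two servers receive equal numbers of clients; values near 0 indicate one server absorbs nearly all the load.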
SHIVGAMI : Simplifying tHe titanIc blastx process using aVailable GAthering of coMputational unIts
Naman Mangukia
2017-10-01
Assembling novel genomes from scratch is a never-ending process that will continue until all living organisms have been covered. On top of that, the de novo approach is also employed in RNA-Seq and metagenomics analyses. Functional identification of the scaffolds or transcripts from such draft assemblies is a substantial step that routinely employs the well-known BLASTX program, which lets a user search a DNA query against the NCBI non-redundant protein database (NR, ~120 GB). In spite of its multicore-processing option, BLASTX is a lengthy process for bulk runs of long query inputs. Tremendous efforts are constantly being applied to solve this problem by increasing computational power, GPU-based computing, cloud computing and Hadoop-based approaches, which ultimately require gigantic cost in terms of money and processing. To address this issue, we have come up with SHIVGAMI, which automates the entire process using Perl and shell scripts that divide, distribute and process the input FASTA sequences among the individual computational units according to their CPU-core availability. A Linux operating system, the NR database and a BLASTX installation are prerequisites for each system. The beauty of this stand-alone automation program is that it requires the LAN connection exactly twice: during query distribution and at process completion. In the initial phase, it divides the FASTA sequences according to each computer's core capability. It then distributes the data, along with small automation scripts that run the BLASTX process on the respective computational unit and send the results file back to the master computer. The master computer finally combines and compiles the files into a single result. This simple automation converts a computer lab into a grid without investment in any additional software, hardware or man-power. In short, SHIVGAMI is a time and cost savior for all users, starting from commercial firm
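SHIVGAMI's core distribution step, dividing query sequences among units in proportion to their CPU-core counts, can be sketched as follows (illustrative Python; the tool itself uses Perl and shell scripts, and this is not its actual code):

```python
def split_fasta(records, cores):
    """Partition a list of FASTA records (header, sequence) among
    compute units in proportion to each unit's CPU-core count."""
    total = sum(cores)
    n = len(records)
    # integer share for each unit, proportional to its core count
    sizes = [n * c // total for c in cores]
    # hand out the remainder one record at a time
    for i in range(n - sum(sizes)):
        sizes[i % len(sizes)] += 1
    chunks, start = [], 0
    for s in sizes:
        chunks.append(records[start:start + s])
        start += s
    return chunks
```

Each chunk would then be shipped to one unit, run through BLASTX locally, and the per-unit result files concatenated on the master node.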
Simplified design of filter circuits
Lenk, John
1999-01-01
Simplified Design of Filter Circuits, the eighth book in this popular series, is a step-by-step guide to designing filters using off-the-shelf ICs. The book starts with the basic operating principles of filters and common applications, then moves on to describe how to design circuits by using and modifying chips available on the market today. Lenk's emphasis is on practical, simplified approaches to solving design problems. Features: practical designs using off-the-shelf ICs; a straightforward, no-nonsense approach; extensive illustration with manufacturers' data sheets.
Thiry, Yves; Redon, Paul-Olivier; Gustafsson, Malin; Marang, Laura; Bastviken, David
2013-04-01
Chlorine is highly soluble, with chloride (Cl-) the dominant form at the global scale. Because of its high mobility, chlorine is usually perceived as a good conservative tracer in hydrological studies and, by analogy, as little reactive in the biosphere. Since 36Cl can be considered to behave the same as stable Cl, a good knowledge of chlorine distribution between compartments of terrestrial ecosystems is sufficient to calibrate a specific activity model, which supposes rapid dilution of 36Cl within the large pool of stable Cl and isotopic equilibrium between compartments. By assuming 36Cl redistribution similar to that of stable Cl at steady state, specific activity models are simple and useful tools for regulatory purposes in environmental safety assessment, especially in the case of potential long-term chronic contamination of the agricultural food chain (IAEA, 2010). In many other, more complex scenarios (accidental acute release, intermediate time frames, contrasted natural ecosystems), new information and tools are necessary for improving (radio-)ecological realism, which entails a non-conservative behaviour of chlorine. Indeed, the observed dynamics of chlorine in terrestrial ecosystems are far from a simple equilibrium, notably because of natural processes of soil organic matter (SOM) chlorination mainly occurring in surface soils (Öberg, 1998) and mediated to a large extent by microbial activities (Bastviken et al. 2007). Our recent studies have strengthened the view that an organic cycle for chlorine should now be recognized, in addition to its inorganic cycle. Major results showed that organochlorine (Clorg) formation occurs in all types of soils and ecosystems (culture, pasture, forest), leading to an average fraction of about 80 % of the total Cl pool in soil (Redon et al., 2012), and that chlorination in more organic soils over time leads to a larger Clorg pool and in turn to a possibly high internal supply of inorganic chlorine (Clin) upon dechlorination. (Gustafsson et
The fundamentals of computational intelligence system approach
Zgurovsky, Mikhail Z
2017-01-01
This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to important novel CI technologies: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Various FNN, including a new class of cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. Applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty, a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, and an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and the results of their application for bankruptcy ris...
Q-P Wave traveltime computation by an iterative approach
Ma, Xuxin; Alkhalifah, Tariq Ali
2013-01-01
In this work, we present a new approach to computing anisotropic traveltimes based on successively solving elliptically isotropic traveltime problems. The method shows good accuracy and is very simple to implement.
Integration of case study approach, project design and computer ...
Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...
Fractal approach to computer-analytical modelling of tree crown
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of all three is that a tree can be regarded as a fractal object, a collection of self-similar parts that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. The different stages of the above-mentioned approaches are described. Experimental data for spruce, a description of the computer modeling system and a variant of the computer model are presented. (author). 9 refs, 4 figs
Hong Chen
2013-01-01
Based on a decomposition of the evolution of urban expressway capacity after traffic accidents and an analysis of the influencing factors, an approach for estimating the capacity is proposed. Firstly, the approach introduces the Decision Tree ID algorithm, solves the accident delay time for different accident types via the Information Gain Value, and determines the congestion dissipation time by Traffic Flow Wave Theory. Secondly, taking the accident delay time as the observation cycle, the maximum number of vehicles passing the accident section per unit time is taken as its capacity. Finally, the attenuation of capacity for different accident types is simulated with the VISSIM software. The simulation results suggest that the capacity attenuation for a disabled (anchored) vehicle is the smallest, at 30.074%; next are vehicle fire, rear-end collision, and rollover, at 38.389%, 40.204%, and 43.130%, respectively; the capacity attenuation for a vehicle collision is the largest, at 50.037%. Moreover, further analysis shows that the accident delay time is proportional to the congestion dissipation time, the time difference, and the ratio between them, but inversely related to the residual capacity of the urban expressway.
Julia Chernova
2016-07-01
Background: Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods: We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results: The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out non-significant at the 5 % significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions: The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
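For expectations over a normally distributed random effect, the usual numerical-integration alternative to MC draws is Gauss-Hermite quadrature. A minimal sketch (the paper's actual integrand is the two-part model likelihood, which is not reproduced here):

```python
import numpy as np

def gh_expectation(f, mu, sigma, n=30):
    """E[f(b)] for b ~ N(mu, sigma^2) via n-point Gauss-Hermite
    quadrature -- deterministic, unlike Monte Carlo draws."""
    # nodes/weights for integral of f(x) * exp(-x^2)
    x, w = np.polynomial.hermite.hermgauss(n)
    # change of variables b = mu + sigma * sqrt(2) * x
    return float(np.sum(w * f(mu + sigma * np.sqrt(2.0) * x)) / np.sqrt(np.pi))
```

Unlike MC estimates, the result does not fluctuate between runs, and accuracy is controlled by the number of quadrature nodes rather than the number of draws.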
Pollock, S.G.; Watson, D.D.; Gibson, R.S.; Beller, G.A.; Kaul, S.
1989-01-01
This study describes a simplified approach to the joint interpretation of electrocardiographic and thallium-201 imaging data derived from the same patient during exercise. The 383 patients in this study had also undergone selective coronary arteriography within 3 months of the exercise test. This matrix approach allows for multiple test outcomes (both tests positive, both negative, one test positive and one negative) and multiple disease states (no coronary artery disease vs 1-vessel vs multivessel coronary artery disease). Because this approach analyzes the two test outcomes simultaneously rather than serially, it also mitigates the effect of any lack of test independence, if such an effect is present. It is also demonstrated that ST-segment depression on the electrocardiogram and defects on initial thallium-201 images provide conditionally independent information regarding the presence of coronary artery disease in patients without prior myocardial infarction. In contrast, ST-segment depression on the electrocardiogram and redistribution on the delayed thallium-201 images may not provide totally independent information regarding the presence of exercise-induced ischemia in patients with or without myocardial infarction.
Bioinspired Computational Approach to Missing Value Estimation
Israel Edem Agbehadji
2018-01-01
Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if they are not handled properly, data analysis may produce inaccurate results. While a traditional method such as maximum likelihood extrapolates missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel. The paper describes the behavior and characteristics of the Kestrel and models them into an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The Wilcoxon test indicates that time does not have a significant effect on performance, while the quality of estimation differed significantly between the paired algorithms; the Friedman test ranked KSA as the best evolutionary algorithm.
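The MAE criterion used to compare the algorithms, together with a simple mean-imputation baseline for contrast, can be sketched as follows (generic illustration, not the KSA implementation):

```python
import numpy as np

def mean_absolute_error(actual, estimated):
    """Mean absolute error between true and imputed values."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return float(np.mean(np.abs(actual - estimated)))

def mean_impute(column):
    """Baseline imputation: replace NaN entries with the column mean."""
    column = np.asarray(column, dtype=float)
    fill = np.nanmean(column)
    return np.where(np.isnan(column), fill, column)
```

Any candidate imputation algorithm can then be scored by imputing artificially masked entries and computing the MAE against the known originals.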
Computational fluid dynamics in ventilation: Practical approach
Fontaine, J. R.
The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases: capture of pollutants from a surface treatment tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to the suction; dispersion of solid aerosols inside fume cupboards; a performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface treatment tank case, numerical results are compared with laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools for interpreting the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.
Burns, Douglas A.; Riva-Murray, Karen
2018-01-01
Simple screening approaches for the neurotoxicant methylmercury (MeHg) in aquatic ecosystems may be helpful in risk assessments of natural resources. We explored the development of such an approach in the Adirondack Mountains of New York, USA, a region with high levels of MeHg bioaccumulation. Thirty-six perennial streams broadly representative of 1st and 2nd order streams in the region were sampled during summer low flow and analyzed for several solutes and for Hg concentrations in fish. Several landscape and chemical metrics that are typically strongly related to MeHg concentrations in aquatic biota were explored for strength of association with fish Hg concentrations. Data analyses were based on site mean length-normalized and standardized Hg concentrations (assumed to be dominantly MeHg) in whole juvenile and adult Brook Trout Salvelinus fontinalis, Creek Chub Semotilus atromaculatus, Blacknose Dace Rhinichthys atratulus, and Central Mudminnow Umbra limi, as well as on multi-species z-scores. Surprisingly, none of the landscape metrics was significantly related to regional variation in fish Hg concentrations or to z-scores across the study streams. In contrast, several chemical metrics, including dissolved organic carbon (DOC) concentrations, sulfate (SO42−) concentrations, pH, ultraviolet absorbance (UV254), and specific ultraviolet absorbance, were significantly related to regional variation in fish Hg concentrations. A cluster analysis based on DOC, SO42−, and pH identified three distinct groups of streams: (1) high DOC, acidic streams, (2) moderate DOC, slightly acidic streams, and (3) low DOC circum-neutral streams with relatively high SO42−. Preliminary analysis indicated no significant difference in fish Hg z-scores between the moderate and high DOC groups, so these were combined for further analysis. The resulting two groups showed strong differences; DOC (6.9 mg/L) and UV254 (0.31 cm−1) values were tested as thresholds to identify Adirondack
Ding, Bo; Squicciarini, Giacomo; Thompson, David; Corradi, Roberto
2018-06-01
Curve squeal is one of the most annoying types of noise caused by the railway system. It usually occurs when a train or tram is running around tight curves. Although this phenomenon has been studied for many years, the generation mechanism is still the subject of controversy and not fully understood. A negative slope in the friction curve under full sliding has been considered to be the main cause of curve squeal for a long time but more recently mode coupling has been demonstrated to be another possible explanation. Mode coupling relies on the inclusion of both the lateral and vertical dynamics at the contact and an exchange of energy occurs between the normal and the axial directions. The purpose of this paper is to assess the role of the mode-coupling and falling-friction mechanisms in curve squeal through the use of a simple approach based on practical parameter values representative of an actual situation. A tramway wheel is adopted to study the effect of the adhesion coefficient, the lateral contact position, the contact angle and the damping ratio. Cases corresponding to both inner and outer wheels in the curve are considered and it is shown that there are situations in which both wheels can squeal due to mode coupling. Additionally, a negative slope is introduced in the friction curve while keeping active the vertical dynamics in order to analyse both mechanisms together. It is shown that, in the presence of mode coupling, the squealing frequency can differ from the natural frequency of either of the coupled wheel modes. Moreover, a phase difference between wheel vibration in the vertical and lateral directions is observed as a characteristic of mode coupling. For both these features a qualitative comparison is shown with field measurements which show the same behaviour.
Kalyan Mondal
2014-12-01
Teacher recruitment is a multi-criteria group decision-making process involving subjectivity, imprecision, and fuzziness that can be suitably represented by neutrosophic sets. A neutrosophic set, a generalization of fuzzy sets, is characterized by a truth-membership function, a falsity-membership function and an indeterminacy-membership function. These functions are real standard or non-standard subsets of ]0−, 1+[. There is no restriction on the sum of the functions, so the sum lies in ]0−, 3+[. A neutrosophic approach is a more general and suitable way to deal with imprecise information than a fuzzy set. The purpose of this study is to develop a neutrosophic multi-criteria group decision-making model based on hybrid score-accuracy functions for teacher recruitment in higher education. Eight criteria obtained from expert opinions are considered for the recruitment process: academic performance index, teaching aptitude, subject knowledge, research experience, leadership quality, personality, management capacity, and personal values. In this paper we use the score and accuracy functions and the hybrid score-accuracy functions of single-valued neutrosophic numbers (SVNNs) and a ranking method for SVNNs. Then, a multi-criteria group decision-making method with unknown weights for attributes and incompletely known weights for decision makers is applied, based on the hybrid score-accuracy functions under single-valued neutrosophic environments. A weighting model for attributes based on the hybrid score-accuracy functions is used to derive the weights of decision makers and attributes from the decision matrices, represented in the form of SVNNs, to decrease the effect of unreasonable evaluations. Moreover, the overall evaluation formulae of the weighted hybrid score-accuracy functions for each alternative are used to rank the alternatives and recruit the most desirable teachers. Finally, an educational problem for teacher selection is
Convergence Analysis of a Class of Computational Intelligence Approaches
Junfeng Chen
2013-01-01
Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, it is difficult to analyze their convergence. In this paper, a computational model is built for a class of computational intelligence approaches, represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate, respectively, the variety and the optimality of the solution sets generated during the search process of the model. Moreover, four types of probabilistic convergence for the solution-set updating sequences are given, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model, by introducing martingale theory into the Markov chain analysis.
Simplified tritium permeation model
Longhurst, G.R.
1993-01-01
In this model I seek to provide a simplified approach to solving permeation problems addressed by TMAP4. I assume that there are m one-dimensional segments with thicknesses L_i, i = 1, 2, ..., m, joined in series, with an implantation flux J_i implanting at a single depth, δ, in the first segment. From material properties and heat transfer considerations, I calculate temperatures at each face of each segment, and from those temperatures I find local diffusivities and solubilities. I assume the recombination coefficients K_r1 and K_r2 are known at the upstream and downstream faces, respectively, but the model will generate Baskes recombination coefficient values on demand. Here I first develop the steady-state concentration equations and then show how trapping considerations can lead to good estimates of permeation transient times.
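In the diffusion-limited steady state, segments joined in series combine like resistances. A sketch of that textbook estimate for Sieverts-law materials (a generic simplification with hypothetical parameter values, not the TMAP4 or Longhurst equations themselves):

```python
def permeation_flux(p_up, p_down, layers):
    """Steady-state, diffusion-limited flux through m slabs in series
    for Sieverts-law materials. layers = [(L, D, K), ...] with
    thickness L [m], diffusivity D [m^2/s], and Sieverts solubility
    K [mol/(m^3 Pa^0.5)]; p_up, p_down are partial pressures [Pa]."""
    # each slab contributes a series "resistance" L / (D * K)
    resistance = sum(L / (D * K) for L, D, K in layers)
    # driving force is the difference of square-root pressures
    return (p_up**0.5 - p_down**0.5) / resistance
```

For a single slab this reduces to the familiar J = D K (√p_up − √p_down) / L; adding segments simply adds their resistances, which is what makes the series formulation convenient for quick estimates.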
An Integrated Computer-Aided Approach for Environmental Studies
Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia
1997-01-01
A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.
Cosmological helium production simplified
Bernstein, J.; Brown, L.S.; Feinberg, G.
1988-01-01
We present a simplified model of helium synthesis in the early universe. The purpose of the model is to explain clearly the physical ideas relevant to cosmological helium synthesis, in a manner that does not overlay these ideas with complex computer calculations. The model closely follows the standard calculation, except that it neglects the small effect of Fermi-Dirac statistics for the leptons. We also neglect the temperature difference between photons and neutrinos during the period in which neutrons and protons interconvert. These approximations allow us to express the neutron-proton conversion rates in a closed form, which agrees to 10% accuracy or better with the exact rates. Using these analytic expressions for the rates, we reduce the calculation of the neutron-proton ratio as a function of temperature to a simple numerical integral. We also estimate the effect of neutron decay on the helium abundance. Our result for this quantity agrees well with precise computer calculations. We use our semi-analytic formulas to determine how the predicted helium abundance varies with such parameters as the neutron lifetime, the baryon-to-photon ratio, the number of neutrino species, and a possible electron-neutrino chemical potential. 19 refs., 1 fig., 1 tab
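The starting point of any such calculation is the weak-equilibrium neutron-to-proton ratio and its relation to the helium abundance, in standard textbook form (not quoted from the paper itself):

```latex
\left.\frac{n}{p}\right|_{\mathrm{eq}}
  = \exp\!\left(-\frac{\Delta m\,c^{2}}{k_{B}T}\right),
\qquad \Delta m\,c^{2} \simeq 1.293\ \mathrm{MeV},
\qquad
Y_{p} \simeq \frac{2\,(n/p)}{1 + (n/p)},
```

where the second expression gives the primordial helium mass fraction from the frozen-out (and decay-corrected) ratio. The model's numerical integral tracks how n/p departs from this equilibrium value as the conversion rates fall behind the expansion.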
A uniform approach for programming distributed heterogeneous computing systems.
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-12-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
Digging deeper on "deep" learning: A computational ecology approach.
Buscema, Massimo; Sacco, Pier Luigi
2017-01-01
We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.
Computational experiment approach to advanced secondary mathematics curriculum
Abramovich, Sergei
2014-01-01
This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...
Spengler, C.
2012-01-01
The objective of this work is to provide adequate models in the MEDICIS code for the molten corium concrete interaction (MCCI) phase of a severe accident. The multidimensional distribution of heat fluxes from the molten corium pool to the sidewall and bottom concrete structures in the reactor pit and to the top surface is a persistent subject of international research on MCCI. In recent experiments with internally heated oxide melts, it was observed that the erosion progress may be anisotropic, with an apparent preference for the sidewall over the bottom wall, or isotropic, depending on the type of concrete with which the corium interacts. The lumped-parameter code MEDICIS, which is part of the severe accident codes ASTEC and COCOSYS (developed and used at IRSN/GRS and at GRS, respectively), is dedicated to simulating the phenomenology of MCCI. In this work a simplified model in MEDICIS is tested to account for the observed ablation behaviour during MCCI, with a focus on the heat transfer to the top surface under flooded conditions. This approach is assessed by calculations for selected MCCI experiments involving top flooding of the melt. (orig.)
Computational biomechanics for medicine new approaches and new applications
Miller, Karol; Wittek, Adam; Nielsen, Poul
2015-01-01
The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.
Teruel, Jose R; Goa, Pål E; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F
2016-11-01
Purpose To evaluate the relative change of the apparent diffusion coefficient (ADC) at low- and medium-b-value regimens as a surrogate marker of microcirculation, to study its correlation with dynamic contrast agent-enhanced (DCE) magnetic resonance (MR) imaging-derived parameters, and to assess its potential for differentiation between malignant and benign breast tumors. Materials and Methods Ethics approval and informed consent were obtained. From May 2013 to June 2015, 61 patients diagnosed with either malignant or benign breast tumors were prospectively recruited. All patients were scanned with a 3-T MR imager, including diffusion-weighted imaging (DWI) and DCE MR imaging. Parametric analysis of DWI and DCE MR imaging was performed, including a proposed marker, relative enhanced diffusivity (RED). Spearman correlation was calculated between DCE MR imaging and DWI parameters, and the potential of the different DWI-derived parameters for differentiation between malignant and benign breast tumors was analyzed by dividing the sample into equally sized training and test sets. Optimal cut-off values were determined with receiver operating characteristic curve analysis in the training set, which were then used to evaluate the independent test set. Results RED had a Spearman rank correlation of 0.61 with the initial area under the curve calculated from DCE MR imaging. Furthermore, RED differentiated cancers from benign tumors with an overall accuracy of 90% (27 of 30) on the test set with 88.2% (15 of 17) sensitivity and 92.3% (12 of 13) specificity. Conclusion This study presents promising results introducing a simplified approach to assess results from a DWI protocol sensitive to the intravoxel incoherent motion effect by using only three b values. This approach could potentially aid in the differentiation, characterization, and monitoring of breast pathologies. © RSNA, 2016 Online supplemental material is available for this article.
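The RED marker described above can be sketched numerically. The following is a minimal illustration, assuming RED is the relative drop between an ADC computed from a low-b signal pair (perfusion-sensitive) and one computed from a medium-b pair (diffusion-dominated); the specific b-values (0, 200, 700 s/mm²) and the exact formula are assumptions for illustration, not taken from the article.

```python
import math

def adc(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient from two signals (mono-exponential model)."""
    return math.log(s_low / s_high) / (b_high - b_low)

def red(s_b0, s_b_mid, s_b_high, b_mid=200.0, b_high=700.0):
    """Hypothetical relative enhanced diffusivity (%): relative excess of the
    low-b ADC (includes IVIM/perfusion effect) over the medium-b ADC."""
    adc_low = adc(s_b0, s_b_mid, 0.0, b_mid)         # perfusion-sensitive regime
    adc_mid = adc(s_b_mid, s_b_high, b_mid, b_high)  # mostly pure diffusion
    return 100.0 * (adc_low - adc_mid) / adc_low

# Synthetic signal intensities with a strong perfusion component at low b:
red_value = red(1000.0, 750.0, 500.0)
```

A high RED would then indicate a strong perfusion (microcirculation) contribution at low b-values, which is the behaviour the marker is meant to capture.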
Le Bihan, Guillaume; Payrastre, Olivier; Gaume, Eric; Moncoulon, David; Pons, Frédéric
2017-11-01
Up to now, flash flood monitoring and forecasting systems, based on rainfall radar measurements and distributed rainfall-runoff models, generally aimed at estimating flood magnitudes - typically discharges or return periods - at selected river cross sections. The approach presented here goes one step further by proposing an integrated forecasting chain for the direct assessment of flash flood possible impacts on inhabited areas (number of buildings at risk in the presented case studies). The proposed approach includes, in addition to a distributed rainfall-runoff model, an automatic hydraulic method suited for the computation of flood extent maps on a dense river network and over large territories. The resulting catalogue of flood extent maps is then combined with land use data to build a flood impact curve for each considered river reach, i.e. the number of inundated buildings versus discharge. These curves are finally used to compute estimated impacts based on forecasted discharges. The approach has been extensively tested in the regions of Alès and Draguignan, located in the south of France, where well-documented major flash floods recently occurred. The article presents two types of validation results. First, the automatically computed flood extent maps and corresponding water levels are tested against rating curves at available river gauging stations as well as against local reference or observed flood extent maps. Second, a rich and comprehensive insurance claim database is used to evaluate the relevance of the estimated impacts for some recent major floods.
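The impact-curve step described above (number of inundated buildings versus discharge) reduces, at forecast time, to a lookup with interpolation. A minimal sketch, with an entirely hypothetical curve for one river reach:

```python
from bisect import bisect_left

# Hypothetical impact curve for one river reach, derived offline from the
# catalogue of flood extent maps combined with land use data:
# discharge (m^3/s) -> number of buildings inundated.
DISCHARGES = [50.0, 100.0, 200.0, 400.0, 800.0]
BUILDINGS  = [0.0,   12.0,  60.0, 180.0, 420.0]

def impact(q):
    """Linearly interpolate the number of buildings at risk for a forecast discharge q."""
    if q <= DISCHARGES[0]:
        return BUILDINGS[0]
    if q >= DISCHARGES[-1]:
        return BUILDINGS[-1]
    i = bisect_left(DISCHARGES, q)
    q0, q1 = DISCHARGES[i - 1], DISCHARGES[i]
    b0, b1 = BUILDINGS[i - 1], BUILDINGS[i]
    return b0 + (b1 - b0) * (q - q0) / (q1 - q0)
```

In an operational chain, `impact` would be evaluated for every reach on the dense river network each time the rainfall-runoff model issues new forecast discharges.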
Computer based approach to fatigue analysis and design
Comstock, T.R.; Bernard, T.; Nieb, J.
1979-01-01
An approach is presented which uses a mini-computer based system for data acquisition, analysis and graphic displays relative to fatigue life estimation and design. Procedures are developed for identifying and eliminating damaging events due to overall duty cycle, forced vibration and structural dynamic characteristics. Two case histories, weld failures in heavy vehicles and low cycle fan blade failures, are discussed to illustrate the overall approach. (orig.)
Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Álvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2018-03-01
Simplified analytical models with predictive capability enable simpler and faster optimization of the performance in applications of complex photonic devices. We recently demonstrated the most simplified analytical model still showing predictive capability for parallel-aligned liquid crystal on silicon (PA-LCoS) devices, which provides the voltage-dependent retardance for a very wide range of incidence angles and any wavelength in the visible. We further show that the proposed model is not only phenomenological but also physically meaningful, since two of its parameters provide the correct values for important internal properties of these devices related to the birefringence, cell gap, and director profile. Therefore, the proposed model can be used as a means to inspect internal physical properties of the cell. As an innovation, we also show the applicability of the split-field finite-difference time-domain (SF-FDTD) technique for phase-shift and retardance evaluation of PA-LCoS devices under oblique incidence. As a simplified model for PA-LCoS devices, we also consider the exact description of homogeneous birefringent slabs. However, we show that, despite its higher degree of simplification, the proposed model is more robust, providing unambiguous and physically meaningful solutions when fitting its parameters.
Computer and Internet Addiction: Analysis and Classification of Approaches
Zaretskaya O.V.
2017-08-01
A theoretical analysis of modern research on the problem of computer and Internet addiction is carried out. The main features of different approaches are outlined. An attempt is made to systematize the research conducted and to classify scientific approaches to the problem of Internet addiction. The author distinguishes nosological, cognitive-behavioral, socio-psychological and dialectical approaches. She justifies the need to use an approach that corresponds to the essence, goals and tasks of social psychology when researching both the problem of Internet addiction and dependent behavior in general. In the opinion of the author, this dialectical approach integrates the experience of research within the framework of the socio-psychological approach and focuses on the observed inconsistencies in the phenomenon of Internet addiction - the compensatory nature of Internet activity, when people who are interested in the Internet are in a dysfunctional life situation.
Pedagogical Approaches to Teaching with Computer Simulations in Science Education
Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael
2013-01-01
For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is
Soltani, Sara
investigate the sensitivity and robustness of the reconstruction to variations of the scale and orientation in the training images and we suggest algorithms to estimate the correct relative scale and orientation of the unknown image to the training images from the data....... formulation in [22] enforces that the solution is an exact representation by the dictionary; in this report, we investigate this requirement. Furthermore, the underlying assumption that the scale and orientation of the training images are consistent with the unknown image of interest may not be realistic. We...
Cloud Computing - A Unified Approach for Surveillance Issues
Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.
2017-08-01
Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location, through networks. Cloud computing is gradually replacing the traditional Information Technology infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal or sensitive information is being stored by the organization. It is indeed true that, today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.
Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach
Grover Kearns
2010-06-01
Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft or destruction of intellectual property, and fraud. Education of accountants to use forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using perceptions of the students about the success of the methodology and their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.
Cloud computing approaches to accelerate drug discovery value chain.
Garg, Vibhav; Arora, Suchir; Gupta, Chitra
2011-12-01
Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is again posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good-to-have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery Cloud would fit best to manage drug discovery and clinical development data, generated using advanced HTS techniques, hence supporting the vision of personalized medicine.
A computational approach to chemical etiologies of diabetes
Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe
2013-01-01
Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic linkages in type 2 diabetes (T2D) through genome-wide associations, disease similarities, and published empirical evidence. Ten environmental chemicals were found to be potentially linked to T2D; the highest scores were observed for arsenic, 2,3,7,8-tetrachlorodibenzo-p-dioxin, hexachlorobenzene...
Cucco, Andrea; Umgiesser, Georg
2015-09-15
In this work, we investigated if the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed for both simplified test cases and a realistic domain: the Venice Lagoon. The results confirmed the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both the TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful to avoid any misleading interpretation due to the evaluation of the basin renewal features just from an Eulerian only or from a Lagrangian only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.
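The distinction between Eulerian and Lagrangian transport time scales can be illustrated on a toy 1-D channel, where the two views already disagree for uniformly seeded particles; the "trapping index" below is written as the simple ratio of the two time scales, which is an illustrative stand-in and not necessarily the paper's exact formulation.

```python
# Toy 1-D channel of length L (m) with uniform along-channel velocity u (m/s).
L, u = 1000.0, 0.1

# Eulerian transport time scale: for a uniform channel, volume / flow-through
# rate reduces to the flushing time L / u.
t_eulerian = L / u

# Lagrangian residence time: mean travel time to the outlet of particles
# seeded uniformly along the channel (midpoints of n equal segments).
n = 10
particles = [(i + 0.5) * L / n for i in range(n)]
t_lagrangian = sum((L - x) / u for x in particles) / n

# Illustrative trapping index: ratio of Lagrangian to Eulerian time scale.
# Values far from 1 flag cells where the two approaches disagree.
trapping_index = t_lagrangian / t_eulerian
```

Even in this trivially simple flow the two estimates differ by a factor of two, which is the kind of discrepancy that motivates computing both TTS fields and combining them.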
Simplified Stability Criteria for Delayed Neutral Systems
Xinghua Zhang
2014-01-01
For a class of linear time-invariant neutral systems with neutral and discrete constant delays, several existing asymptotic stability criteria in the form of linear matrix inequalities (LMIs) are simplified by using matrix analysis techniques. Compared with the original stability criteria, the simplified ones include fewer LMI variables, which can obviously reduce computational complexity. Simultaneously, it is theoretically shown that the simplified stability criteria and the original ones are equivalent; that is, they have the same conservativeness. Finally, a numerical example is employed to verify the theoretical results investigated in this paper.
Haverkamp, B.; Krone, J.; Shybetskyi, I.
2013-01-01
The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with respective long-term calculations. In addition to their value with regard to verification of more complex models, which is important to increase confidence in the overall results, such simplified models can also offer the possibility to carry out time-consuming calculations like probabilistic calculations or detailed sensitivity analysis in an economic manner. (authors)
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
WSRC approach to validation of criticality safety computer codes
Finch, D.R.; Mincey, J.F.
1991-01-01
Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should (1) be repeatable, (2) be demonstrated with defined confidence, and (3) identify the range of neutronic conditions (area of applicability) for which they are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
Archiving Software Systems: Approaches to Preserve Computational Capabilities
King, T. A.
2014-12-01
A great deal of effort is made to preserve scientific data. Not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating system and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserve computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
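The "appropriate metadata" mentioned above can be pictured as a machine-readable record describing the archived artifact and the technology stack needed to revive it. The field names below are illustrative, not a formal archival standard:

```python
import json

# Minimal metadata record accompanying an archived software system
# (container image plus the stack it depends on). All values are invented.
record = {
    "name": "orbit-analysis-pipeline",
    "version": "2.3.1",
    "artifact": "container-image.tar",
    "stack": {
        "base_os": "debian:11",
        "runtime": ["python-3.9", "gdal-3.2"],
        "hardware": {"arch": "x86_64", "min_ram_gb": 4},
    },
    "data_inputs": ["level2-telemetry"],
    "entry_point": "run_pipeline.sh",
}

# Serialized deterministically so the record itself is archive-stable.
serialized = json.dumps(record, indent=2, sort_keys=True)
```

A future curator could use such a record to select a compatible virtual machine or container runtime and re-execute the pipeline against the preserved data.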
A SURVEY ON DOCUMENT CLUSTERING APPROACH FOR COMPUTER FORENSIC ANALYSIS
Monika Raghuvanshi*, Rahul Patel
2016-01-01
In a forensic analysis, large numbers of files are examined. Much of the information is in unstructured format, so it is quite a difficult task for a computer forensic examiner to perform such analysis. Performing the forensic analysis of documents within a limited period of time therefore requires a special approach such as document clustering. This paper reviews different document clustering algorithms and methodologies, for example K-means, K-medoids, single link, complete link, and average link, in accordance...
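The K-means approach named above can be shown end-to-end on a toy corpus: documents become term-frequency vectors and are partitioned by nearest centroid. The corpus, the plain TF representation, and the deterministic farthest-point initialization are all simplifications for illustration (real forensic tools use richer features such as TF-IDF and k-means++ seeding).

```python
import math

def tf_vector(text, vocab):
    """Term-frequency vector for one document over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(t) / len(words) for t in vocab]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=20):
    # Deterministic farthest-point initialization instead of random seeding.
    centroids = [vectors[0]]
    while len(centroids) < k:
        centroids.append(max(vectors, key=lambda v: min(dist(v, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist(v, centroids[i]))].append(v)
        centroids = [[sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return [min(range(k), key=lambda i: dist(v, centroids[i])) for v in vectors]

docs = [
    "invoice payment transfer account account",
    "payment invoice account wire transfer",
    "meeting schedule project deadline review",
    "project review meeting schedule notes",
]
vocab = sorted(set(" ".join(docs).split()))
labels = kmeans([tf_vector(d, vocab) for d in docs], k=2)
```

On this corpus the two "financial" documents and the two "meeting" documents end up in separate clusters, which is exactly the triage behaviour an examiner wants when facing thousands of seized files.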
Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding
2017-04-01
Nitrogen oxides (NOX) and sulfur dioxide (SO2) emissions from coal combustion, which are oxidized quickly in the atmosphere, resulting in secondary aerosol formation and acid deposition, are the main sources of China's regional fog-haze pollution. Extensive literature has estimated quantitatively the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to find out the time range during which actual emissions can be derived from satellite observations, the spatial distribution characteristics of mean daily, monthly, seasonal and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by the area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and verify the effectiveness of flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants during the period of 10 years from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentration in the vicinity of a large coal-burning source was affected not only by the emissions of the source itself, but also closely related to pollutant transmission and diffusion caused by meteorological factors in different seasons. Our proposed
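The two computational ingredients above - area-weighted averaging of satellite pixels onto the 1-km grid, and a gradient-based effective radius - can be sketched as follows. The radial profile, the gradient threshold, and the exact radius criterion are invented for illustration; the paper defines its radius from the actual OMI concentration gradients.

```python
def area_weighted_mean(values, areas):
    """Mean pixel concentration for one grid point, weighted by the area of
    each satellite pixel overlapping that point."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

# Illustrative radial NO2 profile around a point source (arbitrary units),
# sampled every 1 km outward from the source on the analysis grid.
profile = [20.0, 18.5, 15.0, 10.0, 6.0, 3.5, 2.5, 2.2, 2.1, 2.05, 2.02]

def effective_radius(profile, grad_threshold=0.5):
    """Distance (km) at which the outward concentration gradient flattens,
    used here as a stand-in for the paper's gradient-based radius definition."""
    for r in range(1, len(profile)):
        if abs(profile[r] - profile[r - 1]) < grad_threshold:
            return r
    return len(profile) - 1

radius_km = effective_radius(profile)
```

Concentrations integrated inside `radius_km` would then be attributed to the source itself, separating its signal from the heterogeneous polluted background.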
Brennan, Barbara J; Lemenuel-Diot, Annabelle; Snoeck, Eric; McKenna, Michael; Solsky, Jonathan; Wat, Cynthia; Mallalieu, Navita L
2016-04-01
The aim of the study was to simplify the dosing regimen of peginterferon alfa-2a in paediatric patients with chronic hepatitis C. A population pharmacokinetic (PK) model was developed using PK data from 14 children aged 2-8 years and 402 adults. Simulations were produced to identify a simplified dosing regimen that would provide exposures similar to those observed in the paediatric clinical trials and in the range known to be safe/efficacious in adults. Model predictions were evaluated against observed adult and paediatric data to reinforce confidence in the proposed dosing regimen. The final model was a two-compartment model with a zero-order resorption process. Covariates included a linear influence of body surface area (BSA) on apparent oral clearance (CL/F) and a linear influence of body weight on apparent volume of distribution of the central compartment (V1/F). A simplified dosing regimen was developed which is expected to provide exposures in children aged ≥5 years similar to the dosing formula used in the paediatric clinical trial and within the range that is safe/efficacious in adults. This simplified regimen is approved in the EU and in other countries for the treatment of chronic hepatitis C in treatment-naive children/adolescents aged ≥5 years in combination with ribavirin. Pre-existing adult PK data were combined with relatively limited paediatric PK data to develop a PK model able to predict exposure in both populations adequately. This provided increased confidence in characterizing PK in children and helped in the development of a simplified dosing regimen of peginterferon alfa-2a in paediatric patients. © 2015 The British Pharmacological Society.
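The covariate structure described above (linear BSA effect on CL/F, linear weight effect on V1/F) implies that scaling the dose with BSA keeps steady-state exposure constant across body sizes, which is the logic behind a simplified paediatric regimen. All parameter values below are invented for demonstration; they are not the published estimates.

```python
# Illustrative covariate model (invented reference values and parameters).
REF_BSA, REF_WT = 1.8, 70.0   # reference body surface area (m^2) and weight (kg)
CL_POP, V1_POP = 0.06, 8.0    # population CL/F (L/h) and V1/F (L) at the reference

def cl_f(bsa):
    """Apparent clearance with a linear BSA covariate."""
    return CL_POP * (bsa / REF_BSA)

def v1_f(weight):
    """Apparent central volume with a linear body-weight covariate."""
    return V1_POP * (weight / REF_WT)

def weekly_auc(dose_ug, bsa):
    """Steady-state weekly exposure: AUC = dose / CL over a 168-h interval."""
    return dose_ug / (cl_f(bsa) * 168.0)

# Scaling the adult dose by BSA gives a child the same weekly exposure:
adult_auc = weekly_auc(180.0, 1.8)
child_auc = weekly_auc(180.0 * (1.0 / 1.8), 1.0)   # child with BSA 1.0 m^2
```

Because AUC depends only on dose/CL and CL is linear in BSA, the BSA-proportional dose cancels the size effect exactly in this sketch, mirroring how simulations can justify a simplified dosing table.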
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
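The MCMC machinery described above can be illustrated with a toy version: a single damage parameter, a cheap linear "surrogate" in place of the finite element model, and a plain Metropolis sampler (not DRAM). Everything here - the surrogate, data, noise level, and prior bounds - is invented to show the mechanics only.

```python
import math, random

def log_posterior(x, data, sigma=0.2):
    """Log posterior for one damage parameter x (e.g. crack length):
    Gaussian likelihood around a cheap surrogate prediction, flat prior on [0, 10]."""
    if not 0.0 <= x <= 10.0:
        return -math.inf
    predicted = 3.0 * x   # linear surrogate, standing in for a costly FEM solve
    return -sum((d - predicted) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n=20000, step=0.3, seed=1):
    """Plain Metropolis sampler; returns the second half of the chain."""
    rng = random.Random(seed)
    x, lp = 5.0, log_posterior(5.0, data)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_posterior(cand, data)
        if math.log(rng.random()) < lp_cand - lp:   # accept / reject
            x, lp = cand, lp_cand
        samples.append(x)
    return samples[n // 2:]   # discard burn-in

data = [6.1, 5.9, 6.05]       # noisy strain readings; surrogate implies x ~ 2.0
samples = metropolis(data)
estimate = sum(samples) / len(samples)
```

The posterior mean recovers the damage parameter while the sample spread quantifies its uncertainty; the paper's contributions (sparse-grid surrogate, weighted likelihood, DRAM) are speedups and robustness layers on top of exactly this loop.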
Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models
Cai, Caifang
2013-01-01
Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performances of the proposed approach are analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Computational Approaches for Integrative Analysis of the Metabolome and Microbiome
Jasmine Chong
2017-11-01
The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics) to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.
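The simplest integration strategy mentioned above - statistical correlation between a taxon's abundance and a metabolite's level - can be sketched with a self-contained Spearman rank correlation (the paired measurements below are hypothetical):

```python
def rank(values):
    """Average ranks, with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired samples: abundance of one taxon vs. one metabolite level.
taxon      = [0.01, 0.08, 0.12, 0.30, 0.55]
metabolite = [2.1,  3.4,  3.9,  7.2,  9.8]
rho = spearman(taxon, metabolite)
```

In a real study this would be computed over all taxon-metabolite pairs with multiple-testing correction, and significant pairs would feed the network-based modeling approaches discussed in the review.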
Costa, E.B. da
1992-09-01
The present work selected equations and empirical relationships from the available bibliography to develop a computer code that obtains the turbulent velocity and temperature profiles in liquid metal tube flow with heat generation. The computer code is applied to a standard problem and the results are considered satisfactory, at least from the viewpoint of qualitative behaviour. (author). 50 refs, 21 figs, 3 tabs.
Giovanella, L.; De Palma, D.; Ceriani, L.; Garancini, S. [Azienda Ospedaliera Universitaria, Ospedale di Circolo e Fondazione Macchi, Dipt. di Diagnostica per Immagini e Radioterapia, Unita' Operativa di Medicina Nucleare, Varese (Italy); Vanoli, P.; Tordiglione, M. [Azienda Ospedaliera Universitaria, Ospedale di Circolo e Fondazione Macchi, Unita' Operativa di Radioterapia, Varese (Italy); Tarolo, G. L. [Milan Univ., Milan (Italy). Cattedra di Medicina Nucleare, Ist. di Scienze Radiologiche
2000-12-01
This article evaluates the clinical effectiveness of a simplified dosimetric approach to the iodine-131 treatment of hyperthyroidism due to Graves' disease or uninodular and multinodular toxic goiter. 189 patients with biochemically confirmed hyperthyroidism were enrolled; thyroid ultrasonography and scintigraphy yielded a diagnosis of Graves' disease in 43 patients, uninodular toxic goiter in 57 patients and multinodular toxic goiter in 89 patients. Cold thyroid nodules were found in 28 patients, and fine-needle aspiration showed negative cytology for thyroid malignancy in all cases. Antithyroid drugs were stopped 5 days before radioiodine administration and, if necessary, restored 15 days after the treatment. A radioiodine uptake test was performed in all patients and the therapeutic activity was calculated to obtain a minimal activity of 185 MBq in the thyroid 24 hours after administration. The activity was adjusted on the basis of clinical, biochemical and imaging data so that the activity retained in the thyroid at 24 hours did not exceed 370 MBq. Biochemical and clinical tests were scheduled at 3 and 12 months post-treatment, and thyroxine treatment was started when hypothyroidism occurred. In Graves' disease patients a mean activity of 370 MBq (range 259-555 MBq) was administered. Three months after treatment, and at least 15 days after methimazole discontinuation, 32 of 43 (74%) patients were hypothyroid, 5 of 43 (11%) euthyroid and 6 of 43 (15%) hyperthyroid. Three of the latter were immediately submitted to a new radioiodine administration, while the 32 hypothyroid patients received thyroxine treatment. One year after the radioiodine treatment no patient had hyperthyroidism; 38 of 43 (89%) were on replacement treatment while 5 (11%) remained euthyroid. In uni- and multinodular toxic goiter a mean activity of 444 MBq (range 259-555 MBq) was administered. Three months post-treatment 134 of 146 (92%) patients were
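Under the dosimetric scheme described (a target activity to be retained in the thyroid at 24 hours, divided by the measured 24-hour uptake fraction), the administered activity can be sketched as follows. This is our reading of the protocol, not code from the study.

```python
def administered_activity(target_retained_mbq, uptake_24h):
    # activity to administer so that the thyroid retains the target amount
    # 24 h later, given the measured fractional uptake (0 < uptake_24h <= 1)
    return target_retained_mbq / uptake_24h
```

For example, retaining the minimal 185 MBq with a measured 50% uptake requires administering 370 MBq; the clinical adjustment described above then keeps the retained activity at or below 370 MBq.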
A Computer Vision Approach to Identify Einstein Rings and Arcs
Lee, Chien-Hsiu
2017-03-01
Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at every position angle, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
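A minimal, pure-Python sketch of the circle Hough transform that underlies the approach: each edge pixel votes for all candidate centres at a fixed radius, and the true centre accumulates the most votes. A real pipeline would use an optimized implementation (e.g. scikit-image's hough_circle) over a range of radii; the grid size and voting scheme here are simplified assumptions.

```python
import math

def hough_circle(edge_points, radius, width, height, n_theta=360):
    # each edge point votes for all candidate centres (a, b) lying at
    # distance `radius` from it; the true centre accumulates the most votes
    acc = {}
    for (x, y) in edge_points:
        for t in range(n_theta):
            th = 2 * math.pi * t / n_theta
            a = round(x - radius * math.cos(th))
            b = round(y - radius * math.sin(th))
            if 0 <= a < width and 0 <= b < height:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc, key=acc.get)
```

Feeding in edge pixels sampled from a synthetic ring of radius 20 centred at (50, 50) recovers a centre within a pixel of the truth.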
SPINET: A Parallel Computing Approach to Spine Simulations
Peter G. Kropf
1996-01-01
Research in scientific programming enables us to realize more and more complex applications while, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches are becoming more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed in the Lagrange formalism using Maple.
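The conjugate gradient method mentioned above can be sketched in a few lines for dense symmetric positive-definite systems. The article's implementation is parallel C on the MUSIC machine and operates on sparse matrices; this serial Python version only illustrates the algorithm.

```python
def cg(A, b, tol=1e-10, max_iter=1000):
    # solve A x = b for symmetric positive-definite A (dense list-of-lists)
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]                      # search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:          # converged: squared residual norm small
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, which is why it suits the large sparse systems arising from finite element discretizations.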
Computer-oriented approach to fault-tree construction
Salem, S.L.; Apostolakis, G.E.; Okrent, D.
1976-11-01
A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the Computer Automated Tree (CAT) program, to several systems. A means of representing component behavior by decision tables is presented. The method developed allows the modeling of components with various combinations of electrical, fluid and mechanical inputs and outputs. Each component can have multiple internal failure mechanisms which combine with the states of the inputs to produce the appropriate output states. The generality of this approach allows the modeling not only of hardware, but of human actions and interactions as well. A procedure for constructing and editing fault trees, either manually or by computer, is described. The techniques employed result in a complete fault tree, in standard form, suitable for analysis by current computer codes. Methods of describing the system, defining boundary conditions and specifying complex TOP events are developed in order to set up the initial configuration for which the fault tree is to be constructed. The approach used allows rapid modification of the decision tables and systems to facilitate the analysis and comparison of various refinements and changes in the system configuration and component modeling.
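A decision-table component model of the kind described can be illustrated as follows. The pump component, its state names, and the back-tracing helper are our invented example, not taken from the CAT program.

```python
# decision table for a pump: (power input, internal mode) -> output state
PUMP_TABLE = {
    ("on",  "working"): "flow",
    ("on",  "failed"):  "no_flow",
    ("off", "working"): "no_flow",
    ("off", "failed"):  "no_flow",
}

def pump_output(power, mode):
    # forward evaluation: input states + internal failure mode -> output
    return PUMP_TABLE[(power, mode)]

def causes_of(output):
    # back-trace: which input/failure combinations produce a given state;
    # fault-tree construction works backwards from the undesired TOP event
    return [combo for combo, out in PUMP_TABLE.items() if out == output]
```

Chaining such back-traces through connected components is, in essence, how a fault tree is grown from a TOP event down to basic failure events.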
Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches
Perrin H. Beatty
2016-10-01
A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.
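Flux balance analysis, one of the computational approaches discussed, finds a steady-state flux distribution that maximizes an objective such as growth. A toy sketch follows for an invented three-reaction network; a brute-force grid search stands in for the linear programming used in real FBA, and the network and bounds are our assumptions.

```python
def fba_grid(uptake_max, maintenance, steps=100):
    # toy network: nutrient -v1-> A, A -v2-> biomass, A -v3-> byproduct.
    # steady state for metabolite A forces v2 = v1 - v3; maximize growth v2
    # subject to 0 <= v1 <= uptake_max and v3 >= maintenance.
    # (a brute-force grid stands in for the linear program of real FBA)
    best = None
    for i in range(steps + 1):
        v1 = uptake_max * i / steps
        for j in range(steps + 1):
            v3 = uptake_max * j / steps
            v2 = v1 - v3
            if v2 < 0 or v3 < maintenance:
                continue
            if best is None or v2 > best["growth"]:
                best = {"uptake": v1, "growth": v2, "byproduct": v3}
    return best
```

With an uptake bound of 10 and a maintenance drain of 1, the optimum pushes uptake to its bound and the byproduct flux to its minimum, giving a growth flux of 9.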
A comparative approach to closed-loop computation.
Roth, E; Sponberg, S; Cowan, N J
2014-04-01
Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology (that is, how information can flow between subsystems) provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches, from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior.
Computational approaches in the design of synthetic receptors - A review.
Cowen, Todd; Karim, Kal; Piletsky, Sergey
2016-09-14
The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies": high-affinity, robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of the monomer(s)-template ratio and selectivity analysis. In this review the various computational methods are discussed with reference to all the relevant literature published since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which it was used. Detailed analysis is given to novel techniques, including analysis of polymer binding sites, the use of novel screening programs and simulations of the MIP polymerisation reaction. Further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology.
Analytical and computational approaches to define the Aspergillus niger secretome
Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.
2009-03-01
We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.
Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches
Dongsheng Che
2014-01-01
High-throughput sequencing technologies have made it possible to study bacteria by analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, it has been discovered that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria; these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed, so that researchers may find them useful for studies of bacterial evolution and pathogenicity mechanisms.
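One detectable PAI property mentioned above, an anomalous genomic signature, can be illustrated with a sliding-window GC-content scan that flags windows deviating strongly from the genome-wide mean. The window size, z-score threshold, and toy genome below are our assumptions; real PAI predictors combine several such signals.

```python
def gc_content(seq):
    # fraction of G and C bases in a sequence
    return sum(1 for c in seq if c in "GC") / len(seq)

def anomalous_windows(genome, window, z_thresh=2.0):
    # flag window start positions whose GC content deviates from the
    # genome mean by more than z_thresh standard deviations
    vals = [gc_content(genome[i:i + window])
            for i in range(0, len(genome) - window + 1, window)]
    n = len(vals)
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5 or 1e-12
    return [i * window for i, v in enumerate(vals)
            if abs(v - mean) / sd > z_thresh]
```

A horizontally acquired, AT-rich island embedded in an otherwise 50%-GC genome stands out as the single flagged window.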
Fast reactor safety and computational thermo-fluid dynamics approaches
Ninokata, Hisashi; Shimizu, Takeshi
1993-01-01
This article provides a brief description of the safety principle on which liquid metal cooled fast breeder reactors (LMFBRs) is based and the roles of computations in the safety practices. A number of thermohydraulics models have been developed to date that successfully describe several of the important types of fluids and materials motion encountered in the analysis of postulated accidents in LMFBRs. Most of these models use a mixture of implicit and explicit numerical solution techniques in solving a set of conservation equations formulated in Eulerian coordinates, with special techniques included to specific situations. Typical computational thermo-fluid dynamics approaches are discussed in particular areas of analyses of the physical phenomena relevant to the fuel subassembly thermohydraulics design and that involve describing the motion of molten materials in the core over a large scale. (orig.)
Benchmarking of computer codes and approaches for modeling exposure scenarios
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
Non-intrusive speech quality assessment in simplified e-model
Vozňák, Miroslav
2012-01-01
The E-model brings a modern approach to the computation of estimated speech quality and allows easy implementation. One of its advantages is that it can be applied in real time. The method is based on a mathematical computation model evaluating transmission path impairments that influence the speech signal, especially delays and packet losses. These parameters, common in an IP network, can affect speech quality dramatically. The paper deals with a proposal for a simplified E-model and its pr...
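A sketch of a simplified E-model in the spirit described, using commonly cited approximations of the ITU-T G.107 delay and packet-loss impairments. The paper's exact simplification is truncated in this record, so the formulas and default parameters below are illustrative, not the authors'.

```python
def r_factor(delay_ms, loss_pct, ie=0.0, bpl=25.1):
    # simplified E-model: R = 93.2 - Id - Ie_eff (ITU-T G.107 style).
    # Id: delay impairment, with an extra penalty above ~177 ms;
    # Ie_eff: effective equipment impairment driven by packet loss,
    # where bpl is the codec's packet-loss robustness factor.
    id_ = 0.024 * delay_ms
    if delay_ms > 177.3:
        id_ += 0.11 * (delay_ms - 177.3)
    ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    return 93.2 - id_ - ie_eff

def mos(r):
    # map the R-factor to an estimated Mean Opinion Score
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

With no delay or loss the model returns the maximum R of 93.2 (MOS ≈ 4.4); 150 ms of delay and 1% loss already cost several R points.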
Approaching multiphase flows from the perspective of computational fluid dynamics
Banas, A.O.
1992-01-01
Thermalhydraulic simulation methodologies based on subchannel and porous-medium concepts are briefly reviewed and contrasted with the general approach of Computational Fluid Dynamics (CFD). An outline of the advanced CFD methods for single-phase turbulent flows is followed by a short discussion of the unified formulation of averaged equations for turbulent and multiphase flows. Some of the recent applications of CFD at Chalk River Laboratories are discussed, and the complementary role of CFD with regard to the established thermalhydraulic methods of analysis is indicated. (author). 8 refs
Chaney, Joel; Liu Hao; Li Jinxing
2012-01-01
Highlights: ► Overview of the overall approach to modelling fixed-bed biomass boilers in CFD. ► Bed sub-models of moisture evaporation, devolatilisation and char combustion reviewed. ► A method of embedding a combustion model in discrete fuel zones within the CFD is suggested. ► Includes a sample of preliminary results for a 50 kW pellet boiler. ► Clear physical trends predicted. - Abstract: The increasing global energy demand and mounting pressures for CO2 mitigation call for more efficient utilization of biomass, particularly for heating domestic and commercial buildings. The authors of the present paper are investigating the optimization of the combustion performance and NOx emissions of a 50 kW biomass pellet boiler fabricated by a UK manufacturer. The boiler has a number of adjustable parameters, including the ratio of the air flow split between the primary and secondary supplies and the orientation, height, direction and number of the secondary inlets. The optimization of these parameters provides opportunities to improve both combustion efficiency and NOx emissions. When used carefully in conjunction with experiments, Computational Fluid Dynamics (CFD) modelling is a useful tool for rapidly, and at minimum cost, examining the combustion performance and emissions of a boiler with multiple variable parameters. However, modelling the combustion and emissions of a small-scale biomass pellet boiler is not trivial, and appropriate fixed-bed models that can be coupled with the CFD code are required. This paper reviews previous approaches specifically relevant to simulating fixed-bed biomass boilers. In the first part it considers approaches to modelling the heterogeneous solid phase and coupling this with the gas phase. The essential components of the sub-models are then overviewed. Importantly, for the optimization process a model is required that strikes a good balance between accuracy in predicting physical trends and low computational run time. Finally, a
Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations
Khan, Junaid Ali; Raja, Muhammad Asif Zahoor; Qureshi, Ijaz Mansoor
2011-01-01
We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search by genetic algorithm and local search by the pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike with other numerical techniques of comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in execution speed. (general)
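A toy version of the approach: a one-neuron trial solution whose functional form enforces the initial condition, an unsupervised residual error over collocation points, and a simple stochastic hill climb standing in for the paper's genetic-algorithm/pattern-search hybrid. The test problem y' = -y, y(0) = 1, the finite-difference derivative, and all parameter choices are our assumptions.

```python
import math, random

def trial(x, w):
    # one-neuron "network": N(x) = w2 * tanh(w0*x + w1);
    # the trial form y(x) = 1 + x*N(x) satisfies y(0) = 1 by construction
    return 1.0 + x * w[2] * math.tanh(w[0] * x + w[1])

def ode_error(w, pts, h=1e-5):
    # unsupervised residual of y' + y = 0 at the collocation points,
    # with a central finite difference standing in for analytic gradients
    err = 0.0
    for x in pts:
        dy = (trial(x + h, w) - trial(x - h, w)) / (2 * h)
        err += (dy + trial(x, w)) ** 2
    return err

def solve(seed=0, iters=4000, step=0.1):
    # stochastic hill climb over the 3 weights (a crude stand-in for the
    # GA + pattern-search hybrid described in the paper)
    random.seed(seed)
    pts = [i / 10 for i in range(11)]        # collocation points on [0, 1]
    w = [random.uniform(-1, 1) for _ in range(3)]
    best = ode_error(w, pts)
    for _ in range(iters):
        cand = [wi + random.gauss(0, step) for wi in w]
        e = ode_error(cand, pts)
        if e < best:
            w, best = cand, e
    return w, best
```

With enough iterations the trained trial solution should approach the exact solution e^(-x) on [0, 1], while remaining a continuous function of x rather than a table of grid values.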
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Moreover, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
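A minimal sketch of screening highly discriminative genes, using a Golub-style signal-to-noise ranking in place of the paper's rough-set machinery (which this record does not detail). The function names and toy data are ours.

```python
def snr_score(expr_a, expr_b):
    # Golub-style signal-to-noise ratio: |mean difference| / (sd_a + sd_b);
    # large values mean the gene separates the two classes well
    def stats(xs):
        m = sum(xs) / len(xs)
        sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
        return m, sd
    ma, sa = stats(expr_a)
    mb, sb = stats(expr_b)
    return abs(ma - mb) / ((sa + sb) or 1e-12)

def top_genes(data_a, data_b, k=2):
    # data_*: dict gene -> expression values in each class;
    # return the k genes that best discriminate the classes
    scores = {g: snr_score(data_a[g], data_b[g]) for g in data_a}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A gene whose expression shifts cleanly between classes outranks one with identical distributions in both.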
A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis
Dilip Swaminathan
2009-01-01
kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.
Morgante, Enrico
2018-01-01
I review the construction of Simplified Models for Dark Matter searches. After discussing the philosophy and some simple examples, I turn the attention to the aspect of the theoretical consistency and to the implications of the necessary extensions of these models.
Error characterization for asynchronous computations: Proxy equation approach
Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath
2017-11-01
Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay-dependent (EA), or asynchronous, error and delay-independent (ES), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
Crowd Computing as a Cooperation Problem: An Evolutionary Approach
Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel
2013-05-01
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
Novel computational approaches for the analysis of cosmic magnetic fields
Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)
2016-07-01
In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing a software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.
Gilmar E. Cerquetani
2006-08-01
The objectives of the present work were to develop a computational routine to solve the Yalin equation and the Shields diagram, and to evaluate a simplified equation for modeling sediment transport capacity in a Dystrophic Hapludox (Latossolo Vermelho Distrófico) that could be used in the Water Erosion Prediction Project - WEPP, as well as in other soil erosion models. Sediment transport capacity for shallow overland flow was represented as a power function of the hydraulic shear stress, which proved to be an approximation to the Yalin equation for sediment transport capacity. The simplified equation could be applied to experimental data from a complex topography. It accurately approximated the Yalin equation when calibrated using the mean hydraulic shear stress. Validation tests using independent data showed that the simplified equation performed well in predicting sediment transport capacity.
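The simplified equation described, sediment transport capacity as a power function of hydraulic shear stress, can be sketched as follows. The 1.5 exponent follows the common WEPP-style form Tc = kt·τ^(3/2), and the calibration step mirrors fitting the coefficient at the mean shear stress; all numeric values below are illustrative, not from the study.

```python
def transport_capacity(tau, kt, exponent=1.5):
    # Tc = kt * tau**1.5: power-law approximation to the Yalin equation
    # (tau: hydraulic shear stress; kt: calibrated transport coefficient)
    return kt * tau ** exponent

def calibrate_kt(tau_mean, tc_obs, exponent=1.5):
    # back out the coefficient from an observed capacity at the mean
    # hydraulic shear stress, as in the calibration described above
    return tc_obs / tau_mean ** exponent
```

Once kt is calibrated at the mean shear stress, the power law reproduces the observed capacity there and extrapolates across the hillslope profile.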
High-performance computational fluid dynamics: a custom-code approach
Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain
2016-01-01
We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
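The Conditional Value-at-Risk objective mentioned above can be estimated from Monte Carlo samples of execution cost as the mean of the worst (1 - alpha) fraction of outcomes. A minimal sample-based estimator follows; the data in the usage example are invented.

```python
def cvar(costs, alpha=0.95):
    # sample CVaR at level alpha: average of the worst (1 - alpha)
    # fraction of simulated execution costs
    xs = sorted(costs)
    k = int(len(xs) * alpha)
    tail = xs[k:] or [xs[-1]]     # guard against an empty tail
    return sum(tail) / len(tail)
```

In the parametric approach described, each candidate strategy's coefficient vector would be scored by simulating many cost realizations and combining their mean with this CVaR estimate.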
Saniz, R.; Xu, Y.; Matsubara, M.; Amini, M. N.; Dixit, H.; Lamoen, D.; Partoens, B.
2013-01-01
The calculation of defect levels in semiconductors within a density functional theory approach suffers greatly from the band gap problem. We propose a band gap correction scheme that is based on the separation of energy differences in electron addition and relaxation energies. We show that it can predict defect levels with reasonable accuracy, particularly in the case of defects with conduction band character, and yet is simple and computationally economical. We apply this method to ZnO doped with group III elements (Al, Ga, In). As expected from experiment, the results indicate that Zn substitutional doping is preferred over interstitial doping in Al-, Ga-, and In-doped ZnO, under both zinc-rich and oxygen-rich conditions. Further, all three dopants act as shallow donors, with the +1 charge state having the most advantageous formation energy. Also, doping effects on the electronic structure of ZnO are sufficiently mild that they barely affect the fundamental band gap and the dispersion of the lowest conduction bands, which secures their n-type transparent conducting behavior. A comparison with the extrapolation method based on LDA+U calculations and with the Heyd-Scuseria-Ernzerhof hybrid functional (HSE) shows the reliability of the proposed scheme in predicting the thermodynamic transition levels in shallow donor systems.
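The thermodynamic transition levels mentioned above follow from the standard defect-formation formalism, E_f(q, E_F) = E_f(q, 0) + q·E_F: the level ε(q1/q2) is the Fermi energy at which the two charge states have equal formation energy. The numbers in the sketch are invented for illustration, not values from the paper.

```python
def transition_level(ef_q1, q1, ef_q2, q2):
    """Fermi energy (relative to the VBM) where charge states q1 and q2
    have equal formation energy, using E_f(q, E_F) = E_f(q, 0) + q * E_F."""
    return (ef_q1 - ef_q2) / (q2 - q1)

# Shallow-donor pattern: the +1 state stays cheapest until close to the
# conduction band (illustrative formation energies in eV at E_F = 0).
eps = transition_level(ef_q1=0.2, q1=1, ef_q2=3.3, q2=0)
print(eps)  # 3.1 -> a (+/0) level 3.1 eV above the VBM
```

A (+/0) level at or above the conduction band minimum is precisely what makes a dopant a shallow donor.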
Suggested Approaches to the Measurement of Computer Anxiety.
Toris, Carol
Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…
Sorensen, H.; Nordskov, A.; Sass, B.; Visler, T.
1987-01-01
A simplified version of a deuterium pellet gun based on the pipe gun principle is described. The pipe gun is made from a continuous tube of stainless steel and gas is fed in from the muzzle end only. It is indicated that the pellet length is determined by the temperature gradient along the barrel right outside the freezing cell. Velocities of around 1000 m/s with a scatter of ±2% are obtained with a propellant gas pressure of 40 bar.
UTILITY OF SIMPLIFIED LABANOTATION
Maria del Pilar Naranjo
2016-02-01
After using simplified Labanotation as a didactic tool for some years, the author concludes that it fulfils at least three main functions: efficient use of rehearsal time, social recognition, and a broadening of the dancer's choreographic consciousness. The dancing community's doubts about the issue of 'to write or not to write' are largely determined by context and by each dancer's own choreographic evolution, but the utility of Labanotation as a tool for knowledge is undeniable.
Computational Diagnostic: A Novel Approach to View Medical Data.
Mane, K. K. (Ketan Kirtiraj); Börner, K. (Katy)
2007-01-01
A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight into a patient's medical condition. The paper details different interactive features of the tool which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions, and this new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags in reaping the immediate benefits of this new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, we notice a drive that promotes a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier attempts at an EHR replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D images generated by an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', which provides a novel computational approach to looking at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with data is supported to help doctors practice evidence-based decisions and make judicious
Solvent effect on indocyanine dyes: A computational approach
Bertolino, Chiara A.; Ferrari, Anna M.; Barolo, Claudia; Viscardi, Guido; Caputo, Giuseppe; Coluccia, Salvatore
2006-01-01
The solvatochromic behaviour of a series of indocyanine dyes (Dyes I-VIII) was investigated by quantum chemical calculations. The effect of the polymethine chain length and of the indolenine structure has been satisfactorily reproduced by semiempirical Pariser-Parr-Pople (PPP) calculations. The solvatochromism of 3,3,3',3'-tetramethyl-N,N'-diethylindocarbocyanine iodide (Dye I) has been investigated in depth within the ab initio time-dependent density functional theory (TD-DFT) approach. Dye I undergoes non-polar solvation, and a linear correlation has been identified between absorption shifts and refractive index. Computed absorption λ_max and oscillator strengths obtained by TD-DFT are in good agreement with the experimental data.
Systems approaches to computational modeling of the oral microbiome
Dimiter V. Dimitrov
2013-07-01
Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet-oral microbiome-host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.
A computational approach to mechanistic and predictive toxicology of pesticides
Kongsbak, Kristine Grønning; Vinggaard, Anne Marie; Hadrup, Niels
2014-01-01
Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes of action in humans, to group them according to their modes of action, and to hypothesize on their potential effects on human health. We extracted human proteins associated with prochloraz, tebuconazole, epoxiconazole, procymidone, and mancozeb and enriched each protein set by using a high confidence human … , and procymidone exerted their effects mainly via interference with steroidogenesis and nuclear receptors. Prochloraz was associated with a large number of human diseases, and together with tebuconazole showed several significant associations to Testicular Dysgenesis Syndrome. Mancozeb showed a differential mode …
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data.
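The target quantity Leq is an energy average, not an arithmetic one: Leq = 10·log10((1/N)·Σ 10^(Li/10)). A short sketch of that generic acoustics formula (not of the paper's prediction models):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound pressure level: the energy average of
    a series of instantaneous levels given in dB."""
    mean_energy = sum(10.0 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

print(round(leq([70, 70, 70, 70]), 1))  # 70.0 -- a constant level is its own Leq
print(round(leq([60, 90]), 1))          # 87.0 -- loud events dominate the average
```

The second example shows why a few heavy vehicles can dominate an hour of otherwise moderate traffic noise.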
An Organic Computing Approach to Self-organising Robot Ensembles
Sebastian Albrecht von Mammen
2016-11-01
Similar to the Autonomic Computing initiative, which has mainly been advancing techniques for self-optimisation focussing on computing systems and infrastructures, Organic Computing (OC) has been driving the development of system design concepts and algorithms for self-adaptive systems at large. Examples of application domains include, for instance, traffic management and control, cloud services, communication protocols, and robotic systems. Such an OC system typically consists of a potentially large set of autonomous and self-managed entities, where each entity acts with a local decision horizon. By means of cooperation of the individual entities, the behaviour of the entire ensemble system is derived. In this article, we present our work on how autonomous, adaptive robot ensembles can benefit from OC technology. Our elaborations are aligned with the different layers of an observer/controller framework which provides the foundation for the individuals' adaptivity at system design-level. Relying on an extended Learning Classifier System (XCS) in combination with adequate simulation techniques, this basic system design empowers robot individuals to improve their individual and collaborative performances, e.g. by means of adapting to changing goals and conditions. Not only for the sake of generalisability, but also because of its enormous transformative potential, we stage our research in the domain of robot ensembles that are typically comprised of several quad-rotors and that organise themselves to fulfil spatial tasks such as maintenance of building facades or the collaborative search for mobile targets. Our elaborations detail the architectural concept, provide examples of individual self-optimisation as well as of the optimisation of collaborative efforts, and we show how the user can control the ensembles at multiple levels of abstraction. We conclude with a summary of our approach and an outlook on possible future steps.
A computational approach to climate science education with CLIMLAB
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
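The bottom rung of such a model hierarchy fits in a few lines: a zero-dimensional energy balance model, C·dT/dt = S(1−α)/4 − εσT⁴, stepped to equilibrium and checked against the closed-form solution. The sketch below is plain Python, not CLIMLAB's API, and the parameter values are generic teaching defaults.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step_ebm(T, S=1365.2, albedo=0.3, eps=0.61, C=4.0e8, dt=86400.0 * 30):
    """One explicit step of C dT/dt = S(1 - albedo)/4 - eps*sigma*T^4."""
    asr = S * (1.0 - albedo) / 4.0   # absorbed shortwave radiation
    olr = eps * SIGMA * T ** 4       # outgoing longwave radiation
    return T + dt * (asr - olr) / C

T = 255.0
for _ in range(600):                 # ~50 model years of monthly steps
    T = step_ebm(T)

# Closed-form equilibrium: T_eq = (S(1 - albedo) / (4 eps sigma))^(1/4)
T_eq = (1365.2 * 0.7 / 4.0 / (0.61 * SIGMA)) ** 0.25
```

Swapping `step_ebm` for a more detailed process (e.g. a real radiation scheme) while keeping the same driver loop is exactly the "mix and match" style the toolkit advocates.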
Towards scalable quantum communication and computation: Novel approaches and realizations
Jiang, Liang
Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as
A computational approach to finding novel targets for existing drugs.
Yvonne Y Li
2011-09-01
Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. Novel interactions discovered may lead to a drug being repositioned as a therapeutic treatment for its off-target's associated disease, added insight into the drug's mechanism of action, and added insight into the drug's side effects.
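The consensus-scoring idea can be sketched in a few lines: a docked pair survives only if every scoring function rates it better than a cutoff calibrated on known interactions. The scores, names, and cutoffs below are invented for illustration; the pipeline's real criteria are richer.

```python
# Hypothetical docking scores (more negative = better) from two scoring
# functions; only the nilotinib-MAPK14 pair is meant to echo the paper.
scores = {
    ("nilotinib", "MAPK14"): (-11.2, -10.4),
    ("drugA", "targetX"): (-6.1, -9.8),   # fails under function 1
    ("drugB", "targetY"): (-10.9, -5.2),  # fails under function 2
}

def consensus_hits(scores, cutoffs=(-9.0, -9.0)):
    """Keep only pairs that beat the cutoff under *every* scoring
    function: consensus scoring removes single-function false positives."""
    return [pair for pair, vals in scores.items()
            if all(v <= c for v, c in zip(vals, cutoffs))]

print(consensus_hits(scores))  # [('nilotinib', 'MAPK14')]
```

Requiring agreement across functions trades some sensitivity for the much lower false-positive rate the paper's enrichment figures reflect.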
Computer-Aided Approaches for Targeting HIVgp41
William J. Allen
2012-08-01
Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV-1) delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41), which advances through four distinct conformational states: (i) native, (ii) pre-hairpin intermediate, (iii) fusion active (fusogenic), and (iv) post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that the gp41 N-terminal heptad repeat (NHR) and C-terminal heptad repeat (CHR) domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA-approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.
Computed tomography of the lung. A pattern approach. 2. ed.
Verschakelen, Johny A.; Wever, Walter de
2018-01-01
Computed Tomography of the Lung: A Pattern Approach aims to enable the reader to recognize and understand the CT signs of lung diseases and diseases with pulmonary involvement as a sound basis for diagnosis. After an introductory chapter, basic anatomy and its relevance to the interpretation of CT appearances is discussed. Advice is then provided on how to approach a CT scan of the lungs, and the different distribution and appearance patterns of disease are described. Subsequent chapters focus on the nature of these patterns, identify which diseases give rise to them, and explain how to differentiate between the diseases. The concluding chapter presents a large number of typical and less typical cases that will help the reader to practice application of the knowledge gained from the earlier chapters. Since the first edition, the book has been adapted and updated, with the inclusion of many new figures and case studies. It will be an invaluable asset both for radiologists and pulmonologists in training and for more experienced specialists wishing to update their knowledge.
Optical computing - an alternate approach to trigger processing
Cleland, W.E.
1981-01-01
The enormous rate reduction factors required by most ISABELLE experiments suggest that we should examine every conceivable approach to trigger processing. One approach that has not received much attention from high energy physicists is optical data processing. The past few years have seen rapid advances in optoelectronic technology, stimulated mainly by the military and the communications industry. An intriguing question is whether one can combine this technology with the optical computing techniques developed over the past two decades to build a rapid trigger processor for high energy physics experiments. Optical data processing is a method for performing a few very specialized operations on data which is inherently two-dimensional. Typical operations are the formation of convolution or correlation integrals between the input data and information stored in the processor in the form of an optical filter. Optical processors are classed as coherent or incoherent, according to the spatial coherence of the input wavefront. Typically, in a coherent processor a laser beam is modulated with a photographic transparency which represents the input data. In an incoherent processor, the input may be an incoherently illuminated transparency, but self-luminous objects, such as an oscilloscope trace, have also been used. We consider here an incoherent processor in which the input data is converted into an optical wavefront through the excitation of an array of point sources - either light emitting diodes or injection lasers.
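The correlation integral an optical processor forms has a direct discrete analogue: sliding a matched filter over a two-dimensional input and reading off the peak. The sketch below is a digital stand-in for that operation; the frame and "track" pattern are invented.

```python
def correlate2d(image, kernel):
    """Discrete analogue of the correlation integral an optical processor
    forms between the input wavefront and a stored matched filter."""
    H, W = len(image), len(image[0])
    h, w = len(kernel), len(kernel[0])
    out = [[0.0] * (W - w + 1) for _ in range(H - h + 1)]
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(h) for b in range(w))
    return out

# A small "track" pattern embedded in an empty frame: the correlation
# peak marks its location, which is how a matched filter would trigger.
frame = [[0] * 6 for _ in range(6)]
track = [[1, 0], [0, 1]]          # diagonal two-hit pattern
frame[2][3], frame[3][4] = 1, 1   # pattern placed at offset (2, 3)
peak = correlate2d(frame, track)
best = max((v, i, j) for i, row in enumerate(peak) for j, v in enumerate(row))
print(best)  # (2, 2, 3): peak value 2 at offset (2, 3)
```

An optical processor evaluates all these offsets in parallel at the speed of light, which is the attraction for triggering.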
COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING
Afonnikov D.
2012-08-01
The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.
An evolutionary computation approach to examine functional brain plasticity
Arnab eRoy
2016-04-01
One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in
Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin
2016-07-01
Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
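The core idea, training a cheap statistical model on targets produced by an expensive physics-based model, can be sketched with a deliberately trivial stand-in: a closed-form "physics model" and an ordinary least-squares surrogate. The real pipeline uses rich anatomical features and a far more capable learner.

```python
import random

def physics_model(x):
    """Stand-in for an expensive physics-based computation (here a simple
    closed form; a real CFD model would take minutes per evaluation)."""
    return 3.0 * x + 2.0

def fit_linear(xs, ys):
    """Least-squares fit y ~ a + b*x via the normal equations: a minimal
    stand-in for the paper's machine-learning surrogate."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

random.seed(1)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]  # synthetic "anatomies"
ys = [physics_model(x) for x in xs]                  # targets from physics model
a, b = fit_linear(xs, ys)
```

Once trained, the surrogate answers in microseconds what the physics model answers in minutes, which is exactly the speedup the paper reports at full scale.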
A Representation-Theoretic Approach to Reversible Computation with Applications
Maniotis, Andreas Milton
Reversible computing is a sub-discipline of computer science that helps to understand the foundations of the interplay between physics, algebra, and logic in the context of computation. Its subjects of study are computational devices and abstract models of computation that satisfy the constraint of information conservation. Such machine models, which are known as reversible models of computation, have been examined both from a theoretical perspective and from an engineering perspective. While a bundle of many isolated successful findings and applications concerning reversible computing exists, there is still no uniform and consistent theory that is general in the sense of giving a model-independent account of the field.
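Information conservation is easy to exhibit concretely: the Toffoli (controlled-controlled-NOT) gate is a bijection on three bits, is its own inverse, and still embeds an irreversible operation (AND) once one input is fixed. A minimal sketch:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli gate: flip c iff both controls a and b are set. It is
    universal for reversible Boolean logic and is its own inverse."""
    return a, b, c ^ (a & b)

states = list(product((0, 1), repeat=3))
images = [toffoli(*s) for s in states]
assert sorted(images) == states                         # bijection: nothing lost
assert all(toffoli(*toffoli(*s)) == s for s in states)  # self-inverse
# AND embedded reversibly: with c = 0 the target output equals a AND b.
print([toffoli(a, b, 0)[2] for a, b in product((0, 1), repeat=2)])  # [0, 0, 0, 1]
```

Because every output state has exactly one preimage, no information is erased, which is the defining constraint of the reversible models the thesis studies.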
McFedries, Paul
2015-01-01
Learn Windows 10 quickly and painlessly with this beginner's guide Windows 10 Simplified is your absolute beginner's guide to the ins and outs of Windows. Fully updated to cover Windows 10, this highly visual guide covers all the new features in addition to the basics, giving you a one-stop resource for complete Windows 10 mastery. Every page features step-by-step screen shots and plain-English instructions that walk you through everything you need to know, no matter how new you are to Windows. You'll master the basics as you learn how to navigate the user interface, work with files, create
Simplifying massive planar subdivisions
Arge, Lars; Truelsen, Jakob; Yang, Jungwoo
2014-01-01
We present the first I/O- and practically-efficient algorithm for simplifying a planar subdivision, such that no point is moved more than a given distance ε_xy and such that neighbor relations between faces (homotopy) are preserved. Under some practically realistic assumptions, our algorithm uses … For example, for the contour map simplification problem it is significantly faster than the previous algorithm, while obtaining approximately the same simplification factor.
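The error guarantee, no point moving farther than a given tolerance, is the same one classic in-memory line simplification provides; the sketch below shows it with Douglas-Peucker. The paper's actual contributions (I/O-efficiency and homotopy preservation) are not reproduced here.

```python
def _dist(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def douglas_peucker(pts, eps):
    """Drop points while guaranteeing that no original point ends up
    farther than eps from the simplified chain."""
    if len(pts) < 3:
        return list(pts)
    dmax, idx = max((_dist(pts[i], pts[0], pts[-1]), i)
                    for i in range(1, len(pts) - 1))
    if dmax <= eps:
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[:idx + 1], eps)
    return left[:-1] + douglas_peucker(pts[idx:], eps)

print(douglas_peucker([(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)], 0.1))
# [(0, 0), (4, 0)] -- wiggles within eps collapse to one segment
print(douglas_peucker([(0, 0), (2, 2), (4, 0)], 0.1))
# [(0, 0), (2, 2), (4, 0)] -- a genuine corner survives
```

For massive subdivisions this recursion thrashes external memory, which is precisely the problem the paper's I/O-efficient algorithm addresses.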
Wooldridge, Mike
2011-01-01
The easiest way to learn how to create a Web page for your family or organization Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho
Simplifying the Development, Use and Sustainability of HPC Software
Jeremy Cohen
2014-07-01
Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and the software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support the sustainability of scientific software and help to widen access to it.
Simplified elastoplastic fatigue analysis
Autrusson, B.; Acker, D.; Hoffmann, A.
1987-01-01
Oligocyclic fatigue behaviour is a function of the local strain range. The design codes ASME Section III, RCC-M, Code Case N47, RCC-MR, and the Guide issued by PNC propose simplified methods to evaluate the local strain range. After briefly describing these simplified methods, we tested them by comparing experimentally measured strains with those predicted by these rules. The experiments conducted for this study involved perforated plates under tensile stress, notched or reinforced beams under four-point bending stress, grooved specimens under tensile-compressive stress, and embedded grooved beams under bending stress. The methods display a degree of conservatism that varies from case to case. The evaluation of the strains is rather inaccurate and sometimes lacks conservatism. So far, the proposal is to use finite element codes with a simple model. The isotropic model with the cyclic consolidation curve offers a good representation of the real equivalent strain. There is obviously no question of representing the cycles and the entire loading history, but merely of calculating the maximum variation in elastoplastic equivalent deformations under constant-rate loading. The results presented testify to the good prediction of the strains with this model. The maximum equivalent strain will be employed to evaluate fatigue damage.
A computationally efficient approach for template matching-based ...
In this paper, a new computationally efficient image registration method is … the proposed method requires less computational time as compared to traditional methods.
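The baseline that such methods accelerate is exhaustive template matching: score the template at every offset and keep the best. A small sketch with a sum-of-squared-differences score (the image and template values are invented):

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size patches."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def match_template(image, tmpl):
    """Exhaustive template matching: slide the template over the image
    and return (score, row, col) of the smallest-SSD offset. Efficient
    registration methods prune or approximate exactly this search."""
    H, W, h, w = len(image), len(image[0]), len(tmpl), len(tmpl[0])
    best = None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = [row[j:j + w] for row in image[i:i + h]]
            d = ssd(patch, tmpl)
            if best is None or d < best[0]:
                best = (d, i, j)
    return best

img = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 7, 8, 0],
       [0, 0, 0, 0]]
tmpl = [[5, 6], [7, 8]]
print(match_template(img, tmpl))  # (0, 1, 1): exact match at offset (1, 1)
```

The cost of this brute-force search grows with image size times template size, which is why reducing it is a research topic in its own right.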
Computational approach for a pair of bubble coalescence process
Nurul Hasan; Zalinawati binti Zakaria
2011-01-01
The coalescence of bubbles has great value in mineral recovery and oil industry. In this paper, two co-axial bubbles rising in a cylinder are modelled to study the coalescence of bubbles for four computational experimental test cases. The Reynolds (Re) number is chosen between 8.50 and 10, Bond number, Bo ∼4.25-50, Morton number, M 0.0125-14.7. The viscosity ratio (μ r ) and density ratio (ρ r ) of liquid to bubble are kept constant (100 and 850 respectively). It was found that the Bo number has significant effect on the coalescence process for constant Re, μ r and ρ r . The bubble-bubble distance over time was validated against published experimental data. The results show that the VOF approach can be used to model these phenomena accurately. The surface tension was changed to alter the Bo, and the density of the fluids was changed to alter the Re and M, keeping the μ r and ρ r the same. It was found that for lower Bo, the bubble coalescence is slower and the pocket at the lower part of the leading bubble is less concave (towards downward), which is supported by the experimental data.
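The dimensionless groups quoted in the abstract can be computed directly from the fluid properties. A minimal sketch using one common set of definitions for rising bubbles; other conventions exist, and the paper's exact definitions are not reproduced here:

```python
import math

def bond(rho_l, g, d, sigma):
    """Bond number: gravity vs. surface tension (one common definition)."""
    return rho_l * g * d ** 2 / sigma

def morton(rho_l, g, mu_l, sigma):
    """Morton number: the fluid-property group g*mu^4 / (rho*sigma^3)."""
    return g * mu_l ** 4 / (rho_l * sigma ** 3)

def reynolds(rho_l, u, d, mu_l):
    """Reynolds number based on bubble rise velocity and diameter."""
    return rho_l * u * d / mu_l

# Illustrative values (water-like liquid, 1 cm bubble):
print(bond(1000.0, 9.81, 0.01, 0.072))       # high Bo: gravity dominates
print(morton(1000.0, 9.81, 0.001, 0.072))
print(reynolds(1000.0, 0.2, 0.01, 0.001))
```

Varying sigma at fixed properties changes Bo while leaving the viscosity and density ratios untouched, which is the parameter sweep the abstract describes.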
An Integrated Soft Computing Approach to Hughes Syndrome Risk Assessment.
Vilhena, João; Rosário Martins, M; Vicente, Henrique; Grañeda, José M; Caldeira, Filomena; Gusmão, Rodrigo; Neves, João; Neves, José
2017-03-01
The AntiPhospholipid Syndrome (APS) is an acquired autoimmune disorder induced by high levels of antiphospholipid antibodies that cause arterial and venous thrombosis, as well as pregnancy-related complications and morbidity, as clinical manifestations. This autoimmune hypercoagulable state, usually known as Hughes syndrome, has severe consequences for the patients, being one of the main causes of thrombotic disorders and death. Therefore, preventive measures are required, including awareness of how probable it is to develop this kind of syndrome. Despite the updates to the antiphospholipid syndrome classification, the diagnosis remains difficult to establish. Additional research on clinically relevant antibodies and standardization of their quantification are required in order to improve the antiphospholipid syndrome risk assessment. Thus, this work will focus on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a computational framework based on Artificial Neural Networks. The proposed model allows for improving the diagnosis, classifying properly the patients that really presented this pathology (sensitivity higher than 85%), as well as classifying the absence of APS (specificity close to 95%).
Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection
Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.
2016-05-25
With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds which provides a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as a preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm and its data are being integrated within the Materials Project online framework and will hence be publicly available.
An Educational Approach to Computationally Modeling Dynamical Systems
Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl
2009-01-01
Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…
Teaching Pervasive Computing to CS Freshmen: A Multidisciplinary Approach
Silvis-Cividjian, Natalia
2015-01-01
Pervasive Computing is a growing area in research and commercial reality. Despite this extensive growth, there is no clear consensus on how and when to teach it to students. We report on an innovative attempt to teach this subject to first year Computer Science students. Our course combines computer
Alexander Vaninsky
2013-07-01
Full Text Available This paper introduces a simplified version of Data Envelopment Analysis (DEA), a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. Examples of DEA applications discussed in this paper include the London 2012 Olympic Games and the dynamics of the United States' environmental performance. In the first example, we find a team winner and rank the teams; in the second, we analyze the dynamics of CO2 emissions adjusted to the gross domestic product, population, and energy consumption. Adding a virtual Perfect Object, one having the greatest outputs and smallest inputs, we greatly simplify the DEA computational procedure by eliminating the Linear Programming algorithm. The simplicity of the computations makes the suggested approach attractive for educational purposes, in particular, for use in Quantitative Reasoning courses.
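The Perfect-Object shortcut described in the abstract can be illustrated with a toy score. This is a hypothetical simplification for intuition only, not Vaninsky's actual formulas: each unit's outputs are normalized by the Perfect Object's outputs and its inputs by the Perfect Object's inputs, so no linear programming is needed.

```python
def perfect_object_scores(inputs, outputs):
    """Rank units against a virtual Perfect Object (smallest inputs,
    greatest outputs). Illustrative ratio-based score only."""
    x_po = [min(col) for col in zip(*inputs)]    # smallest observed inputs
    y_po = [max(col) for col in zip(*outputs)]   # greatest observed outputs
    scores = []
    for x, y in zip(inputs, outputs):
        out_ratio = sum(yj / ypj for yj, ypj in zip(y, y_po)) / len(y)
        in_ratio = sum(xpj / xj for xj, xpj in zip(x, x_po)) / len(x)
        scores.append(out_ratio * in_ratio)  # 1.0 only for the Perfect Object
    return scores

# Two units, two inputs and two outputs each (made-up numbers):
print(perfect_object_scores([[1.0, 2.0], [2.0, 4.0]],
                            [[10.0, 20.0], [5.0, 10.0]]))
```

A unit matching the Perfect Object on every factor scores 1.0; all others score strictly less, giving a complete ranking without an optimization step.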
Human Computation An Integrated Approach to Learning from the Crowd
Law, Edith
2011-01-01
Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoyable game.
Simplified proceeding as a civil procedure model
Олексій Юрійович Зуб
2016-01-01
shall mean a specific, additional form of consideration and solution of civil cases that is based on a voluntary approach to its use, is characterized by a reduced set of procedural rules, and ends with the rendering of a peculiar judicial decision. Moreover, the most common features of summary proceedings are highlighted. Simplified proceedings, as a specific form of consideration of disputes in civil law and as a special way to optimize legal proceedings, are provided with a set of peculiar features that distinguish them from other proceedings. Therewith, the analyzed features are defined as basic, in other words, features peculiar to this kind of proceedings during its development and direct application in civil procedural law.
A new approach in development of data flow control and investigation system for computer networks
Frolov, I.; Vaguine, A.; Silin, A.
1992-01-01
This paper describes a new approach in the development of a data flow control and investigation system for computer networks. This approach was developed and applied in the Moscow Radiotechnical Institute for control and investigation of the Institute's computer network. It allowed us to solve our network's current problems successfully. A description of our approach is presented below along with the most interesting results of our work. (author)
Simplifying EU environmental legislation
Anker, Helle Tegner
2014-01-01
The recent review of the EIA Directive was launched as part of the ‘better regulation’ agenda with the purpose to simplify procedures and reduce administrative burdens. This was combined with an attempt to further harmonise procedures in order to address shortcomings in the Directive and to overcome … for different interpretations on core issues. This is likely to result in diverging practices in the Member States as well as in further litigation on EIA matters. It is argued that at least from the outset the review of the EIA Directive missed out on a more thorough discussion of fundamental issues linked … to the character and scope of EIA such as the important distinction between the procedural functions of information gathering and participation as opposed to the substantive outcomes in terms of reducing or avoiding adverse effects. A careful discussion of the basics of EIA might have provided a better option…
Mutations that Cause Human Disease: A Computational/Experimental Approach
Beernink, P; Barsky, D; Pesavento, B
2006-01-11
International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which
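The conservation idea behind SIFT can be sketched with a toy entropy-based score over alignment columns. This illustrates the principle only (highly conserved positions are likely functionally important) and is not the SIFT algorithm itself; the alignment is made up:

```python
from collections import Counter
from math import log2

def conservation(column):
    """Per-position conservation from aligned homologous sequences:
    1 minus the normalized Shannon entropy (1.0 = perfectly conserved)."""
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return 1.0 - entropy / log2(20)  # 20 possible amino acids

alignment = ["MKVL", "MKIL", "MRVL"]  # toy set of homologous sequences
scores = [conservation(col) for col in zip(*alignment)]
print(scores)  # positions 1 and 4 are invariant, hence score 1.0
```

A mutation at a position scoring near 1.0 would be predicted intolerant; variable positions tolerate substitution, which is the intuition SIFT formalizes.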
A computational intelligence approach to the Mars Precision Landing problem
Birge, Brian Kent, III
Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vector to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
A Soft Computing Approach to Kidney Diseases Evaluation.
Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique
2015-10-01
Kidney renal failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is exposed, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination may denote good practice, some key information has to be obtained from valuation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes anomalous kidney function and/or structure, i.e., there is evidence that treatment may avoid or delay its progression, either by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of the renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to do it with traditional methodologies and existing tools for problem solving. Hence, in this work, we will focus on the development of a hybrid decision support system, in terms of its knowledge representation and reasoning procedures based on Logic Programming, that will allow one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks, in order to weigh the Degree-of-Confidence that one has on such a happening. The present study involved 558 patients with an age average of 51.7 years and the chronic kidney disease was observed in 175 cases. The dataset comprises twenty four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the
Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach
Grover Kearns
2010-01-01
Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants to use forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting stu...
Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.
Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal
2016-12-01
In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to the popularity of soft computing approaches, these approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results in comparison to traditional approaches. Soft computing approaches have the ability to adapt themselves according to the problem domain. Another aspect is a good balance between exploration and exploitation processes. These aspects make soft computing approaches more powerful, reliable and efficient. The above mentioned characteristics make the soft computing approaches more suitable and competent for health care data. The first objective of this review paper is to identify the various soft computing approaches which are used for diagnosing and predicting diseases. The second objective is to identify the various diseases for which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, it is found that a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data. Some of these are particle swarm optimization, genetic algorithm, artificial neural network, support vector machine etc. A detailed discussion on these approaches is presented in the literature section. This work summarizes various soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized in five different categories based on the methodology: classification model based systems, expert systems, fuzzy and neuro-fuzzy systems, rule based systems and case based systems. Many techniques are discussed in the above mentioned categories and all discussed techniques are summarized in the form of tables also. This work also focuses on the accuracy rate of soft computing techniques and tabular information is provided for
Simplified Computer Interaction Using Mixed Reality
Balabanian, Jean-Paul
2004-01-01
This thesis describes a system for mixing reality, as captured by a camera, with a virtual 3-dimensional world. A system to recognize and track a square pattern of markers is created in order to obtain the extrinsic parameters of the camera. The parameters are used for rotating and translating the virtual world to align it with the pattern of markers in the image. The result is a system where a user can interact with a virtual world. A camera can be moved freely around the pa...
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Gesture Recognition by Computer Vision : An Integral Approach
Lichtenauer, J.F.
2009-01-01
The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads
Thermodynamic and relative approach to compute glass-forming ...
models) characteristic: the isobaric heat capacity (Cp) of oxides, and execute a mathematical treatment of oxides thermodynamic data. We note this coefficient as thermodynamical relative glass-forming ability (ThRGFA) and formulate a model to compute it. Computed values of 2nd, 3rd, 4th and 5th period metal oxides ...
An approach to quantum-computational hydrologic inverse analysis.
O'Malley, Daniel
2018-05-02
Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
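A quantum annealer minimizes a quadratic objective over binary variables (a QUBO); for problems of the small size mentioned in the abstract, the same objective can be checked by brute force on a classical machine. A minimal sketch, with a made-up Q matrix standing in for a discretized inverse problem (this is not the authors' D-Wave formulation):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force a small QUBO: minimize x^T Q x over binary vectors x.
    A quantum annealer samples low-energy states of this same objective."""
    n = len(Q)
    best, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best, best_e = bits, e
    return best, best_e

# Toy 2-variable objective (hypothetical coefficients):
x, energy = solve_qubo([[-1.0, 2.0], [0.0, -1.0]])
print(x, energy)
```

The brute-force loop is exponential in the number of variables, which is exactly why annealing hardware becomes interesting as hydrologic inverse problems grow.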
Reading Emotion From Mouse Cursor Motions: Affective Computing Approach.
Yamauchi, Takashi; Xiao, Kunchen
2018-04-01
Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about emotion of the computer user. We extracted 16-26 trajectory features during a choice-reaching task and examined the link between emotion and cursor motions. Participants were induced for positive or negative emotions by music, film clips, or emotional pictures, and they indicated their emotions with questionnaires. Our 10-fold cross-validation analysis shows that statistical models formed from "known" participants (training data) could predict nearly 10%-20% of the variance of positive affect and attentiveness ratings of "unknown" participants, suggesting that cursor movement patterns such as the area under curve and direction change help infer emotions of computer users. Copyright © 2017 Cognitive Science Society, Inc.
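Two of the cursor features named in the abstract, area under curve and direction changes, can be approximated in a few lines. The exact feature definitions used in the study may differ; this is an illustrative simplification on made-up sample points:

```python
import math

def cursor_features(points):
    """Simplified trajectory features: (1) area between the cursor path
    and the straight start-to-end line (trapezoidal sum of perpendicular
    deviations), and (2) the number of horizontal direction changes."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0)  # assumes start != end
    devs = [abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
            for x, y in points]
    auc = sum((a + b) / 2 for a, b in zip(devs, devs[1:]))
    dx = [b[0] - a[0] for a, b in zip(points, points[1:])]
    flips = sum(1 for a, b in zip(dx, dx[1:]) if a * b < 0)
    return auc, flips

print(cursor_features([(0, 0), (1, 0), (2, 0)]))      # straight path
print(cursor_features([(0, 0), (1, 1), (0, 2), (1, 3)]))  # zigzag path
```

A hesitant, meandering cursor yields a larger area and more direction flips than a direct movement, which is the kind of signal the study links to affect.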
Communication: A simplified coupled-cluster Lagrangian for polarizable embedding.
Krause, Katharina; Klopper, Wim
2016-01-28
A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian.
Witt, Hendrik
2007-01-01
The research presented in this thesis examines user interfaces for wearable computers. Wearable computers are a special kind of mobile computers that can be worn on the body. Furthermore, they integrate themselves even more seamlessly into different activities than a mobile phone or a personal digital assistant can. The thesis investigates the development and evaluation of user interfaces for wearable computers. In particular, it presents fundamental research results as well as supporting softw...
Okutan, Seda; Hansen, Harald S; Janfelt, Christian
2016-06-01
A method is presented for whole-body imaging of drugs and metabolites in mice with desorption electrospray ionization mass spectrometry imaging (DESI-MSI). Unlike most previous approaches to whole-body imaging which are based on cryo-sectioning using a cryo-macrotome, the presented approach is based on use of the cryo-microtome which is found in any histology lab. The tissue sections are collected on tape which is analyzed directly by DESI-MSI. The method is demonstrated on mice which have been dosed intraperitoneally with the antidepressive drug amitriptyline. By combining full-scan detection with the more selective and sensitive MS/MS detection, a number of endogenous compounds (lipids) were imaged simultaneously with the drug and one of its metabolites. The sensitivity of this approach allowed for imaging of drug and the metabolite in a mouse dosed with 2.7 mg amitriptyline per kg bodyweight which is comparable to the normal prescribed human dose. The simultaneous imaging of endogenous and exogenous compounds facilitates registration of the drug images to certain organs in the body by colored-overlay of the two types of images. The method represents a relatively low-cost approach to simple, sensitive and highly selective whole-body imaging in drug distribution and metabolism studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
What Computational Approaches Should be Taught for Physics?
Landau, Rubin
2005-03-01
The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. To teach such Introductory Scientific Computing courses requires that some choices be made as to what subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as text books for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/˜rubin/IntroBook/).
Computer Tutors: An Innovative Approach to Computer Literacy. Part I: The Early Stages.
Targ, Joan
1981-01-01
In Part I of this two-part article, the author describes the evolution of the Computer Tutor project in Palo Alto, California, and the strategies she incorporated into a successful student-taught computer literacy program. Journal availability: Educational Computer, P.O. Box 535, Cupertino, CA 95015. (Editor/SJL)
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify of the formation technique of representation of modeling methodology at computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening of general education and worldview functions of computer science define the necessity of additional research of the…
A Human-Centred Tangible approach to learning Computational Thinking
Tommaso Turchi
2016-08-01
Full Text Available Computational Thinking has recently become a focus of many teaching and research domains; it encapsulates those thinking skills integral to solving complex problems using a computer, thus being widely applicable in our society. It is influencing research across many disciplines and also coming into the limelight of education, mostly thanks to public initiatives such as the Hour of Code. In this paper we present our arguments for promoting Computational Thinking in education through the Human-centred paradigm of Tangible End-User Development, namely by exploiting objects whose interactions with the physical environment are mapped to digital actions performed on the system.
An introduction to statistical computing a simulation-based approach
Voss, Jochen
2014-01-01
A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced met
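As a flavor of the sampling-based techniques such a book covers, the classic Monte Carlo estimate of pi can be written in a few lines (a standard textbook example, not taken from this particular book):

```python
import random

def mc_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(mc_pi(100_000))  # approaches 3.14159... as n grows
```

The estimator's error shrinks like 1/sqrt(n), independent of dimension, which is the property that makes sampling-based methods viable for otherwise intractable problems.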
Towards an Approach of Semantic Access Control for Cloud Computing
Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai
With the development of cloud computing, the mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in the cloud computing environment. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches the research in which semantic web technology is applied in the field of security, and provides a new way of thinking about access control in cloud computing.
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.
2011-01-01
We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity
Development of Computer Science Disciplines - A Social Network Analysis Approach
Pham, Manh Cuong; Klamma, Ralf; Jarke, Matthias
2011-01-01
In contrast to many other scientific disciplines, computer science gives considerable weight to conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on the ISI published JCR (Journal Citation Report). Although this data covers most of the important journals, it lacks computer science conference and ...
New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy
Colvin, M; Krishnan, V V
2003-01-01
The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, which allows an inherently large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits that implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates shrinks toward atomic dimensions, the behavior of such a system is no longer classical but is instead governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable for classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing large numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until several methods were proposed to build an experimental QC. These methods include trapped ions, cavity-QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the spins with
Simplified methods to assess thermal fatigue due to turbulent mixing
Hannink, M.H.C.; Timperi, A.
2011-01-01
Thermal fatigue is a safety relevant damage mechanism in pipework of nuclear power plants. A well-known simplified method for the assessment of thermal fatigue due to turbulent mixing is the so-called sinusoidal method. Temperature fluctuations in the fluid are described by a sinusoidally varying signal at the inner wall of the pipe. Because of limited information on the thermal loading conditions, this approach generally leads to overconservative results. In this paper, a new assessment method is presented, which has the potential of reducing the overconservatism of existing procedures. Artificial fluid temperature signals are generated by superposition of harmonic components with different amplitudes and frequencies. The amplitude-frequency spectrum of the components is modelled by a formula obtained from turbulence theory, whereas the phase differences are assumed to be randomly distributed. Lifetime predictions generated with the new simplified method are compared with lifetime predictions based on real fluid temperature signals, measured in an experimental setup of a mixing tee. Also, preliminary steady-state Computational Fluid Dynamics (CFD) calculations of the total power of the fluctuations are presented. The total power is needed as an input parameter for the spectrum formula in a real-life application. Solution of the transport equation for the total power was included in a CFD code and comparisons with experiments were made. The newly developed simplified method for generating the temperature signal is shown to be adequate for the investigated geometry and flow conditions, and demonstrates possibilities of reducing the conservatism of the sinusoidal method. CFD calculations of the total power show promising results, but further work is needed to develop the approach. (author)
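The signal-generation idea above (superposing harmonic components with prescribed amplitudes and uniformly random phases) can be sketched in a few lines of Python. The power-law amplitude decay used here is a hypothetical stand-in for the turbulence-theory spectrum formula, which the abstract does not give:

```python
import math
import random

def synthetic_temperature_signal(n_samples, dt, amplitudes, freqs, seed=0):
    """Superpose harmonic components with given amplitudes and frequencies
    and uniformly random phases. The amplitude spectrum passed in is the
    caller's model; the paper's spectrum formula is not reproduced here."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    signal = []
    for k in range(n_samples):
        t = k * dt
        value = sum(a * math.sin(2.0 * math.pi * f * t + p)
                    for a, f, p in zip(amplitudes, freqs, phases))
        signal.append(value)
    return signal

# Illustrative power-law amplitude decay over 20 harmonic components.
freqs = [0.1 * (i + 1) for i in range(20)]   # Hz (hypothetical band)
amps = [f ** (-5.0 / 6.0) for f in freqs]    # assumed decay law, not the paper's
sig = synthetic_temperature_signal(2000, 0.05, amps, freqs)
```

Such an artificial signal can then be fed into a fatigue-usage calculation in place of a measured fluid temperature record.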
SIMPLIFIED MATHEMATICAL MODEL OF SMALL SIZED UNMANNED AIRCRAFT VEHICLE LAYOUT
2016-01-01
Strong reduction of the new aircraft design period, using new technology based on artificial intelligence, is a key problem mentioned in forecasts of leading aerospace industry research centers. This article covers an approach to the development of quick aerodynamic design methods based on artificial neural networks. The problem is solved for the classical scheme of a small sized unmanned aircraft vehicle (UAV). The principal parts of the method are: the mathematical model of the layout, a layout generator for this type of aircraft built on artificial neural networks, an automatic selection module for cleaning the variety of layouts generated in automatic mode, a robust direct computational fluid dynamics method, and aerodynamic characteristics approximators based on artificial neural networks. Methods based on artificial neural networks occupy an intermediate position between computational fluid dynamics methods or experiments and simplified engineering approaches. The use of ANN for estimating aerodynamic characteristics puts limitations on the input data. For this task the layout must be presented as a vector with dimension not exceeding several hundred. Vector components must include all main parameters conventionally used for layout description and completely replicate the most important aerodynamic and structural properties. The first stage of the work is presented in the paper. A simplified mathematical model of a small sized UAV was developed. To estimate the range of geometrical parameters of layouts, a review of existing vehicles was done. The result of the work is the algorithm and computer software for generating layouts based on ANN technology. 10000 samples were generated and a dataset containing geometrical and aerodynamic characteristics of the layouts was created.
Simplified expressions of the T-matrix integrals for electromagnetic scattering.
Somerville, Walter R C; Auguié, Baptiste; Le Ru, Eric C
2011-09-01
The extended boundary condition method, also called the null-field method, provides a semianalytic solution to the problem of electromagnetic scattering by a particle by constructing a transition matrix (T-matrix) that links the scattered field to the incident field. This approach requires the computation of specific integrals over the particle surface, which are typically evaluated numerically. We introduce here a new set of simplified expressions for these integrals in the commonly studied case of axisymmetric particles. Simplifications are obtained using the differentiation properties of the radial functions (spherical Bessel) and angular functions (associated Legendre functions) and integrations by parts. The resulting simplified expressions not only lead to faster computations, but also reduce the risks of loss of precision and provide a simpler framework for further analytical work.
S. Sofana Reka
2016-09-01
This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real time computing for handling huge storage of data. A stochastic programming approach model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
Computer aided approach for qualitative risk assessment of engineered systems
Crowley, W.K.; Arendt, J.S.; Fussell, J.B.; Rooney, J.J.; Wagner, D.P.
1978-01-01
This paper outlines a computer aided methodology for determining the relative contributions of various subsystems and components to the total risk associated with an engineered system. Major contributors to overall task risk are identified through comparison of an expected frequency density function with an established risk criterion. Contributions that are inconsistently high are also identified. The results from this analysis are useful for directing efforts for improving system safety and performance. An analysis of uranium hexafluoride handling risk at a gaseous diffusion uranium enrichment plant using a preliminary version of the computer program EXCON is briefly described and illustrated
Environmental sciences and computations: a modular data based systems approach
Crawford, T.V.; Bailey, C.E.
1975-07-01
A major computer code for environmental calculations is under development at the Savannah River Laboratory. The primary aim is to develop a flexible, efficient capability to calculate, for all significant pathways, the dose to man resulting from releases of radionuclides from the Savannah River Plant and from other existing and potential radioactive sources in the southeastern United States. The environmental sciences programs at SRP are described, with emphasis on the development of the calculational system. It is being developed as a modular data-based system within the framework of the larger JOSHUA Computer System, which provides data management, terminal, and job execution facilities. (U.S.)
Computer assisted pyeloplasty in children the retroperitoneal approach
Olsen, L H; Jorgensen, T M
2004-01-01
PURPOSE: We describe the first series of computer assisted retroperitoneoscopic pyeloplasty in children using the Da Vinci Surgical System (Intuitive Surgical, Inc., Mountain View, California) with regard to setup, method, operation time, complications and preliminary outcome. The small space… …with the Da Vinci Surgical System. With the patient in a lateral semiprone position the retroperitoneal space was developed by blunt and balloon dissection. Three ports were placed for the computer assisted system and 1 for assistance. Pyeloplasty was performed with the mounted system placed behind…
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
Thermodynamic and relative approach to compute glass-forming
This study deals with the evaluation of glass-forming ability (GFA) of oxides and is a critical reading of Sun and Rawson thermodynamic approach to quantify this aptitude. Both approaches are adequate but ambiguous regarding the behaviour of some oxides (tendency to amorphization or crystallization). Indeed, ZrO2 and ...
Computer Adaptive Testing, Big Data and Algorithmic Approaches to Education
Thompson, Greg
2017-01-01
This article critically considers the promise of computer adaptive testing (CAT) and digital data to provide better and quicker data that will improve the quality, efficiency and effectiveness of schooling. In particular, it uses the case of the Australian NAPLAN test that will become an online, adaptive test from 2016. The article argues that…
A Cellular Automata Approach to Computer Vision and Image Processing.
1980-09-01
New approach for virtual machines consolidation in heterogeneous computing systems
Fesl, Jan; Cehák, J.; Doležalová, Marie; Janeček, J.
2016-01-01
Vol. 9, No. 12 (2016), pp. 321-332. ISSN 1738-9968. Institutional support: RVO:60077344. Keywords: consolidation * virtual machine * distributed. Subject RIV: JD - Computer Applications, Robotics. http://www.sersc.org/journals/IJHIT/vol9_no12_2016/29.pdf
Simulation of Quantum Computation: A Deterministic Event-Based Approach
Michielsen, K.; Raedt, K. De; Raedt, H. De
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
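For reference, the two gates mentioned above are easy to check with a conventional state-vector simulation. This is a minimal sketch of the standard matrix-free gate application, not the authors' event-based network of learning machines:

```python
import math

# 2-qubit state vector: 4 amplitudes ordered |00>, |01>, |10>, |11>.
def apply_hadamard(state, qubit):
    """Apply H to the given qubit (0 = most significant) of a 2-qubit state."""
    s = 1.0 / math.sqrt(2.0)
    new = state[:]
    for i in range(4):
        if (i >> (1 - qubit)) & 1 == 0:   # pair basis states differing in that bit
            j = i | (1 << (1 - qubit))
            a, b = state[i], state[j]
            new[i] = s * (a + b)
            new[j] = s * (a - b)
    return new

def apply_cnot(state):
    """CNOT with qubit 0 as control, qubit 1 as target: swaps |10> and |11>."""
    new = state[:]
    new[2], new[3] = state[3], state[2]
    return new

# H on qubit 0 followed by CNOT turns |00> into the Bell state (|00> + |11>)/sqrt(2).
bell = apply_cnot(apply_hadamard([1, 0, 0, 0], 0))
```

Producing the maximally entangled Bell state from |00> is the standard smoke test for any simulator of quantum computation.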
Computational approaches to cognition: the bottom-up view.
Koch, C
1993-04-01
How can higher level aspects of cognition, such as figure-ground segregation, object recognition, selective focal attention and ultimately even awareness, be implemented at the level of synapses and neurons? A number of theoretical studies emerging from the connectionist and computational neuroscience communities are starting to address these issues using neurally plausible models.
Linguistics, Computers, and the Language Teacher. A Communicative Approach.
Underwood, John H.
This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…
Integration of case study approach, project design and computer ...
pc
2018-03-05
Mar 5, 2018 ... computer modeling used as a research method applied in the process ... conclusions discuss the benefits for students who analyzed the ... accounting education process the case study method should not .... providing travel safety information to passengers ... from literature readings with practical problems.
R for cloud computing an approach for data scientists
Ohri, A
2014-01-01
R for Cloud Computing looks at some of the tasks performed by business analysts on the desktop (PC era) and helps the user navigate the wealth of information in R and its 4000 packages as well as transition the same analytics using the cloud. With this information the reader can select both cloud vendors and the sometimes confusing cloud ecosystem as well as the R packages that can help process the analytical tasks with minimum effort and cost, and maximum usefulness and customization. The use of Graphical User Interfaces (GUI) and Step by Step screenshot tutorials is emphasized in this book to lessen the famous learning curve in learning R and some of the needless confusion created in cloud computing that hinders its widespread adoption. This will help you kick-start analytics on the cloud including chapters on cloud computing, R, common tasks performed in analytics, scrutiny of big data analytics, and setting up and navigating cloud providers. Readers are exposed to a breadth of cloud computing ch...
A "Service-Learning Approach" to Teaching Computer Graphics
Hutzel, Karen
2007-01-01
The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…
Simplified Predictive Models for CO2 Sequestration Performance Assessment
Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis
2014-05-01
We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
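The Latin Hypercube sampling used for the SLM proxy models can be sketched as follows. This is plain random LHS in standard-library Python; the paper uses a maximin variant and a Kriging metamodel fit, neither of which is reproduced here, and the two parameter ranges below are purely illustrative:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """One random Latin Hypercube design: each variable's range is split
    into n_samples equal strata and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    design = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)                 # random stratum order per variable
        width = (hi - lo) / n_samples
        column = [lo + (s + rng.random()) * width for s in strata]
        design.append(column)
    # Transpose: one row (tuple) per sample point.
    return list(zip(*design))

# Hypothetical input ranges, e.g. permeability variance and a gravity number.
points = latin_hypercube(10, [(0.0, 1.0), (0.1, 10.0)])
```

Each of the 10 points would then be run through the simulator, and a response surface fitted to the resulting injectivity or storage-efficiency values.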
James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew
2013-10-01
The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-01-01
Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
A Discrete Approach to Computer-Oriented Calculus.
Gordon, Sheldon P.
1979-01-01
Some of the implications and advantages of an instructional approach using results from the calculus of finite differences and finite sums, both for motivation and as tools leading to applications, are discussed. (MP)
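The finite-difference and finite-sum tools the article advocates can be illustrated briefly. This is a generic sketch of discrete calculus, assumed for illustration rather than taken from the article itself:

```python
def forward_difference(f, h=1):
    """Discrete analogue of the derivative: (Δf)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def finite_sum(f, a, b):
    """Discrete analogue of the definite integral: sum of f over [a, b)."""
    return sum(f(x) for x in range(a, b))

# With unit step, Δ(x^2) = 2x + 1, mirroring d(x^2)/dx = 2x.
d = forward_difference(lambda x: x * x)

# The 'fundamental theorem' of finite calculus: summing Δf telescopes to f(b) - f(a),
# so summing d over [0, 10) gives 10^2 - 0^2 = 100.
total = finite_sum(d, 0, 10)
```

The telescoping identity is the discrete counterpart of the fundamental theorem of calculus, which is exactly the pedagogical bridge the approach exploits.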
Simplified Laboratory Runoff Procedure (SLRP): Procedure and Application
Price, Richard
2000-01-01
The Simplified Laboratory Runoff Procedure (SLRP) was developed to provide a faster, less expensive approach to evaluate surface runoff water quality from dredged material placed in an upland environment...
A simplified quantum gravitational model of inflation
Tsamis, N C; Woodard, R P
2009-01-01
Inflationary quantum gravity simplifies drastically in the leading logarithm approximation. We show that the only counterterm which contributes in this limit is the 1-loop renormalization of the cosmological constant. We go further to make a simplifying assumption about the operator dynamics at leading logarithm order. This assumption is explicitly implemented at 1- and 2-loop orders, and we describe how it can be implemented nonperturbatively. We also compute the expectation value of an invariant observable designed to quantify the quantum gravitational back-reaction on inflation. Although our dynamical assumption may not prove to be completely correct, it does have the right time dependence, it can naturally produce primordial perturbations of the right strength, and it illustrates how a rigorous application of the leading logarithm approximation might work in quantum gravity. It also serves as a partial test of the 'null hypothesis' that there are no significant effects from infrared gravitons.
TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH
Lee, Hsien-Hsin S
2010-05-11
The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.
Software approach to automatic patching of analog computer
1973-01-01
The Automatic Patching Verification program (APV) is described which provides the hybrid computer programmer with a convenient method of performing a static check of the analog portion of his study. The static check insures that the program is patched as specified, and that the computing components being used are operating correctly. The APV language the programmer uses to specify his conditions and interconnections is similar to the FORTRAN language in syntax. The APV control program reads APV source program statements from an assigned input device. Each source program statement is processed immediately after it is read. A statement may select an analog console, set an analog mode, set a potentiometer or DAC, or read from the analog console and perform a test. Statements are read and processed sequentially. If an error condition is detected, an output occurs on an assigned output device. When an end statement is read, the test is terminated.
New Computational Approach to Electron Transport in Irregular Graphene Nanostructures
Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey
2009-03-01
For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.
A Neural Information Field Approach to Computational Cognition
2016-11-18
effects of distraction during list memory. These distractions include short and long delays before recall, and continuous distraction (forced rehearsal)… memory encoding and replay in hippocampus. Computational Neuroscience Society (CNS), p. 166, 2014. D. A. Pinotsis, Neural Field Coding of Short Term… performance of children learning to count in a SPA model; proposed a new SPA model of cognitive load using the N-back task; developed a new model of the
A Novel Biometric Approach for Authentication In Pervasive Computing Environments
Rachappa,; Divyajyothi M G; D H Rao
2016-01-01
The paradigm of embedding computing devices in our surrounding environment has gained increasing interest in recent years. Along with contemporary technology come challenges, the most important being the security and privacy aspect. Keeping the compactness and memory constraints of pervasive devices in mind, the biometric techniques proposed for identification should be robust and dynamic. In this work, we propose an emerging scheme that is based on a few exclusive human traits and characte...
Modeling Cu{sup 2+}-Aβ complexes from computational approaches
Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)
2015-09-15
Amyloid plaque formation and oxidative stress are two key events in the pathology of the Alzheimer disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu{sup 2+} metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu{sup 2+}-Aβ coordination and build plausible Cu{sup 2+}-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
Walton, S.
1987-01-01
The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field
Cask crush pad analysis using detailed and simplified analysis methods
Uldrich, E.D.; Hawkes, B.D.
1997-01-01
A crush pad has been designed and analyzed to absorb the kinetic energy of a hypothetically dropped spent nuclear fuel shipping cask into a 44-ft. deep cask unloading pool at the Fluorinel and Storage Facility (FAST). This facility, located at the Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering and Environmental Laboratory (INEEL), is a US Department of Energy site. The basis for this study is an analysis by Uldrich and Hawkes. The purpose of this analysis was to evaluate various hypothetical cask drop orientations to ensure that the crush pad design was adequate and the cask deceleration at impact was less than 100 g. It is demonstrated herein that a large spent fuel shipping cask, when dropped onto a foam crush pad, can be analyzed either by hand methods or by sophisticated dynamic finite element analysis using computer codes such as ABAQUS. Results from the two methods are compared to evaluate the accuracy of the simplified hand analysis approach
On a nonlinear Kalman filter with simplified divided difference approximation
Luo, Xiaodong; Hoteit, Ibrahim; Moroz, Irene M.
2012-01-01
We present a new ensemble-based approach that handles nonlinearity based on a simplified divided difference approximation through Stirling's interpolation formula, which is hence called the simplified divided difference filter (sDDF). The sDDF uses Stirling's interpolation formula to evaluate the statistics of the background ensemble during the prediction step, while at the filtering step the sDDF employs the formulae in an ensemble square root filter (EnSRF) to update the background to the analysis. In this sense, the sDDF is a hybrid of Stirling's interpolation formula and the EnSRF method, while the computational cost of the sDDF is less than that of the EnSRF. Numerical comparison between the sDDF and the EnSRF, with the ensemble transform Kalman filter (ETKF) as the representative, is conducted. The experiment results suggest that the sDDF outperforms the ETKF with a relatively large ensemble size, and thus is a good candidate for data assimilation in systems with moderate dimensions. © 2011 Elsevier B.V. All rights reserved.
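The core building block, evaluating the statistics of a nonlinearly transformed Gaussian variable via Stirling's interpolation formula (second-order divided differences), can be sketched in scalar form. This is an illustrative reduction only; the sDDF itself works on ensembles and combines this step with an EnSRF update:

```python
import math

def dd2_mean(f, m, sigma, h2=3.0):
    """Second-order divided-difference (Stirling interpolation) estimate of
    E[f(X)] for scalar X ~ N(m, sigma^2). The interval parameter h2 = 3
    matches the Gaussian kurtosis; this is the scalar DD2 mean formula,
    a sketch of the prediction-step building block, not the full filter."""
    h = math.sqrt(h2)
    return (((h2 - 1.0) / h2) * f(m)
            + (f(m + h * sigma) + f(m - h * sigma)) / (2.0 * h2))

# The estimate is exact for polynomials up to degree 3,
# e.g. E[X^2] = m^2 + sigma^2 for X ~ N(m, sigma^2).
est = dd2_mean(lambda x: x * x, 2.0, 0.5)
```

Because each evaluation needs only two extra function calls per dimension, this approximation is cheaper than a full ensemble square root update, which is the cost advantage the abstract refers to.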
Changes to a modelling approach with the use of computer
Andresen, Mette
2006-01-01
This paper reports on a Ph.D. project, which was part of a larger research and development project (see www.matnatverdensklasse.dk). In the reported part of the project, each student had had a laptop at his disposal for at least two years. The Ph.D. project inquires into the try-out in four classes... of teaching materials on differential equations. One of the objectives of the project was changes at two levels: 1) changes at curriculum level and 2) changes in the intentions of modelling and using models. The paper relates the changes at these two levels and discusses how the use of computers can serve...
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures...
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan
2011-05-14
We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time-step size to achieve a user-specified bound on the discretization error and allows time-step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
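As a hedged sketch (not the authors' estimator; the exponent, safety factor, and clipping bounds below are illustrative assumptions), the accept/reject logic of an elementary embedded-error step-size controller looks like this:

```python
def adapt_step(error, dt, tol, order=1, safety=0.9):
    """Return (accepted, new_dt) for one step of an error-controlled
    time integrator: accept when the estimate meets the tolerance,
    and rescale dt either way by the standard power law."""
    if error == 0.0:
        return True, dt * 2.0                       # error negligible: grow
    factor = safety * (tol / error) ** (1.0 / (order + 1))
    factor = min(5.0, max(0.2, factor))             # limit growth/shrink per step
    return error <= tol, dt * factor
```

Repeated rejection shrinks `dt` and repeated acceptance grows it, which is how a controller of this kind lets the step size wander over orders of magnitude during a simulation.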
Sinc-Approximations of Fractional Operators: A Computing Approach
Gerd Baumann
2015-06-01
We discuss a new approach to represent fractional operators by Sinc approximation using convolution integrals. A spin-off of the convolution representation is an effective inverse Laplace transform. Several examples demonstrate the application of the method to different practical problems.
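For orientation, the basic cardinal-series Sinc approximation that such methods build on can be written down directly. This is a generic sketch on a uniform grid, not the authors' convolution construction; the grid spacing `h` and truncation `N` below are arbitrary illustrative choices.

```python
import math

def sinc(t):
    """Normalized sinc: sin(pi t) / (pi t), with sinc(0) = 1."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def sinc_approx(f, h, N, x):
    """Truncated cardinal series: sum_{k=-N}^{N} f(k h) sinc(x/h - k)."""
    return sum(f(k * h) * sinc(x / h - k) for k in range(-N, N + 1))

# Example: a rapidly decaying analytic function such as a Gaussian is
# reproduced to high accuracy already on a modest uniform grid.
val = sinc_approx(lambda t: math.exp(-t * t), 0.25, 40, 0.3)
```

The rapid (near-exponential) convergence of this series for analytic, decaying functions is what makes Sinc methods attractive for representing the convolution kernels of fractional operators.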
Mechanisms of Neurofeedback: A Computation-theoretic Approach.
Davelaar, Eddy J
2018-05-15
Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Novel approach for dam break flow modeling using computational intelligence
Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar
2018-04-01
A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The motivation for seeking a new solution lies in the shortcomings of the existing analytical and numerical models, namely the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of the radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
A 3D computer graphics approach to brachytherapy planning.
Weichert, Frank; Wawro, Martin; Wilke, Carsten
2004-06-01
Intravascular brachytherapy (IVB) can significantly reduce the risk of restenosis after interventional treatment of stenotic arteries, if planned and applied correctly. In order to facilitate computer-based IVB planning, a three-dimensional reconstruction of the stenotic artery based on intravascular ultrasound (IVUS) sequences is desirable. For this purpose, the frames of the IVUS sequence are properly aligned in space, and possible gaps in between the IVUS frames are filled by interpolation with radial basis functions known from scattered data interpolation. The alignment procedure uses additional information which is obtained from biplane X-ray angiography performed simultaneously during the capturing of the IVUS sequence. After IVUS images and biplane angiography data are acquired from the patient, the vessel-wall borders and the IVUS catheter are detected by an active contour algorithm. Next, the twist (relative orientation) between adjacent IVUS frames is determined by a sequential triangulation method. The absolute orientation of each frame is established by a stochastic analysis based on anatomical landmarks. Finally, the reconstructed 3D vessel model is visualized by methods of combined volume and polygon rendering. The reconstruction is then used for the computation of the radiation distribution within the tissue, emitted from a beta-radiation source. All these steps are performed during the percutaneous intervention.
A simplified model for calculating early offsite consequences from nuclear reactor accidents
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1988-07-01
A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs
Y. Zhao
2017-06-01
Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and it could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for a typical complex curvature plate of ships. The research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.
A Computational Approach to the Quantification of Animal Camouflage
2014-06-01
...that live in different habitats. Another approach, albeit logistically difficult, would be to transport cuttlefish native to a chromatically poor habitat to a chromatically rich habitat. Many such challenges remain in the field of sensory ecology, not just of cephalopods in marine habitats but many...
McFedries, Paul
2013-01-01
A friendly, visual approach to learning the basics of Excel 2013. As the world's leading spreadsheet program, Excel is a spreadsheet and data analysis tool that is part of the Microsoft Office suite. The new Excel 2013 includes new features and functionalities that require users of older versions to re-learn the application. However, whether you're switching from an earlier version or learning Excel for the first time, this easy-to-follow visual guide gets you going with Excel 2013 quickly and easily. Numbered steps as well as full-color screen shots, concise information, and helpful...
Engineering approach to model and compute electric power markets settlements
Kumar, J.; Petrov, V.
2006-01-01
Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure ISO market design correctness is to analyze how well market price signals create incentives or penalties for creating an efficient market to achieve market design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on various differences in settlement design approaches to electrical power market design, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs
Computational approaches to identify functional genetic variants in cancer genomes
Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris
2013-01-01
The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority of these drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.
Computer aided fixture design - A case based approach
Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom
2017-11-01
Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and are placed into position subject to assembly conditions. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a purpose-built retrieval system is proposed. The Visual Basic (VB) programming language is used in integration with the SolidWorks API (Application Programming Interface) module for a better retrieval procedure, reducing computational time. These properties are incorporated in numerical simulation to determine the best fit for practical use.
Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...
Nicholson, Anita; Tobin, Mary
2006-01-01
This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.
A dynamical-systems approach for computing ice-affected streamflow
Holtschlag, David J.
1996-01-01
A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.
Electropolishing-Orthodontic Office: A Simplified Approach
Sudhir Munjal
2014-01-01
Electropolishing plays an important role in dentistry by providing enhanced mechanical properties, better corrosion protection, improved physical appearance, and ease of cleaning of various metallic attachments. To achieve all these objectives, we present here a simple and economical way to fabricate an electropolisher, which has wide applications in the orthodontic office (recycling brackets, annealing retention wires, etc.).
Harris Recurrence and MCMC: A Simplified Approach
Asmussen, Søren; Glynn, Peter W.
A key result underlying the theory of MCMC is that any η-irreducible Markov chain having a transition density with respect to η and possessing a stationary distribution is automatically positive Harris recurrent. This paper provides a short self-contained proof of this fact.
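For context, here is a generic sketch (not from the paper) of the kind of chain the result covers: a random-walk Metropolis sampler with a symmetric proposal has a transition density and, by construction, a stationary distribution, here the standard normal.

```python
import math
import random

def metropolis_normal(n_steps, x0=0.0, step=1.0, seed=42):
    """Random-walk Metropolis chain targeting the standard normal.
    Symmetric uniform proposal; accept with prob min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)              # symmetric proposal
        # log acceptance ratio for pi(x) proportional to exp(-x^2 / 2)
        if math.log(rng.random()) < (x * x - y * y) / 2.0:
            x = y
        chain.append(x)
    return chain

# Long-run averages over the chain approximate expectations under pi.
samples = metropolis_normal(20000)
```

Positive Harris recurrence is what licenses reading those long-run averages as expectations from any starting point, which is why the simplified proof matters in practice.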
Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.
Scheeline, Alexander; Mork, Brian J.
1988-01-01
Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)
Chronic Meningitis: Simplifying a Diagnostic Challenge.
Baldwin, Kelly; Whiting, Chris
2016-03-01
Chronic meningitis can be a diagnostic dilemma for even the most experienced clinician. Many times, the differential diagnosis is broad and encompasses autoimmune, neoplastic, and infectious etiologies. This review will focus on a general approach to chronic meningitis to simplify the diagnostic challenges many clinicians face. The article will also review the most common etiologies of chronic meningitis in some detail including clinical presentation, diagnostic testing, treatment, and outcomes. By using a case-based approach, we will focus on the key elements of clinical presentation and laboratory analysis that will yield the most rapid and accurate diagnosis in these complicated cases.
Perturbation approach for nuclear magnetic resonance solid-state quantum computation
G. P. Berman
2003-01-01
The dynamics of a nuclear-spin quantum computer with a large number (L=1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to different orders of the perturbation theory and tested using the exact numerical solution.
Simplified model of a PWR primary circuit
Souza, A.L.; Faya, A.J.G.
1988-07-01
The computer program RENUR was developed to perform a very simplified simulation of a typical PWR primary circuit. The program has mathematical models for the thermal-hydraulics of the reactor core and the pressurizer, the rest of the circuit being treated as a single volume. Heat conduction in the fuel rod is analyzed by a nodal model. Average and hot channels are treated so that the bulk response of the core and the DNBR can be evaluated. A homogeneous model is employed in the pressurizer. Results are presented for a steady-state situation as well as for a loss-of-load transient. Agreement with the results of more elaborate computer codes is good, with a substantial reduction in computer costs. (author)
Data analysis of asymmetric structures advanced approaches in computational statistics
Saito, Takayuki
2004-01-01
Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.
Safe manning of merchant ships: an approach and computer tool
Alapetite, Alexandre; Kozin, Igor
2017-01-01
In the shipping industry, staffing expenses have become a vital competition parameter. In this paper, an approach and a software tool are presented to support decisions on the staffing of merchant ships. The tool is implemented in the form of a Web user interface that makes use of discrete-event simulation and allows estimation of the workload and of whether different scenarios are successfully performed, taking account of the number of crewmembers, watch schedules, distribution of competencies, and others. The software library 'SimManning' at the core of the project is provided as open source...
A Novel Approach for ATC Computation in Deregulated Environment
C. K. Babulal
2006-09-01
This paper presents a novel method for the determination of Available Transfer Capability (ATC) based on fuzzy logic. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is used to determine the step length of the homotopy continuation power flow method by considering the values of the load bus voltage and the change in load bus voltage. The approach is compared with the already available method. The proposed method determines ATC for various transactions by considering the thermal limit, voltage limit and static voltage stability limit, and is tested on the WSCC 9-bus system, the New England 39-bus system and the Indian 181-bus system.
Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan
2001-01-01
The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...
A survey on computational intelligence approaches for predictive modeling in prostate cancer
Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG
2017-01-01
Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an...
A computational Bayesian approach to dependency assessment in system reliability
Yontay, Petek; Pan, Rong
2016-01-01
Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupling with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
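A minimal illustration of the conjugate Bayesian update that component-level reliability assessments of this kind typically build on (a generic sketch, not the paper's method; the Bayesian-network formulation generalizes this to coupled conditional probabilities across system levels):

```python
def beta_posterior(alpha, beta, failures, trials):
    """Conjugate update for a component failure probability:
    Beta(alpha, beta) prior + binomial failure data ->
    Beta(alpha + failures, beta + successes) posterior."""
    a = alpha + failures
    b = beta + (trials - failures)
    mean = a / (a + b)                 # posterior mean failure probability
    return a, b, mean

# Example: a vague Beta(1, 1) prior combined with 2 failures
# observed in 50 demands yields posterior mean 3/52.
a, b, m = beta_posterior(1.0, 1.0, 2, 50)
```

Expert opinion enters naturally through the prior parameters, while field data enter through the counts, which is the same information-combination pattern the abstract describes at the system level.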
Simplified analysis for liquid pathway studies
Codell, R.B.
1984-08-01
The analysis of the potential contamination of surface water via groundwater contamination from severe nuclear accidents is routinely performed during licensing reviews. This analysis is facilitated by the methods described in this report, which are codified into a BASIC-language computer program, SCREENLP. This program performs simplified calculations for groundwater and surface-water transport and calculates population doses to potential users of the contaminated water, irrespective of possible mitigation methods. The results are then compared to similar analyses performed using data for the generic sites in NUREG-0440, Liquid Pathway Generic Study, to determine whether the site being investigated would pose any unusual liquid pathway hazards
Simplified scheme for radioactive plume calculations
Gibson, T.A.; Montan, D.N.
1976-01-01
A simplified mathematical scheme to estimate external whole-body γ radiation exposure rates from gaseous radioactive plumes was developed for the Rio Blanco Gas Field Nuclear Stimulation Experiment. The method enables one to calculate swiftly, in the field, downwind exposure rates knowing the meteorological conditions and γ radiation exposure rates measured by detectors positioned near the plume source. The method is straightforward and easy to use under field conditions without the help of mini-computers. It is applicable to a wide range of radioactive plume situations. It should be noted that the Rio Blanco experiment was detonated on May 17, 1973, and no seep or release of radioactive material occurred
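The Rio Blanco scheme itself is not reproduced here, but the flavor of such closed-form field estimates can be seen in the textbook Gaussian-plume expression for ground-level centerline concentration from a continuous point release (an illustrative sketch; Q, u, sigma_y, sigma_z and H are the usual release rate, wind speed, horizontal and vertical dispersion parameters, and effective release height):

```python
import math

def centerline_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level centerline concentration chi(x, 0, 0) for a
    continuous point source at effective height H:
    chi = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 sigma_z^2))."""
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H * H / (2.0 * sigma_z * sigma_z)))
```

For a ground-level release (H = 0) this collapses to Q / (pi u sigma_y sigma_z), the kind of expression simple enough to evaluate by hand in the field, which is the point the abstract emphasizes.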
Simplified methodology for Angra 1 containment analysis
Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.
1991-08-01
A simplified methodology of analysis was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The obtained data were compared with the Angra 1 Final Safety Analysis Report and with those calculated by a Detailed Model. The results obtained by this new methodology, such as the small computational simulation time, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author)
A functional analytic approach to computer-interactive mathematics.
Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.
Computationally efficient model predictive control algorithms a neural network approach
Ławryńczuk, Maciej
2014-01-01
This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). · The MPC algorithms with neural approximation with no on-line linearization. · The MPC algorithms with guaranteed stability and robustness. · Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
Granular computing and decision-making interactive and iterative approaches
Chen, Shyi-Ming
2015-01-01
This volume is devoted to interactive and iterative processes of decision-making– I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems. The volume provides the reader with an updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, brings new concepts, architectures and practice of fuzzy decision-making providing the reader with various application studies. The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...
Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach
Seth A. Herd
2013-01-01
We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.
Promises and Pitfalls of Computer-Supported Mindfulness: Exploring a Situated Mobile Approach
Ralph Vacca
2017-12-01
Computer-supported mindfulness (CSM) is a burgeoning area filled with varied approaches such as mobile apps and EEG headbands. However, many of these approaches focus on providing meditation guidance. The ubiquity of mobile devices may provide new opportunities to support mindfulness practices that are more situated in everyday life. In this paper, a new situated mindfulness approach is explored through a specific mobile app design. Through an experimental design, the approach is compared to traditional audio-based mindfulness meditation, and a mind-wandering control, over a one-week period. The study demonstrates the viability of a situated mobile mindfulness approach to induce mindfulness states. However, phenomenological aspects of the situated mobile approach suggest both promises and pitfalls for computer-supported mindfulness using a situated approach.
Advanced Computational Modeling Approaches for Shock Response Prediction
Derkevorkian, Armen; Kolaini, Ali R.; Peterson, Lee
2015-01-01
Motivation: (1) The activation of pyroshock devices such as explosives, separation nuts, pin-pullers, etc. produces high-frequency transient structural response, typically from a few tens of Hz to several hundreds of kHz. (2) Lack of reliable analytical tools makes the prediction of appropriate design and qualification test levels a challenge. (3) In the past few decades, several attempts have been made to develop methodologies that predict the structural responses to shock environments. (4) Currently, there is no validated approach that is viable to predict shock environments over the full frequency range (i.e., 100 Hz to 10 kHz). Scope: (1) Model, analyze, and interpret space structural systems with complex interfaces and discontinuities, subjected to shock loads. (2) Assess the viability of a suite of numerical tools to simulate transient, non-linear solid mechanics and structural dynamics problems, such as shock wave propagation.
A zero-dimensional approach to compute real radicals
Silke J. Spang
2008-04-01
The notion of real radicals is a fundamental tool in Real Algebraic Geometry. It takes the role of the radical ideal in Complex Algebraic Geometry. In this article I shall describe the zero-dimensional approach and the efficiency improvement I found during the work on my diploma thesis at the University of Kaiserslautern (cf. [6]). The main focus of this article is on maximal ideals and the properties they have to fulfil to be real. New theorems and properties about maximal ideals are introduced which yield a heuristic, prepare_max, which splits the maximal ideals into three classes: real, not real, and the class where we can't be sure whether they are real or not. For the latter we have to apply a coordinate change into general position until we are sure about realness. Finally, this yields a randomized algorithm for computing real radicals. The underlying theorems and algorithms are described in detail.
Data science in R a case studies approach to computational reasoning and problem solving
Nolan, Deborah
2015-01-01
Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar
Simple and practical approach for computing the ray Hessian matrix in geometrical optics.
Lin, Psang Dain
2018-02-01
A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.
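The substitution idea can be illustrated with a toy symbolic computation (this is a schematic of the general technique, not the paper's actual ray-tracing formulation; the "optical path" expression below is invented):

```python
# Sketch of the angle-elimination idea: an expression that depends on an
# angle theta requires trigonometric calls when differentiated, but after
# substituting c = cos(theta), s = sin(theta) (with the implicit side
# condition c**2 + s**2 = 1) every derivative becomes a polynomial.
import sympy as sp

theta = sp.symbols('theta')
c, s = sp.symbols('c s')

# Toy expression in angular form and in (c, s) form.
f_angular = sp.cos(theta)**2 * sp.sin(theta)
f_poly = c**2 * s

# In the angular formulation, second derivatives remain trigonometric.
d2_angular = sp.diff(f_angular, theta, 2)

# In the polynomial formulation, all Hessian entries are polynomials.
d2_cc = sp.diff(f_poly, c, 2)      # 2*s
d2_cs = sp.diff(f_poly, c, s)      # 2*c

# Sanity check: the two formulations describe the same quantity.
assert sp.simplify(f_poly.subs({c: sp.cos(theta), s: sp.sin(theta)}) - f_angular) == 0
print(d2_cc, d2_cs)
```

Because the polynomial derivatives involve no trigonometric function calls, repeated Hessian evaluations become cheap, which is the source of the reported speed-up.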
Effects of artificial gravity on the cardiovascular system: Computational approach
Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.
2016-09-01
steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configurations and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.
Implementation of a Novel Educational Modeling Approach for Cloud Computing
Sara Ouahabi
2014-12-01
The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers’ needs. Due to its advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. In education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping interoperability between content in an inter-university Cloud is not always easy. Diffusion of pedagogical content on the Cloud by different e-learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining an application in J2EE with the Reload editor to model heterogeneous content in the cloud. The new approach we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners, and facilitates the identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.
A Hybrid Soft Computing Approach for Subset Problems
Broderick Crawford
2013-01-01
Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partitions) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving this kind of problem. ACO is a swarm intelligence metaheuristic inspired by the foraging behavior of ants. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. On the other hand, in constraint programming, the solving process of Constraint Satisfaction Problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistencies either prior to or during search. Our hybrid approach was tested with set covering and set partitioning benchmark datasets. We observed that the performance of ACO improved when this filtering technique was embedded in its constructive phase.
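The ACO constructive phase can be sketched on a toy set covering instance (illustrative only; the instance is invented, and the arc-consistency filtering that the hybrid embeds in this phase is omitted):

```python
# Minimal ant-colony optimization sketch for set covering.
import random

# Universe of items and candidate subsets with costs (toy instance).
UNIVERSE = frozenset(range(6))
SUBSETS = [frozenset(s) for s in ([0, 1, 2], [2, 3], [3, 4, 5], [0, 4], [1, 5])]
COSTS = [3.0, 1.0, 3.0, 2.0, 2.0]

def construct(pheromone, rng):
    """One ant builds a cover, biased by pheromone and marginal coverage."""
    uncovered, chosen = set(UNIVERSE), []
    while uncovered:
        candidates = [i for i in range(len(SUBSETS))
                      if i not in chosen and SUBSETS[i] & uncovered]
        # Desirability: pheromone * (newly covered items / cost).
        weights = [pheromone[i] * len(SUBSETS[i] & uncovered) / COSTS[i]
                   for i in candidates]
        pick = rng.choices(candidates, weights=weights)[0]
        chosen.append(pick)
        uncovered -= SUBSETS[pick]
    return chosen

def aco(iterations=50, n_ants=10, rho=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0] * len(SUBSETS)
    best, best_cost = None, float('inf')
    for _ in range(iterations):
        for _ in range(n_ants):
            sol = construct(pheromone, rng)
            cost = sum(COSTS[i] for i in sol)
            if cost < best_cost:
                best, best_cost = sol, cost
        # Evaporate, then reinforce the best-so-far solution.
        pheromone = [(1 - rho) * p for p in pheromone]
        for i in best:
            pheromone[i] += 1.0 / best_cost
    return best, best_cost

best, best_cost = aco()
assert set().union(*(SUBSETS[i] for i in best)) == UNIVERSE  # valid cover
print(best, best_cost)
```

In the hybrid of the paper, a filtering step would additionally prune candidate subsets that cannot appear in any consistent completion before each ant's choice.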
Driving profile modeling and recognition based on soft computing approach.
Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya
2009-04-01
Advancements in biometrics-based authentication have led to its increasing prominence, and it is being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers.
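The GMM-based profiling step can be sketched as follows. This is a hypothetical illustration with synthetic pedal-pressure data: per-driver GMMs are fitted and identification is done by maximum likelihood, whereas the paper feeds GMM-derived features into fuzzy neural networks (that stage is replaced by direct likelihood scoring here for brevity):

```python
# Per-driver Gaussian mixture models on synthetic pedal-pressure features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic (accelerator, brake) pressure feature vectors for two drivers
# with distinct driving styles (different means).
driver_a = rng.normal(loc=[0.3, 0.1], scale=0.05, size=(200, 2))
driver_b = rng.normal(loc=[0.6, 0.3], scale=0.05, size=(200, 2))

models = {}
for name, data in [("A", driver_a), ("B", driver_b)]:
    models[name] = GaussianMixture(n_components=2, random_state=0).fit(data)

def identify(sample):
    """Return the driver whose GMM gives the sample the highest mean log-likelihood."""
    return max(models, key=lambda name: models[name].score(sample))

probe = rng.normal(loc=[0.3, 0.1], scale=0.05, size=(50, 2))  # driver-A style
print(identify(probe))
```

A real system would of course use many more feature dimensions (e.g., windowed pedal-pressure statistics) and far more data per driver.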
Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus
2014-01-01
We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
A scalable approach to modeling groundwater flow on massively parallel computers
Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.
1995-12-01
We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen
2013-01-01
Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…
20 CFR 404.241 - 1977 simplified old-start method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false 1977 simplified old-start method. 404.241... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Old-Start Method of Computing Primary Insurance Amounts § 404.241 1977 simplified old-start method. (a) Who is qualified. To qualify for the old...
Energy-aware memory management for embedded multimedia systems a computer-aided design approach
Balasa, Florin
2011-01-01
Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques
Simplified High-Power Inverter
Edwards, D. B.; Rippel, W. E.
1984-01-01
Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.
Bonnefoi, H; Litière, S; Piccart, M; MacGrogan, G; Fumoleau, P; Brain, E; Petit, T; Rouanet, P; Jassem, J; Moldovan, C; Bodmer, A; Zaman, K; Cufer, T; Campone, M; Luporsi, E; Malmström, P; Werutsky, G; Bogaerts, J; Bergh, J; Cameron, D A
2014-06-01
Pathological complete response (pCR) following chemotherapy is strongly associated with both breast cancer subtype and long-term survival. Within a phase III neoadjuvant chemotherapy trial, we sought to determine whether the prognostic implications of pCR, TP53 status and treatment arm (taxane versus non-taxane) differed between intrinsic subtypes. Patients were randomized to receive either six cycles of anthracycline-based chemotherapy or three cycles of docetaxel then three cycles of epirubicin/docetaxel (T-ET). pCR was defined as no evidence of residual invasive cancer (or very few scattered tumour cells) in primary tumour and lymph nodes. We used a simplified intrinsic subtypes classification, as suggested by the 2011 St Gallen consensus. Interactions between pCR, TP53 status, treatment arm and intrinsic subtype on event-free survival (EFS), distant metastasis-free survival (DMFS) and overall survival (OS) were studied using landmark and two-step multivariate analyses. Sufficient data for pCR analyses were available in 1212 (65%) of 1856 patients randomized. pCR occurred in 222 of 1212 (18%) patients: 37 of 496 (7.5%) luminal A, 22 of 147 (15%) luminal B/HER2 negative, 51 of 230 (22%) luminal B/HER2 positive, 43 of 118 (36%) HER2 positive/non-luminal, 69 of 221 (31%) triple negative (TN). The prognostic effect of pCR on EFS did not differ between subtypes and was an independent predictor for better EFS [hazard ratio (HR) = 0.40, P analysis. EORTC 10994/BIG 1-00 Trial registration number NCT00017095. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Simplified stock markets described by number operators
Bagarello, F.
2009-06-01
In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economic context, and we discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolios of the various traders in the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed-point-like approximation.
Cultural Distance-Aware Service Recommendation Approach in Mobile Edge Computing
Yan Li
2018-01-01
In the era of big data, traditional computing systems and paradigms are not efficient and are even difficult to use. For high-performance big data processing, mobile edge computing is emerging as a complementary framework to cloud computing. In this new computing architecture, services are provided within close proximity of mobile users by servers at the edge of the network. The traditional collaborative filtering recommendation approach only focuses on the similarity extracted from the rating data, which may lead to an inaccurate expression of user preference. In this paper, we propose a cultural distance-aware service recommendation approach which focuses not only on the similarity but also on the local characteristics and preferences of users. Our approach employs the cultural distance to express user preference and combines it with similarity to predict user ratings and recommend the services with higher ratings. In addition, considering the extreme sparsity of the rating data, missing rating prediction based on collaborative filtering is introduced in our approach. The experimental results based on real-world datasets show that our approach outperforms the traditional recommendation approaches in terms of the reliability of recommendation.
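The idea of blending rating similarity with cultural closeness can be sketched as follows (the exact combination rule of the paper is not reproduced; the linear blend and the toy rating/distance matrices below are illustrative assumptions):

```python
# Rating prediction that blends rating similarity with cultural closeness.
import numpy as np

# Rows = users, cols = services; 0 marks a missing rating.
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 3.0, 4.0],
              [1.0, 2.0, 5.0]])
# Pairwise cultural distance in [0, 1] (0 = identical background).
D = np.array([[0.0, 0.2, 0.9],
              [0.2, 0.0, 0.8],
              [0.9, 0.8, 0.0]])

def cosine_sim(u, v):
    mask = (u > 0) & (v > 0)            # co-rated services only
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(user, item, alpha=0.5):
    """Weighted average of neighbours' ratings; each weight blends rating
    similarity with cultural closeness (1 - distance)."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        w = alpha * cosine_sim(R[user], R[other]) + (1 - alpha) * (1 - D[user, other])
        num += w * R[other, item]
        den += w
    return num / den if den else 0.0

p = predict(0, 2)   # predict user 0's rating for service 2
print(round(p, 2))
```

With `alpha=1.0` this reduces to plain similarity-weighted collaborative filtering; lowering `alpha` pulls the prediction toward culturally closer users.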
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2015-01-01
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
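The emulator-based workflow can be sketched on a toy one-parameter problem: an ensemble of runs of an "expensive" model trains a Gaussian-process emulator, and MCMC then samples against the cheap emulator instead of the model itself. The model, prior, and noise level below are invented for illustration and are not the paper's density-functional setup:

```python
# Emulator-accelerated Bayesian calibration, schematically.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def eta(theta):
    """Stand-in for a physics model that is expensive to evaluate."""
    return np.sin(3.0 * theta) + theta

theta_true, sigma = 0.6, 0.05
y_obs = eta(theta_true) + rng.normal(0.0, sigma)

# Ensemble of model runs -> emulator (the key cost-saving step).
design = np.linspace(0.0, 1.0, 15).reshape(-1, 1)
emulator = GaussianProcessRegressor().fit(design, eta(design.ravel()))

def log_post(theta):
    if not 0.0 <= theta <= 1.0:          # uniform prior on [0, 1]
        return -np.inf
    pred = emulator.predict(np.array([[theta]]))[0]
    return -0.5 * ((y_obs - pred) / sigma) ** 2

# Random-walk Metropolis on the emulated posterior.
samples, theta = [], 0.5
lp = log_post(theta)
for _ in range(4000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior_mean = float(np.mean(samples[1000:]))
print(posterior_mean)   # should land near theta_true
```

The full method would also carry the emulator's own predictive uncertainty into the likelihood; that term is dropped here for brevity.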
Medical imaging in clinical applications algorithmic and computer-based approaches
Bhateja, Vikrant; Hassanien, Aboul
2016-01-01
This volume comprises 21 selected chapters, including two overview chapters: one devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, and one on different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches and to utilize them in real-world clinical applications. The book is divided into four parts. Part I: Clinical Applications of Medical Imaging; Part II: Classification and Clustering; Part III: Computer-Aided Diagnosis (CAD) Tools and Case Studies; and Part IV: Bio-inspired Computer-Aided Diagnosis Techniques.
Yan Jun; Yao Qingshan
1999-01-01
Virtual reality is a computer-based system for creating and experiencing virtual worlds. As an emerging branch of the computer discipline, this approach is expanding rapidly and is widely used in a variety of industries such as national defence, research, engineering, medicine and air navigation. The author presents the fundamentals of virtual reality and studies some aspects of interest for use in nuclear power emergency planning.
D-Wave's Approach to Quantum Computing: 1000-qubits and Counting!
CERN. Geneva
2017-01-01
In this talk I will describe D-Wave's approach to quantum computing, including the system architecture of our 1000-qubit D-Wave 2X, its programming model, and performance benchmarks. Furthermore, I will describe how the native optimization and sampling capabilities of the quantum processor can be exploited to tackle problems in a variety of fields including medicine, machine learning, physics, and computational finance.
Mathematics of shape description a morphological approach to image processing and computer graphics
Ghosh, Pijush K
2009-01-01
Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics by presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: Explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; Promotes interaction of image processing geochronology and mathematics in the field of algebraic geometry; P...
A Cognitive Computing Approach for Classification of Complaints in the Insurance Industry
Forster, J.; Entrup, B.
2017-10-01
In this paper we present and evaluate a cognitive computing approach for classification of dissatisfaction and four specific complaint classes in correspondence between insurance clients and an insurance company. A cognitive computing approach combines classical natural language processing methods, machine learning algorithms and the evaluation of hypotheses. The approach combines a MaxEnt machine learning algorithm with language modelling, tf-idf and sentiment analytics to create a multi-label text classification model. The resulting model is trained and tested with a set of 2500 original insurance communication documents written in German, which have been manually annotated by the partnering insurance company. With an F1 score of 0.9, a reliable text classification component has been implemented and evaluated. A final outlook towards a cognitive computing insurance assistant is given at the end.
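The core of such a pipeline, tf-idf features feeding a maximum-entropy (logistic regression) classifier with one binary model per complaint label, can be sketched as follows. The toy English documents and labels are invented; the paper's language-modelling and sentiment features, and its German corpus, are omitted:

```python
# Multi-label complaint classification: tf-idf + one-vs-rest MaxEnt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "the claim payout was delayed for months",
    "your agent was rude on the phone",
    "delayed response and a rude agent",
    "premium increase was never explained",
    "no explanation for the higher premium and slow payout",
]
labels = [{"delay"}, {"service"}, {"delay", "service"},
          {"pricing"}, {"pricing", "delay"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)          # one binary column per label

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(docs, Y)

pred = mlb.inverse_transform(clf.predict(["the payout is delayed again"]))
print(pred)
```

Logistic regression is the standard realization of a MaxEnt classifier; the one-vs-rest wrapper turns it into the multi-label model the abstract describes.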
Mason, Eric; Van Rompaey, Jason; Carrau, Ricardo; Panizza, Benedict; Solares, C Arturo
2014-03-01
Advances in the field of skull base surgery aim to maximize anatomical exposure while minimizing patient morbidity. The petroclival region of the skull base presents numerous challenges for surgical access due to the complex anatomy. The transcochlear approach to the region provides adequate access; however, the resection involved sacrifices hearing and results in at least a grade 3 facial palsy. An endoscopic endonasal approach could potentially avoid negative patient outcomes while providing a desirable surgical window in a select patient population. Cadaveric study. Endoscopic access to the petroclival region was achieved through an endonasal approach. For comparison, a transcochlear approach to the clivus was performed. Different facets of the dissections, such as bone removal volume and exposed surface area, were computed using computed tomography analysis. The endoscopic endonasal approach provided a sufficient corridor to the petroclival region with significantly less bone removal and nearly equivalent exposure of the surgical target, thus facilitating the identification of the relevant anatomy. The lateral approach allowed for better exposure from a posterolateral direction until the inferior petrosal sinus; however, the endonasal approach avoided labyrinthine/cochlear destruction and facial nerve manipulation while providing an anteromedial viewpoint. The endonasal approach also avoided external incisions and cosmetic deficits. The endonasal approach required significant sinonasal resection. Endoscopic access to the petroclival region is a feasible approach. It potentially avoids hearing loss, facial nerve manipulation, and cosmetic damage. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
The soft computing-based approach to investigate allergic diseases: a systematic review.
Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano
2017-01-01
Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques such as artificial neural networks, support vector machines, Bayesian networks and fuzzy logic to investigate their performances in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered within the PROSPERO database (CRD42016038894). The research was performed on PubMed and ScienceDirect, covering the period starting from September 1, 1990 through April 19, 2016. The review included 27 studies related to allergic diseases and soft computing performances. We observed promising results with an overall accuracy of 86.5%, mainly focused on asthmatic disease. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in case of lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.
A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises
Kuada, Eric; Olesen, Henning
2011-01-01
This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises, and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay, or paying only a minimal fee, for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS, but there is a need...
Barquin, J.; Centeno, E.; Reneses, J.
2004-01-01
The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)
A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems
Aleksander Byrski
2010-01-01
In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems, as well as the means to manipulate the configuration of the computation according to crises or the user’s specific demands.
On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems
Junge, Oliver; Kevrekidis, Ioannis G.
2017-06-01
We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as saddle-type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
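The variational idea admits a minimal one-dimensional illustration: choose N points and minimize a symmetric distance between the point set X and its image f(X); any zero of the objective is an invariant set. The logistic map and the particular distance below are toy choices, not the paper's examples, and the Lennard-Jones extension is omitted:

```python
# Invariant sets as minimizers of a set-to-image distance.
import numpy as np
from scipy.optimize import minimize

A = 3.2
def f(x):
    return A * x * (1.0 - x)            # logistic map

def objective(x):
    fx = f(x)
    # Squared distance from each image point to the nearest set point,
    # plus the reverse direction (a Hausdorff-like symmetric distance).
    d1 = ((fx[:, None] - x[None, :]) ** 2).min(axis=1).sum()
    d2 = ((x[:, None] - fx[None, :]) ** 2).min(axis=1).sum()
    return d1 + d2

x0 = np.array([0.45, 0.85])             # initial two-point set
res = minimize(objective, x0, method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12})
x_inv = res.x
print(np.sort(x_inv), res.fun)          # an invariant set of f
```

At the minimizer the image of the point set coincides with the set itself, so every point satisfies f(f(x)) = x; for this parameter value the optimizer typically lands on the period-2 orbit of the map.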
A Computer-Aided FPS-Oriented Approach for Construction Briefing
Xiaochun Luo; Qiping Shen
2008-01-01
Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful in performing product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic, efficient identification, clarification, and representation of client requirements in trial runs. The limitations of the approach and future research work are also discussed at the end of the paper.
Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie
2016-01-01
Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807
A New Approach to Practical Active-Secure Two-Party Computation
Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio
2012-01-01
We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce ... a number of novel techniques for relating the outputs and inputs of OTs in a larger construction ...
Wenger, Etienne
2014-01-01
Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic
Vella, Michael; Cannon, Robert C; Crook, Sharon; Davison, Andrew P; Ganapathy, Gautham; Robinson, Hugh P C; Silver, R Angus; Gleeson, Padraig
2014-01-01
NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two Application Programming Interfaces (APIs) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment.
Simplified phenomenology for colored dark sectors
Hedri, Sonia El; Kaminska, Anna; Vries, Maikel de [PRISMA Cluster of Excellence & Mainz Institute for Theoretical Physics,Johannes Gutenberg University,55099 Mainz (Germany); Zurita, Jose [Institute for Nuclear Physics (IKP), Karlsruhe Institute of Technology,Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Institute for Theoretical Particle Physics (TTP), Karlsruhe Institute of Technology,Engesserstraße 7, D-76128 Karlsruhe (Germany)
2017-04-20
We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.
A new type of simplified fuzzy rule-based system
Angelov, Plamen; Yager, Ronald
2012-02-01
Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. They use the same type of scalar fuzzy sets, defined per input variable in the antecedent part, which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative, non-parametric form of the rule antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, a parametric data-space decomposition/partitioning in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or merely approximating their distribution. The proposed approach takes fully and exactly into account the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples aiming to illustrate the concept.
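The density-based membership idea can be illustrated with a minimal sketch; the Cauchy-type density and the synthetic clouds below are assumptions, not the authors' exact formulation:

```python
import numpy as np

def cloud_membership(x, cloud):
    # Cauchy-type local density: membership grows as x sits closer, on
    # average, to ALL samples of the cloud -- no prototype and no scalar
    # membership function is imposed.
    d2 = ((cloud - x) ** 2).sum(axis=1)
    return 1.0 / (1.0 + d2.mean())

rng = np.random.default_rng(1)
cloud_a = rng.normal(0.0, 0.3, size=(50, 2))   # samples forming cloud A
cloud_b = rng.normal(3.0, 0.3, size=(50, 2))   # samples forming cloud B
x = np.array([0.1, -0.2])                      # query point near cloud A
m_a = cloud_membership(x, cloud_a)
m_b = cloud_membership(x, cloud_b)
print(m_a > m_b)  # True: the query belongs far more to cloud A
```

Unlike a distance-to-prototype rule, the membership here depends on the full sample distribution of each cloud, which is the contrast with clustering that the abstract draws.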
Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis
Nikkilä, J.; Vos, de W.M.
2010-01-01
GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex
Can Computers Be Used for Whole Language Approaches to Reading and Language Arts?
Balajthy, Ernest
Holistic approaches to the teaching of reading and writing, most notably the Whole Language movement, reject the philosophy that language skills can be taught. Instead, holistic teachers emphasize process, and they structure the students' classroom activities to be rich in language experience. Computers can be used as tools for whole language…
How people learn while playing serious games: A computational modelling approach
Westera, Wim
2017-01-01
This paper proposes a computational modelling approach for investigating the interplay of learning and playing in serious games. A formal model is introduced that allows for studying the details of playing a serious game under diverse conditions. The dynamics of player action and motivation is based
Yeh, Duen-Yian; Cheng, Ching-Hsue
2016-01-01
This study examined the relationships among children's computer game use, academic achievement and parental governing approach to propose probable answers for the doubts of Taiwanese parents. 355 children (ages 11-14) were randomly sampled from 20 elementary schools in a typically urbanised county in Taiwan. Questionnaire survey (five questions)…
A Computer-Based Game That Promotes Mathematics Learning More than a Conventional Approach
McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.; Forlizzi, Jodi
2017-01-01
Excitement about learning from computer-based games has been palpable in recent years and has led to the development of many educational games. However, there are relatively few sound empirical studies in the scientific literature that have shown the benefits of learning mathematics from games as opposed to more traditional approaches. The…
Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach
Hedrick, C.E.
1976-01-01
The concept and computational aspects of a Monte-Carlo statistical approach in relating structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized
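The Monte Carlo link between microsphere structure and sample uranium content can be sketched as follows; every physical parameter here (diameter distribution, kernel fraction, density) is a placeholder for illustration, not data from the program described:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_uranium_mass(n_spheres, mean_d=500e-6, sd_d=25e-6,
                        kernel_fraction=0.3, rho_u=2.0e3):
    # Hypothetical microsphere model: diameters (m) drawn from a normal
    # distribution; a fixed fraction of each sphere's volume is taken to be
    # uranium-bearing kernel of density rho_u (kg/m^3).
    d = rng.normal(mean_d, sd_d, n_spheres)
    vol = np.pi / 6.0 * d**3                    # sphere volume from diameter
    return (kernel_fraction * rho_u * vol).sum()  # total U mass in the sample

# Repeat the "experiment" many times to relate structural variability of the
# microspheres to the spread in uranium content of fuel samples.
masses = np.array([sample_uranium_mass(1000) for _ in range(200)])
print(masses.mean(), masses.std())
```

The sample-to-sample spread of the simulated uranium masses is the statistical quantity such a Monte Carlo program would compare against measured fuel samples.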
Galli, Corrado Lodovico; Sensi, Cristina; Fumagalli, Amos; Parravicini, Chiara; Marinovich, Marina; Eberini, Ivano
2014-01-01
Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated useful as the first step of chemical safety evaluation since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy.
Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor
Pustovetov, M. Yu
2018-03-01
This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows combining, during preparation of the computer model, two complementary methods: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation, or when the motor is powered by special semiconductor converter circuitry.
Combinatorial computational chemistry approach to the design of metal catalysts for deNOx
Endou, Akira; Jung, Changho; Kusagaya, Tomonori; Kubo, Momoji; Selvam, Parasuraman; Miyamoto, Akira
2004-01-01
Combinatorial chemistry is an efficient technique for the synthesis and screening of a large number of compounds. Recently, we introduced the combinatorial approach to computational chemistry for catalyst design and proposed a new method called ''combinatorial computational chemistry''. In the present study, we have applied this combinatorial computational chemistry approach to the design of precious metal catalysts for deNOx. As the first step of the screening of the metal catalysts, we studied Rh, Pd, Ag, Ir, Pt, and Au clusters with regard to their adsorption properties towards the NO molecule. It was demonstrated that the energetically most stable adsorption state of NO was found on the Ir model cluster, irrespective of both the shape and the number of atoms comprising the model clusters.
Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.
Montalvo-Acosta, Joel José; Cecchini, Marco
2016-12-01
The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline.
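The thermodynamic anchor shared by all such approaches can be stated compactly; the relation below is the textbook definition of the binding constant and standard binding free energy, not the paper's specific derivation:

```latex
% For the association P + L <-> PL, with standard concentration C° = 1 mol/L:
K_b = \frac{[\mathrm{PL}]}{[\mathrm{P}][\mathrm{L}]}, \qquad
\Delta G^{\circ}_{\mathrm{bind}} = -RT \,\ln\!\left(K_b\, C^{\circ}\right)
```

Every method the abstract surveys is, in the end, an estimator of this one equilibrium constant at a different accuracy/cost trade-off.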
ceRNAs in plants: computational approaches and associated challenges for target mimic research.
Paschoal, Alexandre Rossi; Lozada-Chávez, Irma; Domingues, Douglas Silva; Stadler, Peter F
2017-05-30
The competing endogenous RNA hypothesis has gained increasing attention as a potential global regulatory mechanism of microRNAs (miRNAs), and as a powerful tool to predict the function of many noncoding RNAs, including miRNAs themselves. Most studies have focused on animals, although target mimic (TM) discovery, as well as important computational and experimental advances, has been developed in plants over the past decade. Thus, our contribution summarizes recent progress in computational approaches for research on miRNA:TM interactions. We divided this article into three main contributions. First, a general overview of research on TMs in plants is presented with practical descriptions of the available literature, tools, data, databases and computational reports. Second, we describe a common protocol for the computational and experimental analyses of TMs. Third, we provide a bioinformatics approach for the prediction of TM motifs potentially cross-targeting members within the same or from different miRNA families, based on the identification of consensus miRNA-binding sites from known TMs across sequenced genomes, transcriptomes and known miRNAs. This computational approach is promising because, in contrast to animals, miRNA families in plants are large, with identical or similar members, several of which are also highly conserved. Of the three consensus TM motifs found with our approach, MIM166, MIM171 and MIM159/319, the last one has found strong support in the recent experimental work by Reichel and Millar [Specificity of plant microRNA TMs: cross-targeting of mir159 and mir319. J Plant Physiol 2015;180:45-8]. Finally, we stress the discussion on the major computational and associated experimental challenges that have to be faced in future ceRNA studies.
A New Approach to Practical Active-Secure Two-Party Computation
Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio
2011-01-01
We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce ... a number of novel techniques for relating the outputs and inputs of OTs in a larger construction. We also report on an implementation of this approach, which shows that our protocol is more efficient than any previous one: for big enough circuits, we can evaluate more than 20000 Boolean gates per second ...
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
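The coarse-to-fine idea can be sketched on a toy 1-D problem; the random "projection" matrix and the Landweber/SIRT-style iteration below stand in for a real CT system and are not the authors' implementation:

```python
import numpy as np

def sirt(A, b, x0, iters=60, lam=0.1):
    # Landweber/SIRT-style iteration: x <- x + lam * A^T (b - A x)
    x = x0.copy()
    for _ in range(iters):
        x += lam * A.T @ (b - A @ x)
    return x

rng = np.random.default_rng(3)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                           # toy 1-D "object"
A = rng.normal(size=(2 * n, n)) / np.sqrt(n)  # stand-in projection operator
b = A @ x_true                                # simulated projection data

# Coarse stage first: half as many unknowns (half the memory), then the
# coarse result is upsampled to warm-start the full-resolution iterations.
U = np.kron(np.eye(n // 2), np.ones((2, 1)))  # coarse -> fine upsampling
x_coarse = sirt(A @ U, b, np.zeros(n // 2))   # iterate on the coarse system
x_fine = sirt(A, b, U @ x_coarse)             # refine at full resolution

rel_err = np.linalg.norm(x_fine - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small relative error
```

Most of the iterations run on the half-size system, which is the memory saving the abstract describes; only the final refinement touches the full-resolution volume.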
Drake, Jeffrey T.; Prasad, Nadipuram R.
1999-01-01
This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act in producing the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely, fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.
Cristian Toma
2013-01-01
This study presents wavelets-computational aspects of the Sterian-realistic approach to the uncertainty principle in high energy physics. According to this approach, one cannot make a device for the simultaneous measuring of the canonical conjugate variables in reciprocal Fourier spaces. However, such aspects regarding the use of conjugate Fourier spaces can also be noticed in quantum field theory, where the position representation of a quantum wave is replaced by the momentum representation before computing the interaction in a certain point of space, at a certain moment of time. For this reason, certain properties regarding the switch from one representation to another in these conjugate Fourier spaces should be established. It is shown that the best results can be obtained using wavelets aspects and support macroscopic functions for computing (i) wave-train nonlinear relativistic transformation, (ii) reflection/refraction with a constant shift, (iii) diffraction considered as interaction with a null phase shift without annihilation of the associated wave, (iv) deflection by external electromagnetic fields without phase loss, and (v) annihilation of the associated wave-train through fast and spatially extended phenomena according to the uncertainty principle.
Simplified discrete ordinates method in spherical geometry
Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.
1999-01-01
The authors extend the method of simplified discrete ordinates (SSN) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical-geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly, which avoids the need to approximate it. The method can be considered analytic in nature, with the advantage of being free from the spatial truncation errors from which most existing transport codes suffer. It also handles scattering in a very general manner, spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order SN calculations.
Simplifying the circuit of Josephson parametric converters
Abdo, Baleegh; Brink, Markus; Chavez-Garcia, Jose; Keefe, George
Josephson parametric converters (JPCs) are quantum-limited three-wave mixing devices that can play various important roles in quantum information processing in the microwave domain, including amplification of quantum signals, transduction of quantum information, remote entanglement of qubits, nonreciprocal amplification, and circulation of signals. However, the input-output and biasing circuit of a state-of-the-art JPC consists of bulky components, i.e. two commercial off-chip broadband 180-degree hybrids, four phase-matched short coax cables, and one superconducting magnetic coil. Such bulky hardware significantly hinders the integration of JPCs in scalable quantum computing architectures. In my talk, I will present ideas on how to simplify the JPC circuit and show preliminary experimental results
Templet Web: the use of volunteer computing approach in PaaS-style cloud
Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil
2018-03-01
This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.
Seyyed Mohammad Zargar
2018-03-01
Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite its many benefits, this method has not been universally adopted because of obstacles, including security issues, that remain a concern for IT managers. In this paper, the general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables affecting the acceptance of technology and, especially, of cloud computing technology. Then, using the DEMATEL technique, the influence exerted and received by each variable was determined. The researchers also designed a model to show the dynamics of cloud computing technology adoption using a system dynamics approach. The validity of the model was confirmed through evaluation methods for dynamic models using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed, and their implementation was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
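The DEMATEL step mentioned above has a compact standard form; the direct-influence matrix below is hypothetical, chosen only to mirror the factors discussed (security, government support, user training, adoption):

```python
import numpy as np

# Hypothetical direct-influence scores (0-4) among four factors:
# [security, government support, user training, adoption].
X = np.array([
    [0, 3, 1, 4],
    [2, 0, 2, 3],
    [1, 1, 0, 3],
    [0, 0, 0, 0],   # adoption influences nothing: a pure outcome
], dtype=float)

D = X / X.sum(axis=1).max()            # normalize by the largest row sum
T = D @ np.linalg.inv(np.eye(4) - D)   # total-relation matrix: D + D^2 + ...
r, c = T.sum(axis=1), T.sum(axis=0)    # influence given / influence received
prominence, relation = r + c, r - c    # r+c: importance; r-c: cause(+) vs effect(-)
print(relation)                        # last entry is negative: adoption is an effect
```

The sign of r - c is exactly the "influence vs. susceptibility" classification the abstract refers to: factors with a positive value are net causes, those with a negative value net effects.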
Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C
Vladimiras Dolgopolovas
2015-01-01
Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included. At the same time, the topic is among the most motivating due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners to experiment with the provided programming models, acquire competences in modern scientific research and computational thinking, and capture the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models, and the message passing interface (MPI) and OpenMP parallelization tools, have been chosen for implementation.
Heng-Yi Su
2016-11-01
Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
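The curve-fitting step can be illustrated with a toy sketch (the synthetic PV curve and all names below are ours, not the paper's): sample the upper branch of a PV nose curve, fit a cubic polynomial, and extrapolate to locate the maximum loadability.

```python
import numpy as np

def estimate_load_margin(loads, voltages, v_search=(0.6, 1.0)):
    # Fit load as a cubic polynomial in voltage on the sampled upper
    # branch of the PV curve, then extrapolate and take the maximum
    # fitted load over the search window as the loadability margin.
    coeffs = np.polyfit(voltages, loads, 3)
    v_grid = np.linspace(*v_search, 2001)
    return np.polyval(coeffs, v_grid).max()

# Synthetic PV nose curve: P = 5.0 - 40*(V - 0.8)^2, sampled only on
# the upper branch (V >= 0.85), so the nose itself is extrapolated.
v = np.linspace(0.85, 1.0, 8)
p = 5.0 - 40.0 * (v - 0.80) ** 2
margin = estimate_load_margin(p, v)   # close to the true maximum, 5.0
```

The real method tracks the full continuation curve with generator Q limits; this only shows why a low-order fit plus extrapolation can recover the nose from a few upper-branch samples.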
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in Earth Sciences; however, students and professionals need to climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired optimal simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with a minimum of dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like pieces of LEGO, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating an infrastructure for geosciences that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes
Dib M.
2018-01-01
Full Text Available Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate than the conventional methods. This accuracy comes from the ability of the method to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers prefer to use simplified analytical techniques to design slope-stabilizing piles because of the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define the adequate pile configuration that offers the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model of the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The simplified model is validated by a comparative analysis with the fully coupled finite element model.
Simplified design of IC amplifiers
Lenk, John
1996-01-01
Simplified Design of IC Amplifiers has something for everyone involved in electronics. No matter what the skill level, this book shows how to design and experiment with IC amplifiers. For experimenters, students, and serious hobbyists, this book provides sufficient information to design and build IC amplifier circuits from scratch. For working engineers who design amplifier circuits or select IC amplifiers, the book provides a variety of circuit configurations to make designing easier. Provides basics for all phases of practical design. Covers the most popular forms of amplifiers.
A simplified indirect bonding technique
Radha Katiyar
2014-01-01
Full Text Available With the advent of lingual orthodontics, the indirect bonding technique has become an integral part of practice. It involves placement of brackets initially on the models and then their transfer to the teeth with the help of transfer trays. Problems encountered with current indirect bonding techniques are (1) the possibility of adhesive flash remaining around the base of the brackets, which requires removal, and (2) the longer time required for the adhesive to gain enough bond strength for secure tray removal. The new simplified indirect bonding technique presented here overcomes both these problems.
Computation-aware algorithm selection approach for interlaced-to-progressive conversion
Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang
2010-05-01
We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is the correspondence between the high- and low-resolution covariances. We estimate the local covariance coefficients from an interlaced image using Wiener filtering theory and then use these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, is not very fast compared to the others. To alleviate this issue, we propose an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm. This hybrid approach of switching between the conventional schemes (LA and MELA) and CAD reduces the overall computational load. A reliable condition for switching the schemes was derived after a wide set of initial training processes. The results of computer simulations show that the proposed methods outperform a number of methods presented in the literature.
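The simplest of the three methods, line averaging, can be sketched as follows (an illustrative implementation, not the authors' code): each missing line of a field is reconstructed as the mean of its two known neighbours.

```python
import numpy as np

def line_average_deinterlace(field, top=True):
    # Reconstruct a full frame from a single interlaced field by
    # vertical line averaging: each missing line is the mean of its
    # two known neighbours (edge lines copy the single neighbour).
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    known = slice(0, 2 * h, 2) if top else slice(1, 2 * h, 2)
    frame[known] = field
    for y in range(2 * h):
        if (y % 2 == 0) != top:  # this line was not transmitted
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y + 1 < 2 * h else frame[y - 1]
            frame[y] = 0.5 * (above + below)
    return frame
```

MELA and CAD refine this same interpolation with edge direction and local covariance estimates, respectively; LA is the cheap fallback CASA assigns to plain regions.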
Simplified probabilistic risk assessment in fuel reprocessing
Solbrig, C.W.
1993-01-01
An evaluation was made to determine whether a backup mass-tracking computer would significantly reduce the probability of criticality in the fuel reprocessing of the Integral Fast Reactor. Tradeoff studies such as this often must be made and would greatly benefit from a Probabilistic Risk Assessment (PRA). The major benefits of a complete PRA can often be obtained with a Simplified Probabilistic Risk Assessment (SPRA). An SPRA was performed by selecting a representative fuel reprocessing operation (moving a piece of fuel) for analysis. It showed that the benefit of adding parallel computers was small compared to the benefit that could be obtained by adding parallelism to two computer input steps and two of the weighing operations. The probability of an incorrect material move with the basic process is estimated to be 4 out of 100 moves; the probability values are considered accurate to within an order of magnitude. The most useful result of developing the fault trees is the ability to determine where significant improvements in the process can be made. By including the above-mentioned parallelism, the erroneous-move rate can be reduced to 1 out of 1000.
Larbi, M.; Besnier, P.; Pecqueux, B.
2014-01-01
This paper deals with the risk analysis of an EMC fault using a statistical approach based on reliability methods from probabilistic engineering mechanics. The probability of failure (i.e., the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties in the input parameters influencing interference levels in the context of transmission lines. The study allowed us to evaluate the probability of failure of the induced current using reliability methods with a relatively low computational cost compared to Monte Carlo simulation.
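The quantity being estimated, a probability of exceeding a threshold under input uncertainty, can be sketched with plain Monte Carlo (the surrogate crosstalk model and every distribution below are invented for illustration; the paper's point is precisely that reliability methods are cheaper than this brute-force baseline).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate for the induced current: grows with coupling
# length L and source amplitude A, falls with wire separation d. The
# formula and all distributions are illustrative, not from the paper.
def induced_current(L, A, d):
    return 1e-3 * A * L / d

n = 200_000
L = rng.uniform(1.0, 2.0, n)     # coupling length, m
A = rng.normal(10.0, 1.0, n)     # source amplitude, V
d = rng.uniform(0.01, 0.05, n)   # wire separation, m

threshold = 1.5                   # failure threshold on the current, A
p_fail = np.mean(induced_current(L, A, d) > threshold)
```

Reliability methods (FORM/SORM-style approximations) estimate the same exceedance probability from far fewer model evaluations, which is the cost advantage the abstract refers to.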
Vignes, J.
1986-01-01
Any result of an algorithm executed on a computer contains an error resulting from floating-point round-off error propagation. Furthermore, signal processing algorithms are generally performed on data that themselves contain errors. The permutation-perturbation method, also known as CESTAC (controle et estimation stochastique d'arrondi de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result computed by an algorithm. The stochastic approach of this method, its probabilistic proof, and the close agreement between the theoretical and practical aspects are described in this paper.
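The permutation-perturbation idea can be illustrated with a toy sketch (not the actual CESTAC/CADNA implementation): run the same computation several times with randomly perturbed operands and estimate the number of exact significant decimal digits from the scatter of the results.

```python
import math
import random

def cestac_significant_digits(algorithm, n_runs=3):
    # Run the algorithm several times under perturbed arithmetic and
    # estimate the number of exact significant decimal digits from the
    # spread of the results (a toy version of the CESTAC idea).
    results = [algorithm() for _ in range(n_runs)]
    mean = sum(results) / n_runs
    spread = math.sqrt(sum((r - mean) ** 2 for r in results) / (n_runs - 1))
    if spread == 0:
        return 15  # all runs agree to double precision
    return max(0, int(math.log10(abs(mean) / spread)))

def perturb(x, eps=1e-12):
    # randomly perturb the last bits of x to mimic round-off variability
    return x * (1.0 + random.uniform(-eps, eps))

# Ill-conditioned toy computation: catastrophic cancellation.
def shaky_sum():
    a = perturb(1.0 + 1e-9)
    b = perturb(1.0)
    return (a - b) * 1e9   # exact value is 1.0, but few digits survive

random.seed(1)
digits = cestac_significant_digits(shaky_sum, n_runs=5)
```

The real method perturbs the rounding of every intermediate operation and has a probabilistic justification; this sketch only shows how result scatter translates into a significant-digit estimate.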
Bebout, B.; Bebout, L. E.; Detweiler, A. M.; Everroad, R. C.; Lee, J.; Pett-Ridge, J.; Weber, P. K.
2014-12-01
Microbial mats are famously amongst the most diverse microbial ecosystems on Earth, inhabiting some of the most inclement environments known, including hypersaline, dry, hot, cold, nutrient poor, and high UV environments. The high microbial diversity of microbial mats makes studies of microbial ecology notably difficult. To address this challenge, we have been using a combination of metagenomics, metatranscriptomics, iTags and culture-based simplified microbial mats to study biogeochemical cycling (H2 production, N2 fixation, and fermentation) in microbial mats collected from Elkhorn Slough, Monterey Bay, California. Metatranscriptomes of microbial mats incubated over a diel cycle have revealed that a number of gene systems activate only during the day in Cyanobacteria, while the remaining appear to be constitutive. The dominant cyanobacterium in the mat (Microcoleus chthonoplastes) expresses several pathways for nitrogen scavenging undocumented in cultured strains, as well as the expression of two starch storage and utilization cycles. Community composition shifts in response to long term manipulations of mats were assessed using iTags. Changes in community diversity were observed as hydrogen fluxes increased in response to a lowering of sulfate concentrations. To produce simplified microbial mats, we have isolated members of 13 of the 15 top taxa from our iTag libraries into culture. Simplified microbial mats and simple co-cultures and consortia constructed from these isolates reproduce many of the natural patterns of biogeochemical cycling in the parent natural microbial mats, but against a background of far lower overall diversity, simplifying studies of changes in gene expression (over the short term), interactions between community members, and community composition changes (over the longer term), in response to environmental forcing.
Wilianto Wilianto
2015-10-01
Full Text Available This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and simulated using Riverbed Academic Edition Modeler to obtain values of the parameters delay, load, CPU utilization, and throughput. The simulation results are the following. For network topology 1, without cloud computing, the average delay is 54 ms, load 110 000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2 800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2, without cloud computing, the average delay is 39 ms, load 3 500 bits/s, CPU utilization 0.02%, and database server throughput 1 400 bits/s. With cloud computing, the average delay is 26 ms, load 5 400 bits/s, CPU utilization 0.0001% (email server), 0.001% (FTP server) and 0.0002% (HTTP server), and throughput 85 bits/s (email server), 100 bits/s (FTP server) and 95 bits/s (HTTP server). Thus, the delay, the load, and the CPU utilization decrease, while the throughput increases. Information technology service management with a cloud computing approach has better performance.
Pezner, R.D.; Findley, D.O.
1981-01-01
Orthogonal field arrangements are usually employed to irradiate a tumor volume which includes a tracheostomy stoma or the hypopharynx. This approach may produce a significantly greater dose than intended to a small segment of the cervical spinal cord because of field overlap at depth from divergence of the beams. Various sophisticated approaches have been proposed to compensate for this overlap; all require marked precision in reproducing the fields on a daily basis. We propose a simplified approach of initially irradiating the entire treatment volume by anterior and posterior opposed fields. Opposed lateral fields that exclude the spinal cord would then provide local boost treatment. A case example and computer-generated isodose curves are presented.
Electronic Handbooks Simplify Process Management
2012-01-01
Getting a multitude of people to work together to manage processes across many organizations (for example, flight projects, research, technologies, or data centers) is not an easy task. Just ask Dr. Barry E. Jacobs, a research computer scientist at Goddard Space Flight Center. He helped NASA develop a process management solution that provided documenting tools for process developers and participants to help them quickly learn, adapt, test, and teach their views. Some of these tools included editable files for subprocess descriptions, document descriptions, role guidelines, manager worksheets, and references. First utilized for NASA's Headquarters Directives Management process, the approach led to the invention of a concept called the Electronic Handbook (EHB). This EHB concept was successfully applied to NASA's Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, among other NASA programs. Several Federal agencies showed interest in the concept, so Jacobs and his team visited these agencies to show them how their specific processes could be managed by the methodology, as well as to create mockup versions of the EHBs.
Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler
Zhou Hao; Zheng Ligang; Cen Kefa
2010-01-01
This work presents a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work includes NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function estimating NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air velocities (PA) and six levels of secondary air velocities (SA) were regulated using particle swarm optimization (PSO) so as to achieve low-NOx combustion. To reduce the computation time, a more flexible stopping condition was used to improve efficiency without loss of quality in the optimization results. The results showed that the proposed approach provides an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which is much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in earlier work. The main advantage of PSO is that the computational cost, typically less than 25 s on a PC, is much less than that required by ACO. This means the proposed approach is more applicable to online and real-time NOx emissions minimization in actual power plant boilers.
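The PSO step can be sketched generically (a minimal PSO on a stand-in objective; the SVR emissions model, its nineteen inputs and the boiler data are not reproduced here, so the quadratic "emissions" function and its minimum are pure assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimization over box constraints:
    # inertia w, cognitive pull c1 toward personal bests, social pull
    # c2 toward the global best.
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in for the SVR emissions model: a smooth function of two
# "air velocity" settings with a known minimum at (3.0, 1.0).
nox = lambda u: (u[0] - 3.0) ** 2 + (u[1] - 1.0) ** 2 + 250.0
best, best_f = pso(nox, bounds=[(0, 6), (0, 6)])
```

In the paper the objective is the trained SVR model evaluated at candidate PA/SA settings, and the flexible stopping condition replaces the fixed iteration count used here.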
An Approach for Indoor Path Computation among Obstacles that Considers User Dimension
Liu Liu
2015-12-01
Full Text Available People transporting objects within indoor environments need enough space for the motion. In such cases, the accessibility of indoor spaces depends on the combined dimensions of the person and her/his carried objects. This paper proposes a new approach to avoid obstacles and compute indoor paths with respect to the user dimension. The approach excludes inaccessible spaces for a user in five steps: (1) compute the minimum distance between obstacles and find the inaccessible gaps; (2) group obstacles according to the inaccessible gaps; (3) identify groups of obstacles that influence the path between two locations; (4) compute boundaries for the selected groups; and (5) build a network in the accessible area around the obstacles in the room. Compared to the Minkowski sum method for outlining inaccessible spaces, the proposed approach generates simpler polygons for groups of obstacles that do not contain inner rings. The creation of a navigation network becomes easier based on these simple polygons. Using this approach, we can create user- and task-specific networks in advance; alternatively, the accessible path can be generated on the fly before the user enters a room.
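Steps (1) and (2) can be sketched with circular obstacles standing in for polygons (an illustrative reduction, not the authors' implementation): compute pairwise gaps and merge obstacles whose gap is narrower than the user, using union-find.

```python
import math

def group_obstacles(circles, user_width):
    # circles: list of (x, y, radius). Obstacles whose mutual gap is
    # narrower than user_width are merged into one group (union-find),
    # since the user cannot pass between them.
    parent = list(range(len(circles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            gap = math.hypot(x2 - x1, y2 - y1) - r1 - r2
            if gap < user_width:  # inaccessible gap: treat as one obstacle
                union(i, j)
    groups = {}
    for i in range(len(circles)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

obstacles = [(0, 0, 1), (2.5, 0, 1), (10, 0, 1)]
groups = group_obstacles(obstacles, user_width=0.8)
# circles 0 and 1 leave only a 0.5-wide gap, so they merge; circle 2 stays alone
```

Steps (3)-(5) then build boundaries and a navigation network around each merged group, which is where the approach gains over per-obstacle Minkowski sums.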
Simplified propagation of standard uncertainties
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine whether a standard is adequate for its intended use, or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper.
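The subgroup idea can be sketched as follows (a minimal example with invented values, assuming uncorrelated uncertainties): absolute standard uncertainties of additive steps combine in quadrature, and relative standard uncertainties of multiplicative steps combine in quadrature, with no partial derivatives needed.

```python
import math

def combine_absolute(*u_abs):
    # quadrature sum of absolute standard uncertainties (additive steps)
    return math.sqrt(sum(u ** 2 for u in u_abs))

def combine_relative(*u_rel):
    # quadrature sum of relative standard uncertainties (multiplicative steps)
    return math.sqrt(sum(u ** 2 for u in u_rel))

# Illustrative prepared standard: concentration c = (m - blank) / V.
m, u_m = 100.0, 0.05         # mass, g, absolute standard uncertainty
blank, u_blank = 0.20, 0.02  # blank correction, g, absolute uncertainty
V, u_V_rel = 1.000, 0.001    # volume, L, relative standard uncertainty

net_mass = m - blank
u_net = combine_absolute(u_m, u_blank)                 # additive subgroup
c = net_mass / V
u_c_rel = combine_relative(u_net / net_mass, u_V_rel)  # multiplicative subgroup
u_c = c * u_c_rel
```

The shortcut is exactly the spreadsheet-friendly procedure the abstract describes: convert the absolute subgroup's result to a relative uncertainty at the point where the calculation switches from sums to products.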
Kurdziel, J.C.; Dondelinger, R.F.; Hemmer, M.
1987-01-01
107 polytraumatized patients who had experienced blunt trauma were worked up at admission with computed tomography of the thorax, abdomen and pelvis following computed tomography of the brain: significant lesions were revealed in 98 (90%) patients. 79 (74%) patients showed trauma to the thorax; in 69 (64%) patients abdominal or pelvic trauma was evidenced. No false positive diagnosis was established. 5 traumatic findings were missed. Emergency angiography was indicated in 3 (3%) patients following computed tomography examination. 3 other trauma patients were submitted directly to angiography without computed tomography examination during the period of this study. Embolization was carried out in 5/6 patients. No thoracotomy was needed. 13 (12%) patients underwent laparotomy following computed tomography. Overall mortality during hospital stay was 14% (15/107). No patient died from visceral bleeding. Conservative management of blunt polytrauma patients can be advocated in almost 90% of visceral lesions. Computed tomography coupled with angiography and embolization represents an adequate integrated approach to the management of blunt polytrauma patients.
Simplified modeling photo-ionisation of uranium in Silva project
Bodin, B.; Pourre-Brichot, P.; Valadier, L.
2000-01-01
SILVA is a process which selectively photo-ionizes 235U. It is therefore important to compute the proportion of ionized atoms as a function of the laser properties. The interaction between atoms and lasers is described by the coupled Maxwell and Schroedinger equations, an approach that is only feasible for a few simple cases, e.g. a plane wave or a simple laser profile. Once the characteristics of SILVA are introduced, computation time increases substantially (several hundred days per kilogram of vapor). To circumvent this problem, we wrote a program (Jackpot) that treats photo-ionization with a simplified model based on kinetic equations. Various optical components were introduced, with an absorption factor per wavelength, to account for the effects of the optical systems on the trajectory. Instead of seeking the complex wavefunction solutions of the Maxwell-Schroedinger equations, we solve a system whose unknowns are a set of populations; the size of the set depends only on the number of hold points in the process. Recent work shows that we can converge towards the same results as the Maxwell-Schroedinger system if we fit the cross-sections of the kinetic system correctly. As for the optical aspect, Jackpot can handle diffraction; in this case, it solves the propagation equation of an electric field by a double Fourier transform method. For interactions with mirrors, the new direction of a ray is calculated with Descartes' law, applying a numerical phase mask to the electric field. We account for diaphragm mechanisms as well as the absorption law of each mirror, via a real factor per wavelength. Jackpot is simple to use and can be used to predict experimental results. It is now a library called by a script written in Python. Changes are being made for a closer approach to reality (real laser, new photo-ionization model).
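The double Fourier transform propagation step can be sketched with the generic angular-spectrum method (a textbook optics sketch, not the Jackpot code; all parameter values are illustrative):

```python
import numpy as np

def propagate_angular_spectrum(field, wavelength, dx, distance):
    # Free-space propagation by the angular-spectrum (double Fourier
    # transform) method: FFT the field, multiply each plane-wave
    # component by exp(i*kz*distance), then inverse FFT.
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))  # drop evanescent components
    H = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A uniform plane wave only acquires a global phase on propagation.
out = propagate_angular_spectrum(np.ones((32, 32), dtype=complex),
                                 wavelength=1e-6, dx=1e-5, distance=1e-3)
```

Mirror interactions are then modelled, as the abstract describes, by applying a phase mask and a per-wavelength absorption factor between propagation steps.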
Simplified polynomial digital predistortion for multimode software defined radios
Kardaras, Georgios; Soler, José; Dittmann, Lars
2010-01-01
a simplified approach using polynomial digital predistortion in the intermediate frequency (IF) domain. It is fully implementable in software and no hardware changes are required on the digital or analog platform. The adaptation algorithm selected was Least Mean Squares because of its relative simplicity...
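A memoryless polynomial predistorter adapted by LMS can be sketched as follows (a toy real-valued baseband model with an assumed cubic amplifier; the paper works at IF on a software-defined radio platform). The indirect-learning trick trains a post-inverse of the amplifier with LMS and then reuses it as the predistorter.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy memoryless power amplifier with cubic compression (an assumed
# model for illustration, not the platform used in the paper).
def pa(x):
    return x - 0.10 * x ** 3

def basis(x):
    # odd-order polynomial basis of the predistorter
    return np.array([x, x ** 3])

# Indirect learning: adapt a post-inverse F on (pa output, pa input)
# pairs with LMS so that F(pa(x)) ~ x, then reuse F as the predistorter.
w = np.zeros(2)
mu = 0.1
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0)
    phi = basis(pa(x))
    e = x - w @ phi          # error against the ideal linear input
    w += mu * e * phi        # LMS update

test_x = 0.9
lin_out = pa(w @ basis(test_x))   # predistort, then amplify
```

With the trained predistorter in the chain, `lin_out` sits much closer to the ideal linear output `test_x` than the raw `pa(test_x)` does, which is the linearization effect predistortion aims for.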
A direct approach to fault-tolerance in measurement-based quantum computation via teleportation
Silva, Marcus; Danos, Vincent; Kashefi, Elham; Ollivier, Harold
2007-01-01
We discuss a simple variant of the one-way quantum computing model (Raussendorf R and Briegel H-J 2001 Phys. Rev. Lett. 86 5188), called the Pauli measurement model, where measurements are restricted to be along the eigenbases of the Pauli X and Y operators, while qubits can be initially prepared both in the |+_π/4⟩ := (1/√2)(|0⟩ + e^{iπ/4}|1⟩) state and the usual |+⟩ := (1/√2)(|0⟩ + |1⟩) state. We prove the universality of this quantum computation model, and establish a standardization procedure which permits all entanglement and state preparation to be performed at the beginning of computation. This leads us to develop a direct approach to fault-tolerance by simple transformations of the entanglement graph and preparation operations, while error correction is performed naturally via syndrome-extracting teleportations.
Combustion Safety Simplified Test Protocol Field Study
Brand, L. [Gas Technology Inst., Des Plaines, IL (United States); Cautley, D. [Gas Technology Inst., Des Plaines, IL (United States); Bohac, D. [Gas Technology Inst., Des Plaines, IL (United States); Francisco, P. [Gas Technology Inst., Des Plaines, IL (United States); Shen, L. [Gas Technology Inst., Des Plaines, IL (United States); Gloss, S. [Gas Technology Inst., Des Plaines, IL (United States)
2015-11-01
Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and produce too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if it is not included in the program. In this report the PARR and NorthernSTAR DOE Building America teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, a summary of data collected over seven months, and the data analysis and results are included. The project team collected field data on 11 houses in 2015.
Computational Approach for Studying Optical Properties of DNA Systems in Solution
Nørby, Morten Steen; Svendsen, Casper Steinmann; Olsen, Jógvan Magnus Haugaard
2016-01-01
In this paper we present a study of the methodological aspects of calculating optical properties for DNA systems in solution. Our computational approach is built upon a fully polarizable QM/MM/Continuum model within a damped linear response theory framework. In this approach the environment is given a highly advanced description in terms of the electrostatic potential through the polarizable embedding model. Furthermore, bulk solvent effects are included in an efficient manner through a conductor-like screening model. With the aim of reducing the computational cost, we develop a set of averaged partial charges and distributed isotropic dipole-dipole polarizabilities for DNA suitable for describing the classical region in ground-state and excited-state calculations. Calculations of the UV spectrum of the 2-aminopurine optical probe embedded in a DNA double-helical structure are presented.
Approach and tool for computer animation of fields in electrical apparatus
Miltchev, Radoslav; Yatchev, Ivan S.; Ritchie, Ewen
2002-01-01
The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables handling of two- and three-dimensional physical field results obtained from finite element software, and display of movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, scenario creation, and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool.
Mittra, R.; Rushdi, A.
1979-01-01
An approach for computing the geometrical-optics fields reflected from a numerically specified surface is presented. The approach begins by computing the rays reflected off the surface at the points where their coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to that ray.
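The per-ray reflection step follows the standard specular law r = d - 2(d·n)n; a minimal sketch (the mean-ray clustering and divergence-factor computation are omitted):

```python
import numpy as np

def reflect(d, n):
    # Specular reflection of incident direction d about surface normal n:
    # r = d - 2 (d·n) n, with n normalized first.
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling straight down off a horizontal surface bounces back
# up, and a 45-degree ray reflects symmetrically about the normal.
r_vertical = reflect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
r_oblique = reflect(np.array([1.0, 0.0, -1.0]), np.array([0.0, 0.0, 2.0]))
```

In the paper's setting the normal at each surface sample is taken from the numerically specified partial derivatives, and the amplitude follows from the divergence factor of the mean ray, not shown here.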
Collins, J.D.; Hudson, J.M.; Chrostowski, J.D.
1979-02-01
A computational methodology is presented for the prediction of core melt probabilities in a nuclear power plant due to earthquake events. The proposed model has four modules: seismic hazard, structural dynamics (including soil-structure interaction), component failure and core melt sequence. The modules operate in series and do not have to be run at the same time. The basic statistical approach uses Monte Carlo simulation to treat random and systematic error, but alternate statistical approaches are permitted by the program design.
Hsu, Ching-Kun; Hwang, Gwo-Jen
2014-01-01
Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…
Simplified compact containment BWR plant
Heki, H.; Nakamaru, M.; Tsutagawa, M.; Hiraiwa, K.; Arai, K.; Hida, T.
2004-01-01
The reactor concept considered in this paper has a small power output, a compact containment and a simplified BWR configuration with comprehensive safety features. The Compact Containment Boiling Water Reactor (CCR), which is being developed with matured BWR technologies together with innovative systems/components, is expected to prove attractive in the world energy markets due to its flexibility in regard to both energy demands and site conditions, its high potential for reducing investment risk and its safety features facilitating public acceptance. The flexibility is achieved by CCR's small power output of the 300 MWe class and its capability for a long operating cycle (refueling intervals). CCR is expected to be attractive from the viewpoint of investment due to its simplification/innovation in design, such as natural circulation core cooling with a bottom-located short core, internal upper-entry control rod drives (CRDs) with ring-type dryers, and a simplified ECCS with a high-pressure containment concept. The natural circulation core eliminates recirculation pumps and their maintenance. The internal upper-entry CRDs reduce the height of the reactor pressure vessel (RPV) and consequently the height of the primary containment vessel (PCV). The safety features mainly consist of a large water inventory above the core without large penetrations below the top of the core, a passive cooling system using an isolation condenser (IC), a passive autocatalytic recombiner and in-vessel retention (IVR) capability. The large inventory increases the system response time in the case of design-basis accidents, including loss-of-coolant accidents. The IC suppresses PCV pressure by steam condensation without any AC power. The recombiner decreases hydrogen concentration in the PCV in the case of a severe accident. IVR would be attained by cooling the molten core inside the RPV should the core be damaged by a loss of core coolability. The feasibility of the CCR safety system has been confirmed by LOCA
Liu, Zi-Kui [Pennsylvania State University; Gleeson, Brian [University of Pittsburgh; Shang, Shunli [Pennsylvania State University; Gheno, Thomas [University of Pittsburgh; Lindwall, Greta [Pennsylvania State University; Zhou, Bi-Cheng [Pennsylvania State University; Liu, Xuan [Pennsylvania State University; Ross, Austin [Pennsylvania State University
2018-04-23
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagrams) modeling, and experimental investigations of compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stability. The developed description includes composition ranges typical for coating alloys and hence allows for the prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description of the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR support execution of only a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis show significant performance improvement. PMID:26305223
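The multi-key idea behind running several algorithms in one MR job can be sketched in plain Python: each record is fed to every registered algorithm, and each output is tagged with an (algorithm, key) pair so a single shuffle-and-reduce serves all algorithms at once. This is a toy stand-in for MRPack's mechanism, not its actual code; the algorithm registry below is invented.

```python
from collections import defaultdict

# Hypothetical registry of related algorithms sharing one MapReduce job.
ALGORITHMS = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [("chars", len(rec))],
}

def multi_map(records):
    """Map phase: run every algorithm on every record, emitting multi-keys."""
    out = []
    for rec in records:
        for name, algo in ALGORITHMS.items():
            for k, v in algo(rec):
                out.append(((name, k), v))   # multi-key: (algorithm, key)
    return out

def reduce_by_key(pairs):
    """Reduce phase: one aggregation pass serves all algorithms."""
    acc = defaultdict(int)
    for k, v in pairs:
        acc[k] += v
    return dict(acc)

result = reduce_by_key(multi_map(["a b a", "b c"]))
```

Because the algorithm name is part of the key, the outputs of the different algorithms never collide in the reduce phase, which is what makes sharing one job's shuffle possible.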
Gao, Wen; Chen, Xilin
1997-01-01
The blur in target images caused by camera vibration, due to robot motion or hand shaking, and by objects moving in the background scene is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow and stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To address the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. The experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
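The link between motion velocity and blur that the abstract exploits can be illustrated with the simplest blur model: uniform linear motion over the exposure time smears each point into a box of length v*T pixels. The sketch below shows only this forward model (velocity -> blur kernel -> blurred signal); it is an illustrative stand-in, not the paper's relation model or its iterative restoration.

```python
import numpy as np

def motion_blur_kernel(velocity_px_per_s, exposure_s):
    """Uniform linear motion blur: the point-spread function is a
    normalized box of length v*T pixels (a crude 1D blur model)."""
    length = max(1, int(round(abs(velocity_px_per_s) * exposure_s)))
    return np.ones(length) / length

def blur(signal, kernel):
    """Apply the blur kernel by convolution, keeping the input length."""
    return np.convolve(signal, kernel, mode="same")

k = motion_blur_kernel(velocity_px_per_s=30.0, exposure_s=0.1)  # 3-pixel box
sharp = np.zeros(11)
sharp[5] = 1.0                  # an impulse...
blurred = blur(sharp, k)        # ...is spread over 3 pixels, energy preserved
```

Inverting this relation, i.e. estimating the kernel (hence the velocity) from the blurred image and then deconvolving, is the harder direction that the paper's optical-flow and iterative-restoration algorithms address.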
Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.
Luo, Yao; Wang, Ling
2017-11-16
The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. This protein is an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors, helping to model the structure of mTOR, screen compound databases, uncover structure-activity relationships (SAR) and optimize the hits, mine privileged fragments and design focused libraries. In addition, computational approaches have also been applied to study protein-ligand interaction mechanisms and in natural-product-driven drug discovery. Herein, we survey the most recent progress on the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed.
Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich
2014-05-01
Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.
Computational enzyme design approaches with significant biological outcomes: progress and challenges
Li, Xiaoman; Zhang, Ziding; Song, Jiangning
2012-01-01
Enzymes are powerful biocatalysts; however, there is still a large gap between the number of enzyme-based practical applications and that of naturally occurring enzymes. Multiple experimental approaches have been applied to generate nearly all possible mutations of target enzymes, allowing the identification of desirable variants with improved properties to meet practical needs. Meanwhile, an increasing number of computational methods have been developed to assist in the modificati...
Reducing usage of the computational resources by event driven approach to model predictive control
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical picture of the subwavelength slanted grating. The diffraction characteristics of the slanted grating under the Littrow configuration are revealed by the SMM through an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtained an effective analytic solution that simplifies the design and optimization of a slanted grating. It offers a new approach to the design of slanted gratings; e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.
Simplified hearing protector ratings—an international comparison
Waugh, R.
1984-03-01
A computer was programmed to model the distributions of dB(A) levels reaching the ears of an imaginary workforce wearing hearing protectors selected on the basis of either octave band attenuation values or various simplified ratings in use in Australia, Germany, Poland, Spain or the U.S.A. Both multi-valued and single-valued versions of dB(A) reduction and sound level conversion ratings were considered. Ratings were compared in terms of precision and protection rate and the comparisons were replicated for different samples of noise spectra ( N = 400) and hearing protectors ( N = 70) to establish the generality of the conclusions. Different countries adopt different approaches to the measurement of octave band attenuation values and the consequences of these differences were investigated. All rating systems have built-in correction factors to account for hearing protector performance variability and the merits of these were determined in the light of their ultimate effects on the distribution of dB(A) levels reaching wearers' ears. It was concluded that the optimum rating is one that enables the dB(A) level reaching wearers to be estimated by subtracting a single rating value from the dB(C) level of the noise environment, the rating value to be determined for a pink noise spectrum from mean minus one standard deviation octave band attenuation values with further protection rate adjustments being achieved by the use of a constant correction factor.
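The rating construction the study converges on (assume a pink-noise spectrum, take mean-minus-one-SD octave-band attenuations, and subtract a single number from the environment's dB(C) level) can be sketched as below. The band set, A-weighting constants and the flat-dB(C) approximation are textbook values used for illustration; this is not the paper's program or any standardized rating.

```python
import math

# Octave-band centre frequencies (Hz) and approximate A-weighting corrections (dB).
BANDS = [125, 250, 500, 1000, 2000, 4000, 8000]
A_WEIGHT = [-16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]

def single_number_rating(mean_atten, sd_atten, pink_band_level=100.0):
    """Sketch of a dB(C)-based single-number rating: pink noise has equal
    energy per octave; apply mean-minus-one-SD attenuation per band,
    A-weight the protected band levels, energy-sum them to a protected
    dB(A), and subtract from the unprotected dB(C) (C-weighting taken as
    flat over these bands). All inputs are illustrative."""
    assumed = [m - s for m, s in zip(mean_atten, sd_atten)]
    protected = [pink_band_level - a + w for a, w in zip(assumed, A_WEIGHT)]
    protected_dba = 10 * math.log10(sum(10 ** (L / 10) for L in protected))
    unprotected_dbc = pink_band_level + 10 * math.log10(len(BANDS))  # energy sum
    return unprotected_dbc - protected_dba

# A protector with 30 dB mean attenuation and 5 dB SD in every band.
rating = single_number_rating([30.0] * 7, [5.0] * 7)
```

A wearer would then estimate the level at the ear simply as dB(C) of the noise minus this rating, which is the usage pattern the study concludes is optimal.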
Simplified model for DNB analysis
Silva Filho, E.
1979-08-01
In a pressurized water reactor (PWR), the operating power is restricted by the possibility of the occurrence of departure from nucleate boiling (DNB) in the hottest channel of the core. The present work proposes a simplified model that analyses the thermal-hydraulic conditions of the coolant in the hottest channel of PWRs with the objective of evaluating DNB in this channel. To this end, coupling between the hot channel and typical nominal channels is assumed, imposing the existence of a cross flow between these channels such that a uniform axial pressure distribution results along the channels. The model is applied to the Angra-I reactor and the results are compared with those of the Final Safety Analysis Report (FSAR) obtained by Westinghouse through the THINC program, and are considered satisfactory (Author) [pt
Developing a simplified consent form for biobanking.
Beskow, Laura M; Friedman, Joëlle Y; Hardy, N Chantelle; Lin, Li; Weinfurt, Kevin P
2010-10-08
Consent forms have lengthened over time and become harder for participants to understand. We sought to demonstrate the feasibility of creating a simplified consent form for biobanking that comprises the minimum information necessary to meet ethical and regulatory requirements. We then gathered preliminary data concerning its content from hypothetical biobank participants. We followed basic principles of plain-language writing and incorporated into a 2-page form (not including the signature page) those elements of information required by federal regulations and recommended by best practice guidelines for biobanking. We then recruited diabetes patients from community-based practices and randomized half (n = 56) to read the 2-page form, first on paper and then a second time on a tablet computer. Participants were encouraged to use "More information" buttons on the electronic version whenever they had questions or desired further information. These buttons led to a series of "Frequently Asked Questions" (FAQs) that contained additional detailed information. Participants were asked to identify specific sentences in the FAQs they thought would be important if they were considering taking part in a biorepository. On average, participants identified 7 FAQ sentences as important (mean 6.6, SD 14.7, range: 0-71). No one sentence was highlighted by a majority of participants; further, 34 (60.7%) participants did not highlight any FAQ sentences. Our preliminary findings suggest that our 2-page form contains the information that most prospective participants identify as important. Combining simplified forms with supplemental material for those participants who desire more information could help minimize consent form length and complexity, allowing the most substantively material information to be better highlighted and enabling potential participants to read the form and ask questions more effectively.
A simplified dynamic analysis for reactor piping systems under blowdown conditions
Chen, M.M.
1975-01-01
In the design of pipelines in a nuclear power plant for blowdown conditions, it is customary to conduct dynamic analysis of the piping system to obtain the responses and the resulting stresses. Calculations are repeated for each design modification in piping geometry or supporting system until the design codes are met. The numerical calculations are, in general, very costly and time consuming. Until now, there have been no simple means of calculating the dynamic responses for the design. The proposed method reduces the dynamic calculation to a quasi-static one, and can be beneficially used for preliminary design. The method is followed by a complete dynamical analysis to improve the final results. The new formulations greatly simplify the numerical computation and provide design guides. When used to design a given piping system, the method saved approximately one order of magnitude of computer time. The approach can also be used for other types of structures
Conceptual design of simplified PWR
Tabata, Hiroaki
1996-01-01
The limited availability of sites for nuclear power plants in Japan makes plants with higher power ratings more desirable. Having no intention of constructing medium-sized plants as next-generation standard plants, Japanese utilities are interested in applying passive technologies to large ones. Japanese utilities have therefore studied large passive plants based on the AP600 and SBWR as alternative future LWRs. In a joint effort to develop a new generation of nuclear power plant which is more friendly to operators and maintenance personnel and is economically competitive with alternative sources of power generation, JAPC and the Japanese utilities started a study to modify the AP600 and SBWR in order to accommodate Japanese requirements. During a six-year program up to 1994, basic concepts for a 1000 MWe class Simplified PWR (SPWR) and Simplified BWR (SBWR) were developed, though several areas remain to be improved. These studies have now stepped into the phase of reducing construction cost and searching for the maximum power rating that can be attained by reasonably practical technology. These results also suggest that it is promising to develop a large 3-loop passive plant (∼1200 MWe). Since Korea mainly deals with PWRs, this paper summarizes the SPWR study. The SPWR is jointly studied by JAPC, the Japanese PWR utilities, EdF, WH and Mitsubishi Heavy Industries. Using the AP600 reference design as a basis, we enlarged the plant size to 3 loops and added engineering features to conform with Japanese practice and the utilities' preference. The SPWR program definitively confirmed the feasibility of a passive plant with an NSSS rating of about 1000 MWe and 3 loops. (J.P.N.)
Simplified predictive models for CO2 sequestration performance assessment
Mishra, Srikanta [Battelle Memorial Inst., Columbus, OH (United States); Ganesh, Priya [Battelle Memorial Inst., Columbus, OH (United States); Schuetter, Jared [Battelle Memorial Inst., Columbus, OH (United States); He, Jincong [Battelle Memorial Inst., Columbus, OH (United States); Jin, Zhaoyang [Battelle Memorial Inst., Columbus, OH (United States); Durlofsky, Louis J. [Battelle Memorial Inst., Columbus, OH (United States)
2015-09-30
CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical-simulation-based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning-based modeling, and (3) reduced-order-method-based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and the storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) a classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin
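The quadratic-response-surface proxy of the second category amounts to an ordinary least-squares fit of a full quadratic polynomial to simulator outputs at the design points. The sketch below shows that fitting step on a synthetic function; the data, design, and helper name are illustrative, not the report's simulation domain.

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Fit y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j by least squares,
    the form of proxy model built on a Box-Behnken-style design."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X[:, i] for i in range(d)]                   # linear terms
    cols += [X[:, i] * X[:, j]                            # squares + interactions
             for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, beta

# Recover a known quadratic y = 1 + 2*x0 + 3*x0*x1 from noise-free samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 1 + 2 * X[:, 0] + 3 * X[:, 0] * X[:, 1]
A, beta = fit_quadratic_surface(X, y)
pred = A @ beta
```

Once fitted, evaluating the proxy is a single matrix product, which is what makes such surrogates cheap enough for probabilistic screening compared to a compositional simulation.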
From computer-assisted intervention research to clinical impact: The need for a holistic approach.
Ourselin, Sébastien; Emberton, Mark; Vercauteren, Tom
2016-10-01
The early days of the field of medical image computing (MIC) and computer-assisted intervention (CAI), when publishing a strong self-contained methodological algorithm was enough to produce impact, are over. As a community, we now have a substantial responsibility to translate our scientific progress into improved patient care. In the field of computer-assisted interventions, the emphasis is also shifting from the mere use of well-known established imaging modalities and position trackers to the design and combination of innovative sensing, elaborate computational models and fine-grained clinical workflow analysis to create devices with unprecedented capabilities. The barriers to translating such devices in the complex and understandably heavily regulated surgical and interventional environment can seem daunting. Whether we leave the translation task mostly to our industrial partners or welcome, as researchers, an important share of it is up to us. We argue that embracing the complexity of surgical and interventional sciences is mandatory to the evolution of the field. Being able to do so requires large-scale infrastructure and a critical mass of expertise that very few research centres have. In this paper, we emphasise the need for a holistic approach to computer-assisted interventions where clinical, scientific, engineering and regulatory expertise are combined as a means of moving towards clinical impact. To ensure that the breadth of infrastructure and expertise required for translational computer-assisted intervention research does not lead to a situation where the field advances only thanks to a handful of exceptionally large research centres, we also advocate that solutions need to be designed to lower the barriers to entry. Inspired by fields such as particle physics and astronomy, we claim that centralised very large innovation centres with state-of-the-art technology and health technology assessment capabilities backed by core support staff and open
Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.
2013-05-01
In the context of biofilm growth in porous media, we developed high-performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Indeed, biofilms are consortia of micro-organisms that develop in polymeric extracellular substances, generally located at fluid-solid interfaces such as the pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods, where they allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances have, however, provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the velocity field of fluids through pores on complex geometries that are described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport phenomena in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high-performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
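The final averaging step, from a pore-scale Stokes velocity field to a sample-scale permeability, follows Darcy's law. The one-liner below shows that step only (not the MPI Stokes solver itself), for a 1D single-phase average with illustrative numbers.

```python
def darcy_permeability(mean_velocity, viscosity, length, pressure_drop):
    """Darcy's law rearranged for permeability: k = mu * <u> * L / dP.
    <u> is the volume-averaged (superficial) velocity from the Stokes
    solution; all inputs in SI units. Illustrative averaging step only."""
    return viscosity * mean_velocity * length / pressure_drop

# Illustrative numbers: water (mu = 1e-3 Pa.s) through a 1 cm sample
# under a 1 kPa pressure drop, with a 1e-4 m/s averaged velocity.
k = darcy_permeability(mean_velocity=1e-4, viscosity=1e-3,
                       length=0.01, pressure_drop=1e3)  # -> 1e-12 m^2 (~1 darcy)
```

The expensive part in the work described above is obtaining the averaged velocity on billion-cell tomographic geometries; the permeability itself then falls out of this algebraic relation.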
Towards the next generation of simplified Dark Matter models
Albert, Andreas
This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. Based on two concrete examples, we show how existing simplified DM models (SDMM) can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. This discovery scenario uses the recently observed excess in the high-mass diphoton searches of...
Malmberg, T.
1986-08-01
Within the context of the stability analysis of the cryostat of a fusion reactor, the question was raised whether or not the rather lengthy conventional stability analysis can be circumvented by applying a simplified strategy based on common linear Finite Element computer programs. This strategy involves the static linear deformation analysis of the structure with and without imperfections. For some simple stability problems this approach has been shown to be successful. The purpose of this study is to derive a general proof of the validity of this approach for thin shells of arbitrary geometry under hydrostatic pressure or dead loading along the boundary. This general assessment involves two types of analyses: 1) A general stability analysis for thin shells; this is based on a simple nonlinear shell theory and a stability criterion in the form of the neutral (indifferent) equilibrium condition. This result is taken as the reference solution. 2) A general linear deformation analysis for thin imperfect shells and the definition of a suitable scalar parameter (β-parameter) which should represent the reciprocal of the critical load factor. It is shown that the simplified strategy (the ''β-parameter approach'') is generally not capable of predicting the actual critical load factor, irrespective of whether there is hydrostatic pressure loading or dead loading along the edge of the shell. This general result is in contrast to the observations made for some simple stability problems. Nevertheless, the results of this study do not exclude the possibility that the simplified strategy will give reasonable approximate solutions, at least for a restricted class of stability problems. (orig./HP) [de
Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.
Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian
2017-05-01
Alcohol use disorder (AUD), and addiction in general, is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish, and relapse after detoxification is common and can be promoted by consumption of small amounts of alcohol as well as by exposure to alcohol-associated cues or stress. While the environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanisms on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase the salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates, such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-Instrumental Transfer can quantify to what degree Pavlovian conditioned stimuli facilitate approach behavior, including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in the psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention.
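The prediction-error signal referred to above is, in its simplest model-free form, the Rescorla-Wagner update: value moves toward the received reward by a fraction of the error. The sketch below is a minimal illustration with made-up parameters, not a model fitted to any of the cited data.

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Minimal model-free learner: on each trial the value V is updated by
    the prediction error delta = r - V, scaled by learning rate alpha.
    Parameters here are illustrative placeholders."""
    V, values = v0, []
    for r in rewards:
        delta = r - V          # prediction error (the quantity whose
        V = V + alpha * delta  # striatal correlates the text mentions)
        values.append(V)
    return values

# Five rewarded trials: value climbs toward 1 but does not reach it.
vals = rescorla_wagner([1, 1, 1, 1, 1])
```

Fitting alpha (and richer model-based components) to individual choice sequences is how such models quantify the individual differences in learning strategy discussed in the abstract.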