Computing Equilibrium Chemical Compositions
Mcbride, Bonnie J.; Gordon, Sanford
1995-01-01
The Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions and aids the calculation of thermodynamic properties of chemical systems. This information is essential in the design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is a version of CET93 designed specifically to run within the 640K memory limit of the MS-DOS operating system. CET93/PC is written in FORTRAN.
International Nuclear Information System (INIS)
Broyd, T.W.
1988-01-01
A brief review of two recent benchmark exercises is presented. These were separately concerned with the equilibrium chemistry of groundwater and the geosphere migration of radionuclides, and involved the use of a total of 19 computer codes by 11 organisations in Europe and Canada. A similar methodology was followed for each exercise, in that a series of hypothetical test cases was used to explore the limits of each code's application and so provide an overview of current modelling potential. Aspects of the user-friendliness of individual codes were also considered. The benchmark studies have benefited participating organisations by providing a means of verifying current codes, and have provided problem data sets by which future models may be compared. (author)
Computational methods for reversed-field equilibrium
International Nuclear Information System (INIS)
Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.
1980-01-01
Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and the computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. The application of a Green's function to specify the boundary condition of the Grad-Shafranov equation is discussed. Hill's vortex formulas used to verify certain computations are detailed. The use of computer software to implement the computational methods is described.
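To make the numerics concrete, the following is a minimal sketch, not the authors' code, of a finite-difference solve of the Grad-Shafranov equation Delta* psi = psi_rr - psi_r/r + psi_zz = -(C1*r^2 + C2) by Jacobi relaxation on a rectangle with psi = 0 on the boundary; the grid, the source constants C1 and C2, and the boundary values are illustrative assumptions.

```python
# Minimal Grad-Shafranov sketch: Jacobi relaxation of
#   Delta* psi = psi_rr - psi_r / r + psi_zz = rhs
# on an (r, z) rectangle with psi = 0 on the boundary (assumed setup).
import numpy as np

nr, nz = 65, 65
r = np.linspace(0.5, 1.5, nr)          # avoid r = 0 (axis singularity)
z = np.linspace(-0.5, 0.5, nz)
dr, dz = r[1] - r[0], z[1] - z[0]
R = np.tile(r, (nz, 1)).T              # R[i, j] = r_i
C1, C2 = 1.0, 0.1                      # assumed Solov'ev-like source constants
rhs = -(C1 * R**2 + C2)

psi = np.zeros((nr, nz))
for it in range(5000):                 # Jacobi sweeps
    p = psi.copy()
    psi[1:-1, 1:-1] = (
        (p[2:, 1:-1] + p[:-2, 1:-1]) / dr**2
        - (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * dr * R[1:-1, 1:-1])
        + (p[1:-1, 2:] + p[1:-1, :-2]) / dz**2
        - rhs[1:-1, 1:-1]
    ) / (2 / dr**2 + 2 / dz**2)
    if np.max(np.abs(psi - p)) < 1e-10:
        break

print("iterations:", it, "  max psi:", psi.max())
```

A production solver would use a faster elliptic method (multigrid, cyclic reduction) and couple the source terms to psi itself, but the fixed-point structure is the same.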
Computation of Phase Equilibrium and Phase Envelopes
DEFF Research Database (Denmark)
Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp
In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes. Finally, we…
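A minimal sketch, not the report's implementation, of the inner loop of a PT flash: solving the Rachford-Rice equation for the vapor fraction at fixed K-values. The composition z and the K-values below are made-up inputs; in the report's setting the K-values would come from the fugacity coefficients.

```python
# PT-flash inner-loop sketch: solve the Rachford-Rice equation
#   f(V) = sum_i z_i * (K_i - 1) / (1 + V * (K_i - 1)) = 0
# for the vapor fraction V at fixed equilibrium ratios K_i.
import numpy as np
from scipy.optimize import brentq

z = np.array([0.5, 0.3, 0.2])   # overall mole fractions (assumed)
K = np.array([2.5, 1.1, 0.3])   # K-values (assumed illustrative numbers)

def rachford_rice(V):
    return np.sum(z * (K - 1.0) / (1.0 + V * (K - 1.0)))

# A root in (0, 1) exists when f(0) > 0 > f(1) (two-phase region).
V = brentq(rachford_rice, 1e-12, 1.0 - 1e-12)
x = z / (1.0 + V * (K - 1.0))   # liquid-phase composition
y = K * x                       # vapor-phase composition
print(f"V = {V:.4f}", x, y)
```

In a full flash calculation this solve is wrapped in an outer loop that updates the K-values from the fugacity coefficients until the two phases satisfy the equilibrium conditions.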
Teaching Chemical Equilibrium with the Jigsaw Technique
Doymus, Kemal
2008-03-01
This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students' understanding of chemical equilibrium in a first-year general chemistry course. This study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes was randomly assigned as the non-jigsaw group (control) and the other as the jigsaw group (cooperative). Students participating in the jigsaw group were divided into four “home groups”, since the topic of chemical equilibrium is divided into four subtopics (Modules A, B, C and D). Each of these home groups contained four students. The groups were as follows: (1) Home Group A (HGA), representing the equilibrium state and quantitative aspects of equilibrium (Module A); (2) Home Group B (HGB), representing the equilibrium constant and relationships involving equilibrium constants (Module B); (3) Home Group C (HGC), representing altering equilibrium conditions: Le Chatelier’s principle (Module C); and (4) Home Group D (HGD), representing calculations with equilibrium constants (Module D). The home groups then broke apart, like pieces of a jigsaw puzzle, and the students moved into jigsaw groups consisting of members from the other home groups who were assigned the same portion of the material. The jigsaw groups were then in charge of teaching their specific subtopic to the rest of the students in their learning group. The main data collection tool was a Chemical Equilibrium Achievement Test (CEAT), which was applied to both the jigsaw and non-jigsaw groups. The results indicated that the jigsaw group was more successful than the non-jigsaw group (individual learning method).
Computational studies in tokamak equilibrium and transport
International Nuclear Information System (INIS)
Braams, B.J.
1986-01-01
This thesis is concerned with some problems arising in the magnetic confinement approach to controlled thermonuclear fusion. The work addresses the numerical modelling of equilibrium and transport properties of a confined plasma and the interpretation of experimental data. The thesis is divided into two parts. Part 1 is devoted to some aspects of the MHD equilibrium problem, both in the 'direct' formulation (given an equation for the plasma current, the corresponding equilibrium is to be determined) and in the 'inverse' formulation (the interpretation of measurements at the plasma edge). Part 2 is devoted to numerical studies of the edge plasma. The appropriate Navier-Stokes system of fluid equations is solved in a two-dimensional geometry. The main interest of this work is to develop an understanding of particle and energy transport in the scrape-off layer and onto material boundaries, and also to contribute to the conceptual design of the NET/INTOR tokamak reactor experiment. (Auth.)
Computing Properties Of Chemical Mixtures At Equilibrium
Mcbride, B. J.; Gordon, S.
1995-01-01
Scientists and engineers need data on chemical equilibrium compositions to calculate theoretical thermodynamic properties of chemical systems. This information is essential in the design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93 is a general program that calculates chemical equilibrium compositions and properties of mixtures for any chemical system for which thermodynamic data are available. It includes thermodynamic data for more than 1,300 gaseous and condensed species and thermal-transport data for 151 gases. It is written in FORTRAN 77.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Computer simulations of equilibrium magnetization and microstructure in magnetic fluids
Rosa, A. P.; Abade, G. C.; Cunha, F. R.
2017-09-01
In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres of equal diameter. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, which replicates a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and from Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate the suspension microstructure by computing the radial pair-distribution function g(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results of g(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and of dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures such as dimer and trimer chains, with trimers having a probability of forming an order of magnitude lower than that of dimers. Using Monte Carlo with lattice sums, the density distribution function g₂(r) is also examined. Whenever this function is different from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions. Results show that at dilute regimes and with moderate dipole…
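As an illustration of the minimum-image machinery such simulations rest on, the following is a minimal sketch, not the authors' code, of a Metropolis Monte Carlo run for plain hard spheres (no dipolar interactions) that accumulates g(r); the particle number, volume fraction, step size and sweep counts are made-up parameters.

```python
# Metropolis Monte Carlo for hard spheres with the minimum-image
# convention, accumulating the pair-distribution function g(r).
import numpy as np

rng = np.random.default_rng(0)
N, phi, d = 64, 0.15, 1.0                      # particles, volume fraction, diameter
L = (N * np.pi * d**3 / (6.0 * phi)) ** (1/3)  # box edge from the volume fraction
pos = rng.uniform(0, L, (N, 3))                # may start overlapped; MC relaxes it

def overlaps(i, trial):
    dr = pos - trial                           # displacement to all particles
    dr -= L * np.round(dr / L)                 # minimum-image convention
    r2 = np.sum(dr**2, axis=1)
    r2[i] = np.inf                             # ignore self
    return np.any(r2 < d**2)

nbins, rmax = 50, L / 2
hist = np.zeros(nbins)
for sweep in range(2000):
    for i in range(N):                         # one trial move per particle
        trial = (pos[i] + rng.uniform(-0.1, 0.1, 3)) % L
        if not overlaps(i, trial):
            pos[i] = trial
    if sweep >= 1000:                          # sample after equilibration
        for i in range(N - 1):
            dr = pos[i+1:] - pos[i]
            dr -= L * np.round(dr / L)
            r = np.sqrt(np.sum(dr**2, axis=1))
            hist += np.histogram(r, bins=nbins, range=(0, rmax))[0]

# Normalize by the ideal-gas expectation to obtain g(r).
edges = np.linspace(0, rmax, nbins + 1)
shell = 4/3 * np.pi * (edges[1:]**3 - edges[:-1]**3)
norm = 1000 * N * (N - 1) / 2 * shell / L**3   # 1000 sampled sweeps
g = hist / norm
print(g[:10])
```

The dipolar case adds a Boltzmann acceptance test on the dipole-dipole energy, which is where the minimum-image versus Ewald-sum distinction studied in the paper becomes important.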
Equilibrium gas-oil ratio measurements using a microfluidic technique.
Fisher, Robert; Shah, Mohammad Khalid; Eskin, Dmitry; Schmidt, Kurt; Singh, Anil; Molla, Shahnawaz; Mostowfi, Farshid
2013-07-07
A method for measuring the equilibrium GOR (gas-oil ratio) of reservoir fluids using microfluidic technology is developed. Live crude oils (crude oil with dissolved gas) are injected into a long serpentine microchannel at reservoir pressure. The fluid forms a segmented flow as it travels through the channel. Gas and liquid phases are produced from the exit port of the channel, which is maintained at atmospheric conditions. The process is analogous to the production of crude oil from a formation. By using compositional analysis and thermodynamic principles of hydrocarbon fluids, we show that excellent equilibrium between the produced gas and liquid phases is achieved. The GOR of a reservoir fluid is a key parameter in determining the equation of state of a crude oil. Equations of state that are commonly used in petroleum engineering and reservoir simulations describe the phase behaviour of a fluid at the equilibrium state. Therefore, to accurately determine the coefficients of an equation of state, the produced gas and liquid phases have to be as close to thermodynamic equilibrium as possible. In the examples presented here, the GORs measured with the microfluidic technique agreed with GOR values obtained from conventional methods. Furthermore, when compared to conventional methods, the microfluidic technique was simpler to perform, required less equipment, and yielded better repeatability.
Computer Assisted Audit Techniques
Directory of Open Access Journals (Sweden)
Eugenia Iancu
2007-01-01
From the modern point of view, audit takes into account especially the information systems, representing mainly the examination performed by a professional as regards the manner of developing an activity by comparing it to the quality criteria specific to that activity. Having as reference point this very general definition of auditing, it must be emphasized that the best known segment of auditing is the financial audit, which evolved in parallel with accountancy. The present-day phase of development of the financial audit has as its main trait the internationalization of the accounting profession. Worldwide, there are multinational companies that offer services in the financial auditing, taxation and consultancy domain. The auditors, both natural persons and audit companies, take part in the work of the national and international authorities that set out norms in the accountancy and auditing domain. Computer assisted audit techniques can be classified in several manners according to the approaches used by the auditor. The best-known techniques fall into the following categories: test data techniques, integrated test, parallel simulation, reviewing program logic, programs developed upon request, generalized audit software, utility programs and expert systems.
Computing Nash Equilibrium in Wireless Ad Hoc Networks
DEFF Research Database (Denmark)
Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.
2012-01-01
This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...
Computer program determines chemical composition of physical system at equilibrium
Kwong, S. S.
1966-01-01
A FORTRAN 4 digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. It uses a free-energy minimization method, with the solution of the problem reduced to mathematical operations, without concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.
Computer Program for Calculation of Complex Chemical Equilibrium Compositions and Applications. Part 1: Analysis
Gordon, Sanford; Mcbride, Bonnie J.
1994-01-01
This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
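The minimization-of-free-energy approach the report describes can be illustrated with a minimal sketch (not the CEA code itself): minimize the dimensionless Gibbs energy of an ideal-gas mixture subject to elemental mass-balance constraints. The species set and the g_i/RT values below are rough illustrative numbers at roughly 3000 K and 1 bar, not CEA data.

```python
# Gibbs-energy minimization sketch for an ideal-gas H2/O2/H2O/OH/H/O mix:
#   minimize  G/RT = sum_i n_i * (g_i/RT + ln(n_i / n_tot))   (P = 1 bar)
#   subject to elemental mass balance A n = b.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O", "OH", "H", "O"]
mu0 = np.array([-20.3, -26.0, -46.5, -22.2, -2.0, -5.0])  # assumed g_i/RT
# Element matrix: rows H, O; columns follow `species`.
A = np.array([[2, 0, 2, 1, 1, 0],    # H atoms per molecule
              [0, 2, 1, 1, 0, 1]])   # O atoms per molecule
b = A @ np.array([2.0, 1.0, 0, 0, 0, 0])  # elements from 2 mol H2 + 1 mol O2

def gibbs(n):
    n = np.maximum(n, 1e-12)                  # keep logarithms defined
    return np.sum(n * (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(6, 0.5),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-12, None)] * 6, method="SLSQP")
for s, n in zip(species, res.x):
    print(f"{s:4s} {n: .4f} mol")
```

CEA itself solves the equivalent Lagrangian conditions by a tailored Newton-Raphson iteration rather than a generic constrained optimizer, which is what makes it fast and robust across condensed phases and ionized species.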
Computation of thermodynamic equilibrium in systems under stress
Vrijmoed, Johannes C.; Podladchikov, Yuri Y.
2016-04-01
Metamorphic reactions may be partly controlled by the local stress distribution, as suggested by observations of phase assemblages around garnet inclusions related to an amphibolite shear zone in granulite of the Bergen Arcs in Norway. A particular example presented in fig. 14 of Mukai et al. [1] is discussed here. A garnet crystal embedded in a plagioclase matrix is replaced on the left side by a high-pressure intergrowth of kyanite and quartz and on the right side by chlorite-amphibole. This texture apparently represents disequilibrium. In this case, the minerals adapt to the low-pressure ambient conditions only where fluids were present. Alternatively, here we compute that this particular low-pressure and high-pressure assemblage around a stressed rigid inclusion such as garnet can coexist in equilibrium. To do the computations we developed the Thermolab software package. The core of the software package consists of Matlab functions that generate the Gibbs energy of minerals and melts from the Holland and Powell database [2] and of aqueous species from the SUPCRT92 database [3]. The most up-to-date solid solutions are included in a general formulation. The user provides a Matlab script to do the desired calculations using the core functions. Gibbs energies of all minerals, solutions and species are benchmarked versus THERMOCALC, Perple_X [4] and SUPCRT92 and are reproduced within round-off computer error. Multi-component phase diagrams have been calculated using Gibbs minimization to benchmark against THERMOCALC and Perple_X. The Matlab script to compute equilibrium in a stressed system needs only two modifications of the standard phase diagram script. Firstly, the Gibbs energy of the phases considered in the calculation is generated for multiple values of thermodynamic pressure. Secondly, for the Gibbs minimization the proportion of the system at each particular thermodynamic pressure needs to be constrained. The user decides which part of the stress tensor is input as thermodynamic…
Pharmaceutical industry and trade liberalization using computable general equilibrium model.
Barouni, M; Ghaderi, H; Banouei, Aa
2012-01-01
Computable general equilibrium models are known as a powerful instrument in economic analyses and have been widely used to evaluate trade liberalization effects. The purpose of this study was to assess the impacts of trade openness on the pharmaceutical industry using a CGE model. Using a computable general equilibrium model, the effects of decreases in tariffs, as a symbol of trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios in this study. The first scenario was the effect of decreases in tariffs on pharmaceutical products of 10, 30, 50, and 100 percent on key drug variables, and the second was the effect of the same decreases in all other sectors except pharmaceutical products on vital and economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The simulation results demonstrated that the first scenario increased import, export, drug supply to markets and household consumption, while import, export, supply of products to market, and household consumption of pharmaceutical products would on average decrease in the second scenario. Ultimately, societal welfare would improve in all scenarios. We present and synthesize the CGE model, which could be used to analyze trade liberalization policy issues in developing countries (like Iran), and thus provide information that policymakers can use to improve pharmacy economics.
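For readers unfamiliar with what a CGE solver actually computes, here is a toy sketch, far simpler than the paper's model: a two-good, two-household pure-exchange economy with Cobb-Douglas preferences, where the market-clearing relative price is found numerically. All endowments and preference shares are made-up numbers.

```python
# Toy general-equilibrium sketch: find the relative price that clears
# markets in a 2-good, 2-household exchange economy (Cobb-Douglas).
import numpy as np
from scipy.optimize import brentq

alpha = np.array([0.3, 0.7])      # good-1 expenditure shares of households A, B
w1 = np.array([1.0, 0.0])         # endowments of good 1
w2 = np.array([0.0, 1.0])         # endowments of good 2

def excess_demand_good1(p1):      # good 2 is the numeraire, p2 = 1
    income = p1 * w1 + w2
    demand1 = alpha * income / p1 # Cobb-Douglas demand for good 1
    return demand1.sum() - w1.sum()

p1 = brentq(excess_demand_good1, 1e-6, 1e6)
print(f"equilibrium relative price p1/p2 = {p1:.4f}")
# By Walras' law, the market for good 2 then clears automatically.
```

Applied CGE models like the one in the paper scale this same fixed-point logic to many sectors, households and tax instruments, calibrated to a social accounting matrix.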
Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.
Heald, Emerson F.
1978-01-01
Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
HINT computation of LHD equilibrium with zero rotational transform surface
International Nuclear Information System (INIS)
Kanno, Ryutaro; Toi, Kazuo; Watanabe, Kiyomasa; Hayashi, Takaya; Miura, Hideaki; Nakajima, Noriyoshi; Okamoto, Masao
2004-01-01
A Large Helical Device (LHD) equilibrium having a zero rotational transform surface is studied by using the three-dimensional MHD equilibrium code HINT. We find that the equilibrium exists, but with the formation of two or three n=0 islands composing a homoclinic-type structure near the center, where n is the toroidal mode number. The LHD equilibrium maintains this structure when the equilibrium beta increases. (author)
Higher-order techniques in computational electromagnetics
Graglia, Roberto D
2016-01-01
Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy and reliability, and reduce the computational cost, of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.
Computable general equilibrium model fiscal year 2013 capability development report
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-17
This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC improved the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.
Directory of Open Access Journals (Sweden)
Vahid Dadashi
2016-02-01
This paper is dedicated to the introduction of a new class of equilibrium problems, named generalized multivalued equilibrium-like problems, which includes the classes of hemiequilibrium problems, equilibrium-like problems, equilibrium problems, hemivariational inequalities, and variational inequalities as special cases. By utilizing the auxiliary principle technique, some new predictor-corrector iterative algorithms for solving them are suggested and analyzed. The convergence analysis of the proposed iterative methods requires either partially relaxed monotonicity or joint pseudomonotonicity of the bifunctions involved in the generalized multivalued equilibrium-like problem. The results obtained in this paper include several new and known results as special cases.
Computer animation algorithms and techniques
Parent, Rick
2012-01-01
Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. The first edition establ…
A rapid method for the computation of equilibrium chemical composition of air to 15000 K
Prabhu, Ramadas K.; Erickson, Wayne D.
1988-01-01
A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves algebraically combining seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often-used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
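The flavor of this reduced-equation approach, though not the authors' eleven-species system, can be shown on the single dissociation equilibrium O2 ⇌ 2O at fixed temperature and pressure, where the mass balance collapses the problem to one nonlinear equation solved by Newton iteration; the equilibrium constant Kp below is an assumed illustrative value, not the paper's data.

```python
# Reduced-equation Newton sketch for O2 <-> 2 O at fixed T, P.
# With a = degree of dissociation, the mole fractions are
#   x_O2 = (1 - a) / (1 + a),  x_O = 2a / (1 + a),
# and the equilibrium condition x_O^2 * P / x_O2 = Kp reduces to
#   f(a) = 4 a^2 P / (1 - a^2) - Kp = 0.
Kp = 1.2      # assumed equilibrium constant (bar)
P = 1.0       # pressure (bar)

def f(a):
    return 4.0 * a * a * P / (1.0 - a * a) - Kp

def fprime(a):
    return 8.0 * a * P / (1.0 - a * a) ** 2

a = 0.5                       # initial guess
for _ in range(50):           # Newton iteration
    step = f(a) / fprime(a)
    a -= step
    if abs(step) < 1e-12:
        break

print(f"alpha = {a:.6f}, x_O2 = {(1-a)/(1+a):.4f}, x_O = {2*a/(1+a):.4f}")
```

The paper's method does the analogous algebraic elimination across all eleven air species, which is why it beats a general free-energy minimizer on speed.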
Vermorel, Romain; Oulebsir, Fouad; Galliero, Guillaume
2017-09-14
The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable, because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contributions of the different physical mechanisms prone to impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit the interpretation of results obtained on single-layer membranes by means of complex non-equilibrium methods.
Computing diffusivities from particle models out of equilibrium
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and to have Gaussian fluctuations, but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow comparison with analytic solutions.
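As a baseline for what such methods must reproduce, here is a minimal sketch, not the paper's estimator, that recovers the diffusivity of the simplest of the three models, independent random walkers, from the slope of the mean-squared displacement; the walker counts and step sizes are arbitrary choices.

```python
# Diffusivity from independent random walkers via the mean-squared
# displacement: in one dimension, MSD(t) = 2 * D * t.
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps, dt, dx = 5000, 1000, 1.0, 1.0

# Lattice random walk: +-dx per step  =>  D_theory = dx^2 / (2 * dt).
steps = rng.choice([-dx, dx], size=(n_walkers, n_steps))
traj = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(traj**2, axis=0)
D_est = np.polyfit(t, msd, 1)[0] / 2.0   # slope of MSD vs t, over 2d (d = 1)

print(f"D estimated = {D_est:.4f}, theory = {dx**2 / (2*dt):.4f}")
```

The paper's contribution is precisely that it does not need this globally equilibrated setting: it extracts a possibly density-dependent diffusivity while the system evolves out of equilibrium.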
Computation of hypersonic axisymmetric flows of equilibrium gas over blunt bodies
International Nuclear Information System (INIS)
Hejranfar, K.; Esfahanian, V.; Moghadam, R.K.
2005-01-01
An appropriate combination of the thin-layer Navier-Stokes (TLNS) and parabolized Navier-Stokes (PNS) solvers is used to accurately and efficiently compute hypersonic flowfields of equilibrium air around blunt-body configurations. The TLNS equations are solved in the nose region to provide the initial data plane needed for the solution of the PNS equations. The PNS equations are then employed to efficiently compute the flowfield for the afterbody region by using a space-marching procedure. Both the TLNS and the PNS equations are numerically solved by using the implicit non-iterative finite-difference algorithm of Beam and Warming. A shock-fitting technique is used in both the TLNS and PNS codes to obtain an accurate solution in the vicinity of the shock. To validate the results of the developed TLNS code, hypersonic laminar flow over a sphere at a Mach number of 11.26 is computed. To demonstrate the accuracy and efficiency of the present TLNS-PNS methodology, computations are performed for hypersonic flow over a 5° long slender blunt cone at a Mach number of 19.25. The results of these computations are found to be in good agreement with available numerical and experimental data. The effects of real gas on the flowfield characteristics are also studied in both the TLNS and PNS solutions. (author)
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics
International Nuclear Information System (INIS)
Sarovar, Mohan; Young, Kevin C
2013-01-01
While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)
The technique for calculation of equilibrium in heterogeneous systems of the InP-GaP-HCl type
International Nuclear Information System (INIS)
Voronin, V.A.; Prokhorov, V.A.; Goliusov, V.A.; Chuchmarev, S.K.
1983-01-01
A technique for the calculation of equilibrium in heterogeneous systems based on A₁³B⁵-A₂³B⁵ solid solutions, implying the use of structural-topological models of chemical equilibrium in the investigated systems, is developed. Chemical equilibrium in the InP-GaP-HCl system is analyzed by means of the suggested technique, and the equilibrium composition of the gas phase is calculated.
Numerical computation of FCT equilibria by inverse equilibrium method
International Nuclear Information System (INIS)
Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki
1986-11-01
FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
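Since the abstract leans on a shooting-type solution of a two-point boundary value problem, the following is a minimal sketch of the shooting method on a generic nonlinear ODE, not the tokamak moment equations themselves; the equation and boundary values are illustrative assumptions.

```python
# Shooting-method sketch for a two-point boundary value problem:
#   y'' = -sin(y),  y(0) = 0,  y(1) = 1.
# Integrate from t = 0 with a trial initial slope and adjust the slope
# until the far boundary condition is met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):                      # state y = [y, y']
    return [y[1], -np.sin(y[0])]

def endpoint_miss(slope):           # mismatch at the far boundary
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

slope = brentq(endpoint_miss, 0.0, 5.0)   # shoot until y(1) = 1
print(f"required initial slope y'(0) = {slope:.6f}")
```

Quasi-linearization, as used in the paper, iterates on a linearized version of the ODE system instead of a scalar root search, but the structure (guess, integrate, correct) is the same.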
International Nuclear Information System (INIS)
Barquin, J.; Centeno, E.; Reneses, J.
2004-01-01
The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)
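The paper's key point, that the oligopolistic equilibrium can be recast as an equivalent minimisation problem, can be illustrated on a far simpler case: a three-firm Cournot market with linear demand, whose Nash equilibrium maximizes a potential function. All demand and cost numbers below are made up and unrelated to the paper's hydro-thermal system.

```python
# Cournot equilibrium via an equivalent optimization: with inverse demand
# p(Q) = a - b*Q and constant marginal costs c_i, the Nash equilibrium
# maximizes the potential
#   P(q) = a*Q - (b/2)*Q^2 - (b/2)*sum(q_i^2) - sum(c_i*q_i),
# whose gradient reproduces each firm's first-order condition
#   a - b*Q - b*q_i - c_i = 0.
import numpy as np
from scipy.optimize import minimize

a, b = 100.0, 1.0                 # inverse demand p = a - b * sum(q)
c = np.array([10.0, 20.0, 30.0])  # marginal costs of three firms

def neg_potential(q):
    Q = q.sum()
    return -(a * Q - 0.5 * b * Q**2 - 0.5 * b * np.sum(q**2) - c @ q)

res = minimize(neg_potential, x0=np.ones(3), bounds=[(0, None)] * 3)
q = res.x
print("Cournot outputs:", q.round(3), " price:", a - b * q.sum())
```

With these numbers the solver returns q = (30, 20, 10) and a price of 40, which one can verify against the first-order conditions by hand.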
Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion
Erickson, W. D.; Prabhu, R. K.
1986-01-01
A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.
Grid computing techniques and applications
Wilkinson, Barry
2009-01-01
"… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" Jose Lloret, Computing Reviews, March 2010
Soft computing techniques in engineering applications
Zhong, Baojiang
2014-01-01
The Soft Computing techniques, which are based on the information processing of biological systems, are now massively used in the areas of pattern recognition, prediction and planning, as well as acting on the environment. Ideally speaking, soft computing is not a subject of homogeneous concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, as well as learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.
Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models
Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.
2016-01-01
A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of
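As a pointer to what the IO half of the comparison involves, here is a minimal sketch of a Leontief input-output loss calculation; the three-sector coefficient matrix and the demand shock are invented numbers, not from the study.

```python
# Leontief quantity model: total output x satisfying x = A x + d, i.e.
#   x = (I - A)^{-1} d,
# evaluated before and after a disaster-induced final-demand shock.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # inter-industry input coefficients
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
d_before = np.array([100.0, 80.0, 60.0])
d_after = np.array([100.0, 50.0, 60.0])   # sector-2 final demand drops

L = np.linalg.inv(np.eye(3) - A)          # Leontief inverse
x_before, x_after = L @ d_before, L @ d_after
print("output loss by sector:", (x_before - x_after).round(2))
```

CGE models replace these fixed coefficients with price-responsive behavior, which is exactly the substitution-versus-rigidity trade-off the comparison in the paper is about.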
Performing an Environmental Tax Reform in a regional Economy. A Computable General Equilibrium
Andre, F.J.; Cardenete, M.A.; Velazquez, E.
2003-01-01
We use a Computable General Equilibrium model to simulate the effects of an Environmental Tax Reform in a regional economy (Andalusia, Spain). The reform involves imposing a tax on CO2 or SO2 emissions and reducing either the Income Tax or the payroll tax of employers to Social Security, and…
Equilibrium chemical reaction of supersonic hydrogen-air jets (the ALMA computer program)
Elghobashi, S.
1977-01-01
The ALMA (axi-symmetrical lateral momentum analyzer) program is concerned with the computation of two-dimensional coaxial jets with large lateral pressure gradients. The jets may be free or confined, laminar or turbulent, reacting or non-reacting. The reaction chemistry is treated as being in equilibrium.
Statistical and Computational Techniques in Manufacturing
2012-01-01
In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, statistical and computational techniques have found several applications in manufacturing, namely modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu…
Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games
Directory of Open Access Journals (Sweden)
Victoria Lozan
2013-10-01
The Pareto-Nash equilibrium set (PNES) is described as the intersection of graphs of efficient response mappings. The problem of computing the PNES in finite multi-objective mixed-strategy games (Pareto-Nash games) is considered, and a method for PNES computing is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.
What is the real role of the equilibrium phase in abdominal computed tomography?
Energy Technology Data Exchange (ETDEWEB)
Salvadori, Priscila Silveira [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina; Costa, Danilo Manuel Cerqueira; Romano, Ricardo Francisco Tavares; Galvao, Breno Vitor Tomaz; Monjardim, Rodrigo da Fonseca; Bretas, Elisa Almeida Sathler; Rios, Lucas Torres; Shigueoka, David Carlos; Caldana, Rogerio Pedreschi; D' Ippolito, Giuseppe, E-mail: giuseppe_dr@uol.com.br [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Department of Diagnostic Imaging
2013-03-15
Objective: To evaluate the role of the equilibrium phase in abdominal computed tomography. Materials and Methods: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography images acquired over a three-month period for different clinical indications. For each study, two reports were issued: one based on the initial analysis of the non-contrast-enhanced, arterial and portal phases only (first analysis), and a second reading of these phases together with the equilibrium phase (second analysis). At the end of both readings, differences between primary and secondary diagnoses were pointed out and recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each of the patients. An extension of Fisher's exact test was utilized to evaluate changes in the primary diagnosis (p < 0.05 as significant). Results: Among the 219 cases reviewed, the absence of the equilibrium phase changed the primary diagnosis in only one case (0.46%; p > 0.999). As regards secondary diagnoses, changes after the second analysis were observed in five cases (2.3%). Conclusion: For clinical scenarios such as cancer staging, acute abdomen and investigation of abdominal collections, the equilibrium phase is dispensable and does not offer any significant diagnostic contribution. (author)
Computing a quasi-perfect equilibrium of a two-player game
DEFF Research Database (Denmark)
Miltersen, Peter Bro; Sørensen, Troels Bjerre
2010-01-01
Refining an algorithm due to Koller, Megiddo and von Stengel, we show how to apply Lemke's algorithm for solving linear complementarity programs to compute a quasi-perfect equilibrium in behavior strategies of a given two-player extensive-form game of perfect recall. A quasi-perfect equilibrium … of a zero-sum game, we devise variants of the algorithm that rely on linear programming rather than linear complementarity programming and use the simplex algorithm or other algorithms for linear programming rather than Lemke's algorithm. We argue that these latter algorithms are relevant for recent…
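The linear-programming route the abstract mentions for zero-sum games can be sketched on an ordinary matrix game in strategic form, far simpler than the sequence-form programs the paper works with; the payoff matrix below is a made-up example.

```python
# Solving a two-player zero-sum matrix game by linear programming:
#   minimize sum(u)  s.t.  B^T u >= 1, u >= 0,
# where B is the (shifted, positive) payoff matrix; then the game value
# is v = 1 / sum(u) and the row player's strategy is x = v * u.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, -1.0, 2.0],   # row player's payoffs (assumed)
              [1.0,  2.0, -1.0]])
shift = 1.0 - A.min()             # make all payoffs positive so v > 0
B = A + shift

res = linprog(c=np.ones(B.shape[0]),
              A_ub=-B.T, b_ub=-np.ones(B.shape[1]),
              bounds=[(0, None)] * B.shape[0], method="highs")
v = 1.0 / res.x.sum()
x = v * res.x
print("row strategy:", x.round(4), " game value:", v - shift)
```

The paper's refinement works on the sequence form of an extensive-form game and perturbs the program so that the computed equilibrium is quasi-perfect, not merely optimal.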
International Nuclear Information System (INIS)
Effects of periodic boundary conditions on equilibrium properties of computer simulated fluids. I. Theory
Pratt, L.R.; Haan, S.W.
1981-01-01
An exact formal theory for the effects of periodic boundary conditions on the equilibrium properties of computer-simulated classical many-body systems is developed. This is done by observing that use of the usual periodic conditions is equivalent to the study of a certain supermolecular liquid, in which a supermolecule is a polyatomic molecule of infinite extent composed of one of the physical particles in the system plus all its periodic images. For this supermolecular system in the grand ensemble, all the cluster expansion techniques used in the study of real molecular liquids are directly applicable. As expected, particle correlations are translationally uniform, but explicitly anisotropic. When the intermolecular potential energy functions are of short enough range, or cut off, so that the minimum image method is used, evaluation of the cluster integrals is dramatically simplified. In this circumstance, a large and important class of cluster expansion contributions can be summed exactly and expressed in terms of the correlation functions which result when the system size is allowed to increase without bound. This result yields a simple and useful approximation to the corrections to the particle correlations due to the use of periodic boundary conditions with finite systems. Numerical applications of these results are reported in the following paper.
Computer Program for Calculation of Complex Chemical Equilibrium Compositions, Rocket Performance, Incident and Reflected Shocks, and Chapman-Jouguet Detonations
Gordon, S.; Mcbride, B. J.
1976-01-01
A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
New coding technique for computer generated holograms.
Haskell, R. E.; Culver, B. C.
1972-01-01
A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.
Energy Technology Data Exchange (ETDEWEB)
Park, Ik Kyu; Cho, Heong Kyu; Kim, Jong Tae; Yoon, Han Young; Jeong, Jae Jun
2007-12-15
A computational model for transient, three-dimensional two-phase flows was developed by using an 'unstructured-FVM-based, non-staggered, semi-implicit numerical scheme' that considers thermally non-equilibrium droplets. The assumption of thermal equilibrium between the liquid and the droplets made in previous studies was no longer used, and three energy conservation equations, for vapor, liquid, and liquid droplets, were set up. Thus, nine conservation equations for mass, momentum, and energy were established to simulate two-phase flows. In this report, the governing equations and a semi-implicit numerical scheme for transient one-dimensional two-phase flows were described, considering the thermal non-equilibrium between the liquid and the liquid droplets. A comparison with the previous model, which assumed thermal equilibrium between the liquid and the liquid droplets, was also reported.
Computational techniques of the simplex method
Maros, István
2003-01-01
Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for a system of elements consisting of electrons, H, He, C, O, and N. In all, 24 chemical species are included.
Why Enforcing its UNCAC Commitments Would be Good for Russia: A Computable General Equilibrium Model
Directory of Open Access Journals (Sweden)
Michael P. BARRY
2010-05-01
Russia has ratified the UN Convention Against Corruption but has not successfully enforced it. This paper uses updated GTAP data to reconstruct a computable general equilibrium (CGE) model to quantify the macroeconomic effects of corruption in Russia. Corruption is found to cost the Russian economy billions of dollars a year. A conclusion of the paper is that implementing and enforcing the UNCAC would be of significant economic benefit to Russia and its people.
Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu
2016-01-01
To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...
Can Migrants Save Greece From Ageing? A Computable General Equilibrium Approach Using G-AMOS.
Nikos Pappas
2008-01-01
The population of Greece is projected to age over the course of the next three decades. This paper combines demographic projections with a multi-period economic Computable General Equilibrium (CGE) modelling framework to assess the macroeconomic impact of these future demographic trends. The simulation strategy adopted in Lisenkova et al. (2008) is also employed here. The size and age composition of the population in the future depend on current and future values of demographic parameters suc…
Computational intelligence techniques in health care
Zhou, Wengang; Satheesh, P
2016-01-01
This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.
An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)
Pratt, B. S.; Pratt, D. T.
1984-01-01
A user-friendly, menu-driven, interactive computer program known as EQLBRM is discussed, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy. The program is developed primarily as an instructional tool, to be run on small computers, that allows the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
Comparison of radiographic technique by computer simulation
International Nuclear Information System (INIS)
Brochi, M.A.C.; Ghilardi Neto, T.
1989-01-01
A computational algorithm to compare radiographic techniques (kVp, mAs and filters) is developed, based on fixing the parameters that define the image, such as optical density and contrast. Before the experiment, the results were applied to a radiograph of the thorax. (author) [pt]
Application of computer technique in SMCAMS
International Nuclear Information System (INIS)
Lu Deming
2001-01-01
A series of applications of computer techniques in SMCAMS physics design and magnetic field measurement is described, including digital calculation of electromagnetic fields, beam dynamics, calculation of beam injection and extraction, and mapping and shaping of the magnetic field.
Approximate Computing Techniques for Iterative Graph Algorithms
Energy Technology Data Exchange (ETDEWEB)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram
2017-12-18
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
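One heuristic named in the abstract, loop perforation, can be sketched on PageRank: skip a fraction of the per-node updates in each iteration and compare against the exact result. The graph, skip rate and iteration count below are made up; this is not the paper's implementation.

```python
# Loop-perforation sketch for PageRank: in each sweep, a random fraction
# of nodes keeps its old value (its update is "perforated" away).
import numpy as np

rng = np.random.default_rng(2)
n = 200
adj = (rng.random((n, n)) < 0.05).astype(float)   # random directed graph
np.fill_diagonal(adj, 0)
out_deg = np.maximum(adj.sum(axis=1), 1.0)

def pagerank(n_iter=50, perforation=0.0, d=0.85):
    pr = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        active = rng.random(n) >= perforation      # nodes updated this sweep
        contrib = adj.T @ (pr / out_deg)
        pr_new = (1 - d) / n + d * contrib
        pr = np.where(active, pr_new, pr)          # skipped nodes keep old value
    return pr / pr.sum()

exact = pagerank(perforation=0.0)
approx = pagerank(perforation=0.3)                 # skip ~30% of node updates
err = np.abs(exact - approx).sum()
print(f"L1 error with 30% perforation: {err:.4e}")
```

Because PageRank is a contraction, the perforated iterates stay close to the exact fixed point, which is the intuition behind the small accuracy loss the paper reports.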
Practical techniques for pediatric computed tomography
International Nuclear Information System (INIS)
Fitz, C.R.; Harwood-Nash, D.C.; Kirks, D.R.; Kaufman, R.A.; Berger, P.E.; Kuhn, J.P.; Siegel, M.J.
1983-01-01
Dr. Donald Kirks has assembled this section on Practical Techniques for Pediatric Computed Tomography. The material is based on a presentation in the Special Interest session at the 25th Annual Meeting of the Society for Pediatric Radiology in New Orleans, Louisiana, USA in 1982. Meticulous attention to detail and technique is required to ensure an optimal CT examination. CT techniques specifically applicable to infants and children have not been disseminated in the radiology literature, and in this respect it may rightly be observed that "the child is not a small adult". What follows is a "cookbook" prepared by seven participants; it is printed in Pediatric Radiology, in outline form, as a statement of individual preferences for pediatric CT techniques. This outline gives a concise explanation of techniques and permits prompt dissemination of information. (orig.)
Operator support system using computational intelligence techniques
Energy Technology Data Exchange (ETDEWEB)
Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules, and it is typical of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system shows the information obtained by different CI techniques in order to help operators make decisions in real time and to guide them in the fault diagnosis before the normal alarm limits are reached. (author)
Computed Radiography: An Innovative Inspection Technique
International Nuclear Information System (INIS)
Klein, William A.; Councill, Donald L.
2002-01-01
Florida Power and Light Company's (FPL) Nuclear Division combined two diverse technologies to create an innovative inspection technique, Computed Radiography, that improves personnel safety and unit reliability while reducing inspection costs. This technique was pioneered in the medical field and was initially applied in the Nuclear Division to detect piping degradation due to flow-accelerated corrosion; further component degradation can also be detected by this technique. This approach permits FPL to reduce inspection costs, to perform on-line examinations (no generation curtailment), and to maintain or improve both personnel safety and unit reliability. Computed Radiography is a very versatile tool capable of other uses: improving the external corrosion program by permitting inspections underneath insulation, and diagnosing system and component problems, such as valve positions, without the need to shut down or disassemble the component. (authors)
International Nuclear Information System (INIS)
Lima da Silva, Aline; Heck, Nestor Cesar
2003-01-01
Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment shows several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr₂O₃(s) and FeO·Cr₂O₃(s). The atmosphere consists of O₂, CO and CO₂ constituents. The liquid iron…
Computational Intelligence Techniques for New Product Design
Chan, Kit Yan; Dillon, Tharam S
2012-01-01
Applying computational intelligence to product design is a fast-growing and promising research area in computer science and industrial engineering. However, there is currently a lack of books which discuss this research area. This book discusses a wide range of computational intelligence techniques for implementation in product design. It covers common issues in product design, from the identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, and relating design attributes to customer satisfaction, through the integration of marketing aspects into product design and affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed in order to address different issues in product design. Case studies of product design, in terms of the development of real-world new products, are included in order to illustrate the design procedures as well as the effectiveness of the com...
Computing multi-species chemical equilibrium with an algorithm based on the reaction extents
DEFF Research Database (Denmark)
Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.
2013-01-01
A mathematical model for the solution of a set of chemical equilibrium equations in a multi-species and multiphase chemical system is described. The computer-aided solution of the model is achieved by means of a Newton-Raphson method enhanced with a line-search scheme, which deals with the non-negative constraints. The residual function, representing the distance to the equilibrium, is defined from the chemical potential (or Gibbs energy) of the chemical system. Local minima are potentially avoided by the prioritization of the aqueous reactions with respect to the heterogeneous reactions. The formation and release of gas bubbles is taken into account in the model, limiting the concentration of volatile aqueous species to a maximum value given by the gas solubility constant. The reaction extents are used as state variables for the numerical method. As a result, the accepted solution satisfies the charge...
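A rough illustration of the numerical core described above, Newton-Raphson on a reaction extent with a backtracking line search that guards non-negativity, is sketched below for a single assumed reaction A <-> B + C; the initial amounts and equilibrium constant are not the paper's chemistry.

```python
# Minimal sketch: Newton-Raphson on the reaction extent with a backtracking
# line search that keeps all amounts positive. Single illustrative reaction
# A <-> B + C with assumed data.
import math

K = 1e-3                     # assumed equilibrium constant
n0 = {"A": 1.0, "B": 0.0, "C": 0.0}

def conc(xi):                # amounts as a function of the reaction extent
    return n0["A"] - xi, n0["B"] + xi, n0["C"] + xi

def residual(xi):            # distance to equilibrium in log form
    a, b, c = conc(xi)
    return math.log(b * c / a) - math.log(K)

def dresidual(xi):           # analytic derivative of the residual
    a, b, c = conc(xi)
    return 1.0 / b + 1.0 / c + 1.0 / a

xi = 1e-6                    # small positive start keeps logs defined
for _ in range(50):
    f = residual(xi)
    if abs(f) < 1e-10:
        break
    step = -f / dresidual(xi)
    while not (0 < xi + step < n0["A"]):      # keep all amounts positive
        step *= 0.5
    while abs(residual(xi + step)) > abs(f):  # backtracking line search
        step *= 0.5
    xi += step
print(f"extent = {xi:.6e}, residual = {residual(xi):.2e}")
```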
New computing techniques in physics research
International Nuclear Information System (INIS)
Becks, Karl-Heinz; Perret-Gallix, Denis
1994-01-01
New techniques were highlighted by the ''Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics'' in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at France-Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new techniques in computing, in hardware as well as in software. Software Engineering Techniques could in principle satisfy the needs for forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; Expert Systems seem to offer a way of assisting the experimental crew during data-taking
A new algorithm to compute conjectured supply function equilibrium in electricity markets
International Nuclear Information System (INIS)
Diaz, Cristian A.; Villar, Jose; Campos, Fco Alberto; Rodriguez, M. Angel
2011-01-01
Several types of market equilibria approaches, such as Cournot, Conjectural Variation (CVE), Supply Function (SFE) or Conjectured Supply Function (CSFE) have been used to model electricity markets for the medium and long term. Among them, CSFE has been proposed as a generalization of the classic Cournot. It computes the equilibrium considering the reaction of the competitors against changes in their strategy, combining several characteristics of both CVE and SFE. Unlike linear SFE approaches, strategies are linearized only at the equilibrium point, using their first-order Taylor approximation. But to solve CSFE, the slope or the intercept of the linear approximations must be given, which has been proved to be very restrictive. This paper proposes a new algorithm to compute CSFE. Unlike previous approaches, the main contribution is that the competitors' strategies for each generator are initially unknown (both slope and intercept) and endogenously computed by this new iterative algorithm. To show the applicability of the proposed approach, it has been applied to several case examples where its qualitative behavior has been analyzed in detail. (author)
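The iterative idea can be sketched in its simplest special case. Since CSFE generalizes the classic Cournot model, the toy below runs sequential best-response (diagonalization) iterations to a Cournot equilibrium under assumed linear demand and cost data; the paper's actual algorithm additionally updates the slope and intercept of each linearized strategy endogenously.

```python
# Minimal sketch of the iterative (diagonalization) idea behind equilibrium
# computation, shown for the Cournot special case that CSFE generalizes:
# each generator best-responds to rivals' latest outputs until a fixed point.
# Demand and cost parameters are assumptions for illustration only.
a, s = 100.0, 1.0             # inverse demand p = a - s * Q (assumed)
c = [10.0, 20.0, 30.0]        # constant marginal costs (assumed)

q = [0.0] * len(c)
for it in range(1000):
    diff = 0.0
    for i in range(len(c)):
        rivals = sum(q) - q[i]
        best = max(0.0, (a - c[i] - s * rivals) / (2.0 * s))  # best response
        diff = max(diff, abs(best - q[i]))
        q[i] = best
    if diff < 1e-10:          # fixed point: no player wants to deviate
        break

p = a - s * sum(q)
print(f"iterations={it}, q={[round(x, 3) for x in q]}, price={p:.2f}")
```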
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
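As a flavour of the approach, the sketch below applies the Newton (Kleinman) iteration to a single algebraic Riccati equation, the one-player analogue of the coupled generalised Riccati systems treated in the paper; the system matrices are toy assumptions.

```python
# Minimal sketch of the Newton (Kleinman) iteration for one algebraic
# Riccati equation: alternate a Lyapunov solve for the current policy with
# a policy-improvement step. Toy system matrices, not the paper's games.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # open-loop stable here
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

K = np.zeros((1, 2))                        # initial stabilising policy
for _ in range(30):
    Acl = A - B @ K
    # Solve Acl' P + P Acl = -(Q + K' R K) for the current policy
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K_new = np.linalg.solve(R, B.T @ P)     # policy improvement step
    if np.linalg.norm(K_new - K) < 1e-12:
        break
    K = K_new
print("stabilising P =\n", P)
```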
Stabilization of emission of CO2: A computable general equilibrium assessment
International Nuclear Information System (INIS)
Glomsroed, S.; Vennemo, H.; Johnsen, T.
1992-01-01
A multisector computable general equilibrium model is used to study economic development perspectives in Norway if CO 2 emissions were stabilized. The effects discussed include impacts on main macroeconomic indicators and economic growth, sectoral allocation of production, and effects on the market for energy. The impact of other pollutants than CO 2 on emissions is assessed along with the related impact on noneconomic welfare. The results indicate that CO 2 emissions might be stabilized in Norway without dramatically reducing economic growth. Sectoral allocation effects are much larger. A substantial reduction in emissions to air other than CO 2 is found, yielding considerable gains in noneconomic welfare. 25 refs., 6 tabs., 2 figs
Evolutionary computation techniques a comparative perspective
Cuevas, Erik; Oliva, Diego
2017-01-01
This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to that of other popular EC methods.
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
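The interpolation strategy (temperature curve fits tabulated at discrete pressures, with interpolation in between, here linear in log p) can be sketched as follows; the polynomial coefficients are placeholders, not the NASA curve fits.

```python
# Minimal sketch of the strategy described: properties come from temperature
# curve fits at discrete pressures, and intermediate pressures are handled
# by interpolation (here linear in log p). Coefficients are placeholders.
import numpy as np

pressures = np.array([1e-4, 1e-3, 1e-2])           # atm, discrete fit points
coeffs = {p: np.array([7.5 + i, 1e-4, -1e-9])      # assumed c0 + c1*T + c2*T^2
          for i, p in enumerate(pressures)}

def prop(T, p):
    c = {q: np.polyval(coeffs[q][::-1], T) for q in pressures}
    if p <= pressures[0]:  return c[pressures[0]]
    if p >= pressures[-1]: return c[pressures[-1]]
    k = np.searchsorted(pressures, p) - 1           # bracketing fit pressures
    lo, hi = pressures[k], pressures[k + 1]
    w = (np.log(p) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return (1 - w) * c[lo] + w * c[hi]

print(prop(8000.0, 3e-3))   # query between tabulated pressures
```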
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state-equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state-equation model and the curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary-layer prediction on a flat plate. It is observed that steady-state solutions from the state-equation model and the curve fits match each other. Though the curve fits are significantly faster, the state-equation model is more general and can be adapted to any flow composition.
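Wilke's mixing rule, one of the two rules compared in the paper, has a compact closed form; a minimal sketch with assumed two-species data follows.

```python
# Minimal sketch of Wilke's semi-empirical mixing rule for mixture viscosity.
# Species data (an N2/O2 mixture at an arbitrary state) are placeholders.
import numpy as np

x  = np.array([0.79, 0.21])        # mole fractions
mu = np.array([1.66e-5, 1.92e-5])  # species viscosities, Pa*s (assumed)
M  = np.array([28.0, 32.0])        # molar masses, g/mol

def wilke(x, mu, M):
    mu_mix = 0.0
    for i in range(len(x)):
        denom = 0.0
        for j in range(len(x)):
            phi = ((1 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2
                   / np.sqrt(8 * (1 + M[i] / M[j])))
            denom += x[j] * phi
        mu_mix += x[i] * mu[i] / denom
    return mu_mix

print(f"mixture viscosity = {wilke(x, mu, M):.3e} Pa*s")
```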
Computer technique for evaluating collimator performance
International Nuclear Information System (INIS)
Rollo, F.D.
1975-01-01
A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is applied to subroutine computer programs which compute corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict from the physical parameters of the collimator alone how well the collimator will reproduce various sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
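The LSF-to-MTF step described above reduces to a normalised Fourier transform of the line spread function; a minimal sketch with an assumed Gaussian LSF follows.

```python
# Minimal sketch of the LSF -> MTF step: the modulation transfer function is
# the magnitude of the Fourier transform of the line spread function,
# normalised at zero frequency. A Gaussian LSF stands in for the
# collimator-derived one.
import numpy as np

dx = 0.1                                   # sample spacing, cm
x = np.arange(-5, 5, dx)
lsf = np.exp(-x**2 / (2 * 0.4**2))         # assumed Gaussian LSF, sigma = 0.4 cm

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                              # normalise to unity at f = 0
freqs = np.fft.rfftfreq(len(x), d=dx)      # spatial frequencies, cycles/cm

for f, m in list(zip(freqs, mtf))[:5]:
    print(f"{f:5.2f} cycles/cm -> MTF = {m:.3f}")
```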
Bayer Digester Optimization Studies using Computer Techniques
Kotte, Jan J.; Schleider, Victor H.
Theoretically required heat transfer performance by the multistaged flash heat reclaim system of a high pressure Bayer digester unit is determined for various conditions of discharge temperature, excess flash vapor and indirect steam addition. Solution of simultaneous heat balances around the digester vessels and the heat reclaim system yields the magnitude of available heat for representation of each case on a temperature-enthalpy diagram, where graphical fit of the number of flash stages fixes the heater requirements. Both the heat balances and the trial-and-error graphical solution are adapted to solution by digital computer techniques.
Measuring techniques in emission computed tomography
International Nuclear Information System (INIS)
Jordan, K.; Knoop, B.
1988-01-01
The chapter reviews the historical development of the emission computed tomography and its basic principles, proceeds to SPECT and PET, special techniques of emission tomography, and concludes with a comprehensive discussion of the mathematical fundamentals of the reconstruction and the quantitative activity determination in vivo, dealing with radon transformation and the projection slice theorem, methods of image reconstruction such as analytical and algebraic methods, limiting conditions in real systems such as limited number of measured data, noise enhancement, absorption, stray radiation, and random coincidence. (orig./HP) With 111 figs., 6 tabs [de
Mathematics in computed tomography and related techniques
International Nuclear Information System (INIS)
Sawicka, B.
1992-01-01
The mathematical basis of computed tomography (CT) was formulated in 1917 by Radon. His theorem states that the 2-D function f(x,y) can be determined at all points from a complete set of its line integrals. Modern methods of image reconstruction include three approaches: algebraic reconstruction techniques with simultaneous iterative reconstruction or simultaneous algebraic reconstruction; convolution back projection; and the Fourier transform method. There is no one best approach. Because the experimental data do not strictly satisfy theoretical models, a number of effects have to be taken into account; in particular, the problems of beam geometry, finite beam dimensions and distribution, beam scattering, and the radiation source spectrum. Tomography with truncated data is of interest, employing mathematical approximations to compensate for the unmeasured projection data. Mathematical techniques in image processing and data analysis are also extensively used. 13 refs
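One of the listed approaches, the algebraic reconstruction technique, can be sketched with Kaczmarz's method: the image estimate is successively projected onto the hyperplane of each ray-sum equation. The matrix below is a toy stand-in for a real CT system matrix.

```python
# Minimal sketch of an algebraic reconstruction technique (Kaczmarz's
# method) on a toy consistent system; A is not a real CT system matrix.
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(16)            # 4x4 'image', flattened
A = rng.random((40, 16))           # toy ray-sum (projection) matrix
b = A @ x_true                     # measured projections

x = np.zeros(16)
for sweep in range(50):
    for i in range(A.shape[0]):    # one Kaczmarz projection per ray
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```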
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function code (NEGF) TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 106 atoms on workstation computers. The new features of both codes are demonstrated and bench-marked for relevant test systems.
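A generic sketch of the central NEGF quantity such codes evaluate, the transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger], is given below for a 1D tight-binding chain with analytic lead self-energies; it uses none of TRANSIESTA's or TBTRANS's actual interfaces.

```python
# Generic NEGF transmission sketch for a perfect 1D tight-binding chain
# between two semi-infinite leads; expect T(E) = 1 inside the band.
import numpy as np

t = 1.0                                        # nearest-neighbour hopping
N = 4                                          # device sites
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))    # device Hamiltonian

def lead_self_energy(E, t):
    # Retarded surface Green function of a semi-infinite chain, |E| < 2t
    g = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
    return t**2 * g                            # coupled through one hopping

for E in (-1.5, 0.0, 1.5):
    SL = np.zeros((N, N), complex)
    SR = np.zeros((N, N), complex)
    SL[0, 0] = lead_self_energy(E, t)
    SR[-1, -1] = lead_self_energy(E, t)
    G = np.linalg.inv(E * np.eye(N) - H - SL - SR)      # retarded Green fn
    GamL = 1j * (SL - SL.conj().T)
    GamR = 1j * (SR - SR.conj().T)
    T = np.trace(GamL @ G @ GamR @ G.conj().T).real     # Landauer transmission
    print(f"E = {E:+.1f}: T(E) = {T:.3f}")
```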
Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan
International Nuclear Information System (INIS)
Naqvi, Farzana
1997-01-01
In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan with the hope that it will lay a foundation for the application of general equilibrium modelling to policy formation in Pakistan. As the country is being driven swiftly to become an open market economy, it becomes vital to find out the policy measures that can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from open-market resource allocation. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered one of the basic needs and an essential input to economic growth; hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. The book thus also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)
Czech Academy of Sciences Publication Activity Database
Červinka, Michal
2010-01-01
Vol. 2010, No. 4 (2010), pp. 730-753. ISSN 0023-5954. Institutional research plan: CEZ:AV0Z10750506. Keywords: equilibrium problems with complementarity constraints * homotopy * C-stationarity. Subject RIV: BC - Control Systems Theory. Impact factor: 0.461, year: 2010. http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf
Soil-water characteristics of Gaomiaozi bentonite by vapour equilibrium technique
Directory of Open Access Journals (Sweden)
Wenjing Sun
2014-02-01
Soil-water characteristics of Gaomiaozi (GMZ) Ca-bentonite at high suctions (3–287 MPa) are measured by the vapour equilibrium technique. The soil-water retention curve (SWRC) of samples with the same initial compaction state is obtained in drying and wetting processes. At high suctions, hysteresis is not obvious in the relationship between water content and suction, while the opposite holds between degree of saturation and suction. Suction variation can change the water retention behaviour and the void ratio; moreover, changes of void ratio can bring about changes in degree of saturation. Therefore, the total change in degree of saturation includes the change caused by suction and that caused by void ratio. In the space of degree of saturation and suction, the SWRC at constant void ratio shifts towards higher suctions with decreasing void ratio. However, the relationship between water content and suction is less affected by changes of void ratio. The degree of saturation decreases approximately linearly with increasing void ratio at a constant suction. Moreover, the slope of this line decreases with increasing suction, and the two show an approximately linear relationship on a semi-logarithmic scale. From this linear relationship, the variation of degree of saturation caused by the change in void ratio can be obtained. Correspondingly, the SWRC at a constant void ratio can be determined from SWRCs at different void ratios.
Njoya, Eric Tchouamou; Seetaram, Neelu
2017-01-01
The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya, using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on the poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor. PMID:29595836
Energy Technology Data Exchange (ETDEWEB)
Stephan, G.; Van Nieuwkoop, R.; Wiedmer, T. (Institute for Applied Microeconomics, Univ. of Bern (Switzerland))
1992-01-01
Both distributional and allocational effects of limiting carbon dioxide emissions in a small and open economy are discussed. It starts from the assumption that Switzerland attempts to stabilize its greenhouse gas emissions over the next 25 years, and evaluates costs and benefits of the respective reduction programme. From a methodological viewpoint, it is illustrated how a computable general equilibrium approach can be adopted for identifying economic effects of cutting greenhouse gas emissions on the national level. From a political economy point of view it considers the social incidence of a greenhouse policy. It shows in particular that public acceptance can be increased and economic costs of greenhouse policies can be reduced, if carbon taxes are accompanied by revenue redistribution. 8 tabs., 1 app., 17 refs.
Zero-rating food in South Africa: A computable general equilibrium analysis
Directory of Open Access Journals (Sweden)
M Kearney
2004-04-01
Zero-rating food is considered to alleviate poverty of poor households, who spend the largest proportion of their income on food. However, this will result in a loss of revenue for government. A Computable General Equilibrium (CGE) model is used to analyze the combined effects of zero-rating food and using alternative revenue sources to compensate for the loss in revenue. To avoid excessively high increases in the statutory VAT rates on business and financial services, two alternatives are investigated: increasing direct taxes or increasing VAT to 16 per cent. Increasing direct taxes is the most successful option when creating a more progressive tax structure, while still generating a positive impact on GDP. The results indicate that zero-rating food combined with a proportional percentage increase in direct taxes can improve the welfare of poor households.
Energy Technology Data Exchange (ETDEWEB)
Boero, Riccardo [Los Alamos National Laboratory; Edwards, Brian Keith [Los Alamos National Laboratory
2017-08-07
Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model degree of accuracy and the assessed total damage caused by Hurricane Sandy.
International Nuclear Information System (INIS)
Qin Changbo; Jia Yangwen; Wang Hao; Bressers, Hans T A; Su, Z
2011-01-01
In this letter, we apply an extended environmental dynamic computable general equilibrium model to assess the economic consequences of implementing a total emission control policy. On the basis of emission levels in 2007, we simulate different emission reduction scenarios, ranging from 20 to 50% emission reduction, up to the year 2020. The results indicate that a modest total emission reduction target in 2020 can be achieved at low macroeconomic cost. As the stringency of policy targets increases, the macroeconomic cost will increase at a rate faster than linear. Implementation of a tradable emission permit system can counterbalance the economic costs affecting the gross domestic product and welfare. We also find that a stringent environmental policy can lead to an important shift in production, consumption and trade patterns from dirty sectors to relatively clean sectors.
HUBBLE-BUBBLE 1. A computer program for the analysis of non-equilibrium flows of water
International Nuclear Information System (INIS)
Mather, D.J.
1978-02-01
A description is given of the computer program HUBBLE-BUBBLE I which simulates the non-equilibrium flow of water and steam in a pipe. The code is designed to examine the transient flow developing in a pipe containing hot compressed water following the rupture of a retaining diaphragm. Allowance is made for an area change in the pipe. Particular attention is paid to the non-equilibrium development of vapour bubbles and to the transition from a bubble-liquid regime to a droplet-vapour regime. The mathematical and computational model is described together with a summary of the FORTRAN subroutines and listing of data input. (UK)
International Nuclear Information System (INIS)
Rouet, J.L.; Feix, M.R.
1996-01-01
The test particle picture is a central theory of weakly correlated plasmas. While experiments and computer experiments have confirmed the validity of this theory at thermal equilibrium, its extension to meta-equilibrium distributions presents interesting and intriguing points connected to the under- or over-population of the tails of these distributions (high velocity), which have not yet been tested. Moreover, the general dynamical Debye cloud (a generalization of the static Debye cloud, which supposes a plasma at thermal equilibrium and a test particle of zero velocity) is presented for any test particle velocity and for three typical velocity distributions (equilibrium plus two meta-equilibriums). The simulations deal with a one-dimensional two-component plasma, and the relevance of the check for real three-dimensional plasmas is outlined. Two kinds of results are presented: the dynamical cloud itself and the more usual density (or energy) fluctuation spectra. Special attention is paid to the behavior of long wavelengths, which requires long systems with very small graininess effects and, consequently, sizable computational effort. Finally, the divergence or absence of energy at small wave numbers, connected to the excess or lack of fast particles in the two above-mentioned meta-equilibriums, is exhibited. copyright 1996 American Institute of Physics
Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)
2014-04-01
This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC improved the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.
Parallel computing techniques for rotorcraft aerodynamics
Ekici, Kivanc
The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, major modifications are made to the implicit operator, Lower-Upper Symmetric Gauss-Seidel (LU-SGS), originally used in TURNS. Second, an inexact Newton method coupled with a Krylov subspace iterative method (Newton-Krylov method) is applied. Both techniques had been tried previously for the Euler mode of the code; in this work, the methods are extended to the Navier-Stokes mode. Several new implicit operators were tried because of the convergence problems of traditional operators on the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results are presented for these operators for both Euler and Navier-Stokes cases. For the efficient application of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used; the parallel implicit operators developed in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
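The Newton-Krylov idea, an inexact Newton outer iteration whose linear solves need only residual evaluations, can be sketched on a toy nonlinear system with scipy's newton_krylov; this illustrates the method, not the TURNS implementation.

```python
# Minimal sketch of a Jacobian-free Newton-Krylov solve on a toy nonlinear
# system (a 1D reaction-diffusion residual), via scipy's newton_krylov.
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    # Discrete -u'' + u^3 = 1 with homogeneous Dirichlet boundaries
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2 * u + upad[2:]) / h**2
    return -lap + u**3 - 1.0

u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
print("max |residual| =", np.max(np.abs(residual(u))))
```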
Schu, Kathryn L.
Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements
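The Monte Carlo step can be sketched compactly: draw an elasticity from its estimated sampling distribution and propagate each draw through a CES unit cost function. All numbers below are illustrative assumptions, not the thesis's estimates.

```python
# Minimal sketch of propagating econometric parameter uncertainty through a
# CES unit cost function by Monte Carlo sampling. All values are assumed.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([1.0, 1.2, 0.8, 1.1])        # prices of K, L, E, M inputs
alpha = np.array([0.3, 0.4, 0.1, 0.2])    # CES share parameters

sigma_hat, se = 0.6, 0.1                  # assumed estimate and std. error
draws = rng.normal(sigma_hat, se, 1000)
draws = draws[(draws > 0.01) & (abs(draws - 1.0) > 0.01)]  # admissible sigma

def ces_unit_cost(sigma):
    # Dual unit cost of a CES technology: (sum_i alpha_i^s p_i^(1-s))^(1/(1-s))
    return (alpha**sigma @ p**(1 - sigma)) ** (1 / (1 - sigma))

costs = np.array([ces_unit_cost(s) for s in draws])
print(f"unit cost: mean={costs.mean():.4f}, 5-95% = "
      f"[{np.percentile(costs, 5):.4f}, {np.percentile(costs, 95):.4f}]")
```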
New computing techniques in physics research
International Nuclear Information System (INIS)
Perret-Gallix, D.; Wojcik, W.
1990-01-01
These proceedings relate in a pragmatic way the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems that are among the most complex ever undertaken by mankind. The use of these new methods is mandatory in such an environment. However, their proper integration in these real applications raises some unsolved problems, whose solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structure and management; object-oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes.
Glass, Christopher E.
1990-08-01
The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-specie, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.
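For the calorically perfect air model, the state behind each interfering shock follows from the standard oblique-shock relations; a minimal sketch with illustrative freestream values (not EASI itself) follows.

```python
# Minimal sketch of the calorically perfect-gas oblique-shock relations that
# underlie one of EASI's two air models. Freestream values are illustrative.
import math

def oblique_shock(M1, beta, gamma=1.4):
    """Post-shock state for shock angle beta (rad); perfect gas."""
    Mn1 = M1 * math.sin(beta)                       # normal Mach number
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (Mn1**2 - 1)
    r_ratio = (gamma + 1) * Mn1**2 / ((gamma - 1) * Mn1**2 + 2)
    Mn2 = math.sqrt((1 + 0.5 * (gamma - 1) * Mn1**2)
                    / (gamma * Mn1**2 - 0.5 * (gamma - 1)))
    theta = math.atan(2 / math.tan(beta) * (Mn1**2 - 1)
                      / (M1**2 * (gamma + math.cos(2 * beta)) + 2))
    M2 = Mn2 / math.sin(beta - theta)               # post-shock Mach number
    return theta, M2, p_ratio, r_ratio

theta, M2, pr, rr = oblique_shock(M1=8.0, beta=math.radians(20.0))
print(f"deflection = {math.degrees(theta):.2f} deg, M2 = {M2:.2f}, "
      f"p2/p1 = {pr:.2f}, rho2/rho1 = {rr:.2f}")
```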
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula of the BI-MART with the scaling parameter as a time-step of numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also the Runge-Kutta methods of lower-orders applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
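The discrete update that the paper recasts in continuous time is the multiplicative MART step, in which each estimate component is rescaled by powers of measured-to-computed projection ratios; a toy sketch on an assumed consistent system follows, with the relaxation parameter playing the role of the time step.

```python
# Minimal sketch of the multiplicative MART update on a toy consistent
# system (one equation per block for simplicity; not a real CT geometry).
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.uniform(0.5, 1.5, 9)       # 3x3 'image', flattened
A = rng.uniform(0.0, 1.0, (30, 9))      # toy non-negative projection matrix
b = A @ x_true

x = np.ones(9)                          # positive initial estimate
lam = 0.5                               # relaxation, the 'time step' analogue
for sweep in range(200):
    for i in range(A.shape[0]):
        ratio = b[i] / (A[i] @ x)       # measured / computed projection
        x *= ratio ** (lam * A[i])      # multiplicative update keeps x > 0
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```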
Kahsay, T.N.; Kuik, O.J.; Brouwer, R.; van der Zaag, P.
2015-01-01
Employing a multi-region multi-sector computable general equilibrium (CGE) modeling framework, this study estimates the direct and indirect economic impacts of the Grand Ethiopian Renaissance Dam (GERD) on the Eastern Nile economies. The study contributes to the existing literature by evaluating the
Computable general equilibrium modelling in the context of trade and environmental policy
Energy Technology Data Exchange (ETDEWEB)
Koesler, Simon Tobias
2014-10-14
This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.
China’s Rare Earths Supply Forecast in 2025: A Dynamic Computable General Equilibrium Analysis
Directory of Open Access Journals (Sweden)
Jianping Ge
2016-09-01
The supply of rare earths in China has been the focus of significant attention in recent years. Due to changes in regulatory policies and the development of strategic emerging industries, it is critical to investigate the scenario of rare earth supplies in 2025. To address this question, this paper constructed a dynamic computable general equilibrium (DCGE) model to forecast the production, domestic supply, and export of China's rare earths in 2025. Based on our analysis, production will increase by 10.8%–12.6% and achieve 116,335–118,260 tons of rare-earth oxide (REO) in 2025, based on recent extraction control during 2011–2016. Moreover, domestic supply and export will be 75,081–76,800 tons REO and 38,797–39,400 tons REO, respectively. Technological improvements in substitution and recycling will significantly decrease the supply of and mining activities for rare earths. From a policy perspective, we found that the elimination of export regulations, including export quotas and export taxes, does have a negative impact on China's future domestic supply of rare earths. Policy conflicts between increased investment in strategic emerging industries and increased resource and environmental taxes on rare earths will also affect China's rare earths supply in the future.
International Nuclear Information System (INIS)
Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.
2006-01-01
In the midst of the institutional reforms of the Brazilian electric sector initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackouts, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing the thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional electricity sector and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. Electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model is used to study the effects of the electricity shortage in the preexisting sector through price, production and income changes. It is shown that installing capacity to generate electricity surpluses in the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP).
The Computation of Nash Equilibrium in Fashion Games via Semi-Tensor Product Method
Institute of Scientific and Technical Information of China (English)
GUO Peilian; WANG Yuzhen
2016-01-01
Using the semi-tensor product of matrices, this paper investigates the computation of pure-strategy Nash equilibrium (PNE) for fashion games, and presents several new results. First, a formal fashion game model on a social network is given. Second, the utility function of each player is converted into an algebraic form via the semi-tensor product of matrices, based on which the two-strategy fashion game is studied and two methods are obtained to verify the existence of PNE. Third, the multi-strategy fashion game model is investigated and an algorithm is established to find all the PNEs in the general case. Finally, two kinds of optimization problems, namely the so-called social welfare and normalized satisfaction degree optimization problems, are investigated and two useful results are given. The study of several illustrative examples shows that the new results obtained in this paper are effective.
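For a small network the PNE set can also be found by plain enumeration, a useful cross-check on the algebraic machinery; the sketch below assumes an illustrative four-player network of conformists (who want to match neighbours) and rebels (who want to differ).

```python
# Minimal sketch of exhaustive pure-strategy Nash equilibrium (PNE) search
# for a two-strategy fashion game on a small assumed network. (The paper's
# semi-tensor product machinery reaches the same set algebraically.)
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # assumed social network
kind = ["conformist", "rebel", "conformist", "rebel"]

def utility(i, s):
    nbrs = [v for u, v in edges if u == i] + [u for u, v in edges if v == i]
    same = sum(s[j] == s[i] for j in nbrs)
    return same if kind[i] == "conformist" else len(nbrs) - same

def is_pne(s):
    for i in range(len(s)):
        flipped = s[:i] + (1 - s[i],) + s[i + 1:]
        if utility(i, flipped) > utility(i, s):    # profitable deviation
            return False
    return True

pnes = [s for s in product((0, 1), repeat=4) if is_pne(s)]
print("pure Nash equilibria:", pnes)
```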
Computable general equilibrium models for sustainability impact assessment: Status quo and prospects
International Nuclear Information System (INIS)
Boehringer, Christoph; Loeschel, Andreas
2006-01-01
Sustainability Impact Assessment (SIA) of economic, environmental, and social effects triggered by governmental policies has become a central requirement for policy design. The three dimensions of SIA are inherently intertwined and subject to trade-offs. Quantification of trade-offs for policy decision support requires numerical models in order to assess systematically the interference of complex interacting forces that affect economic performance, environmental quality, and social conditions. This paper investigates the use of computable general equilibrium (CGE) models for measuring the impacts of policy interference on policy-relevant economic, environmental, and social (institutional) indicators. We find that operational CGE models used for energy-economy-environment (E3) analyses have a good coverage of central economic indicators. Environmental indicators such as energy-related emissions with direct links to economic activities are widely covered, whereas indicators with complex natural science background such as water stress or biodiversity loss are hardly represented. Social indicators stand out for very weak coverage, mainly because they are vaguely defined or incommensurable. Our analysis identifies prospects for future modeling in the field of integrated assessment that link standard E3-CGE-models to theme-specific complementary models with environmental and social focus. (author)
Directory of Open Access Journals (Sweden)
Guohua Fang
2016-09-01
To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from heavy pollution to light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.
Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden
International Nuclear Information System (INIS)
Hill, M.
2001-01-01
This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues. The costs and benefits of a domestic carbon dioxide (CO 2 ) tax reform. Special attention is given to how these costs and benefits depend on the structure of the tax system and, furthermore, how they depend on policy-induced changes in 'secondary' pollutants. The effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO 2 stock terms. The effects on long-term projected economic growth and welfare that are due to damages from emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework
Transition towards a low carbon economy: A computable general equilibrium analysis for Poland
International Nuclear Information System (INIS)
Böhringer, Christoph; Rutherford, Thomas F.
2013-01-01
In the transition to sustainable economic structures the European Union assumes a leading role with its climate and energy package, which sets ambitious greenhouse gas emission reduction targets by 2020. Among EU Member States, Poland, with its heavy energy system reliance on coal, is particularly worried about the pending trade-offs between emission regulation and economic growth. In our computable general equilibrium analysis of the EU climate and energy package we show that economic adjustment costs for Poland hinge crucially on restrictions to where-flexibility of emission abatement, revenue recycling, and technological options in the power system. We conclude that more comprehensive flexibility provisions at the EU level and a diligent policy implementation at the national level could achieve the transition towards a low carbon economy at little cost, thereby broadening societal support. - Highlights: ► Economic impact assessment of the EU climate and energy package for Poland. ► Sensitivity analysis on where-flexibility, revenue recycling and technology choice. ► Application of a hybrid bottom-up, top-down CGE model
On techniques of ATR lattice computation
International Nuclear Information System (INIS)
1997-08-01
Lattice computation determines the average nuclear constants of a unit fuel lattice, which are required for computing core nuclear characteristics such as the core power distribution and reactivity characteristics. The main nuclear constants are the infinite multiplication factor, the neutron migration area, cross sections for diffusion computation, the local power distribution and the isotope composition. For the lattice computation code, WIMS-ATR is used; it is based on the WIMS-D code developed in the U.K. and, to improve the accuracy of the analysis, was enhanced with a heavy-water scattering cross section whose temperature dependence follows the Honeck model. For the computation of neutron absorption by control rods, the LOIEL BLUE code is used. The extrapolation distance of the neutron flux on control rod surfaces is computed using the THERMOS and DTF codes, and the lattice constants of adjoining lattices are computed using the WIMS-ATR code. The computation flow and nuclear data library of the WIMS-ATR code, and the computation flow of the LOIEL BLUE code, are explained. The local power distribution in fuel assemblies determined by the WIMS-ATR code was verified against measured data, and the results are reported. (K.I.)
Methods and experimental techniques in computer engineering
Schiaffonati, Viola
2014-01-01
Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.
International Nuclear Information System (INIS)
Broyd, T.W.; McD Grant, M.; Cross, J.E.
1985-01-01
This report describes two intercomparison studies of computer programs which respectively model: i) radionuclide migration ii) equilibrium chemistry of groundwaters. These studies have been performed by running a series of test cases with each program and comparing the various results obtained. The work forms a part of the CEC MIRAGE project (MIgration of RAdionuclides in the GEosphere) and has been jointly funded by the CEC and the United Kingdom Department of the Environment. Presentations of the material contained herein were given at plenary meetings of the MIRAGE project in Brussels in March, 1984 (migration) and March, 1985 (equilibrium chemistry) respectively
Sozen, Mehmet
2003-01-01
In what follows, the model used for the combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, will be described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
Tang, Yifeng; Akhavan, Rayhaneh
2014-11-01
A nested-LES wall-modeling approach for high Reynolds number, wall-bounded turbulence is presented. In this approach, a coarse-grained LES is performed in the full domain, along with a nested, fine-resolution LES in a minimal flow unit. The coupling between the two domains is achieved by renormalizing the instantaneous LES velocity fields to match the profiles of the kinetic energies of the components of the mean velocity and velocity fluctuations in both domains to those of the minimal flow unit in the near-wall region, and to those of the full domain in the outer region. The method is of fixed computational cost, independent of Reτ, in homogeneous flows, and is O(Reτ) in strongly non-homogeneous flows. The method has been applied to equilibrium turbulent channel flows at Reτ ~ 1000 and above, and to shear-driven, 3D turbulent channel flow at Reτ ~ 2000. In equilibrium channel flow, the friction coefficient and the one-point turbulence statistics are predicted in agreement with Dean's correlation and available DNS and experimental data. In shear-driven, 3D channel flow, the evolution of the turbulence statistics is predicted in agreement with the experimental data of Driver & Hebbar (1991) for shear-driven, 3D boundary layer flow.
New Information Dispersal Techniques for Trustworthy Computing
Parakh, Abhishek
2011-01-01
Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…
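A minimal sketch of the (n, k) dispersal idea, in which any k of n shares suffice to reconstruct the data, is given below using a Vandermonde matrix over GF(257) and sympy for the modular matrix inversion; it illustrates a generic Rabin-style IDA, not the specific constructions of the thesis.

```python
# Minimal sketch of a Rabin-style information dispersal algorithm over
# GF(257): an (n, k) scheme where any k of n shares reconstruct the data.
from sympy import Matrix

P, N, K = 257, 5, 3  # prime field, shares produced, shares needed

def disperse(data: bytes):
    data = list(data) + [0] * (-len(data) % K)          # pad to a multiple of K
    blocks = [data[i:i + K] for i in range(0, len(data), K)]
    V = [[pow(x, j, P) for j in range(K)] for x in range(1, N + 1)]  # Vandermonde
    # share i holds one field element per block: dot(V[i], block) mod P
    return [[sum(v * b for v, b in zip(V[i], blk)) % P for blk in blocks]
            for i in range(N)]

def reconstruct(share_ids, shares):
    V = Matrix([[pow(x + 1, j, P) for j in range(K)] for x in share_ids])
    Vinv = V.inv_mod(P)                 # invertible: distinct evaluation nodes
    out = []
    for blk in zip(*shares):            # one field element from each share
        out.extend(int(v) % P for v in Vinv * Matrix(blk))
    return bytes(out)

msg = b"trustworthy"
shares = disperse(msg)
print(reconstruct([0, 2, 4], [shares[0], shares[2], shares[4]])[:len(msg)])
```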
Teachers of Advertising Media Courses Describe Techniques, Show Computer Applications.
Lancaster, Kent M.; Martin, Thomas C.
1989-01-01
Reports on a survey of university advertising media teachers regarding textbooks and instructional aids used, teaching techniques, computer applications, student placement, instructor background, and faculty publishing. (SR)
Cloud Computing Techniques for Space Mission Design
Arrieta, Juan; Senent, Juan
2014-01-01
The overarching objective of space mission design is to tackle complex problems, producing better results faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.
Bringing Advanced Computational Techniques to Energy Research
Energy Technology Data Exchange (ETDEWEB)
Mitchell, Julie C
2012-11-17
Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.
Papior, Nick Rübner; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-01-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA, based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne≥1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable m...
Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.
2014-04-01
Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic consequences of disasters and recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and, furthermore, avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in an existing period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of a given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. The study showed that output from S1 is closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of government-led reconstruction activity, a level that should take another four years to achieve under S2.
Nezarat, Amin; Dastghaibifard, G H
2015-01-01
One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profit; on the other hand, users expect the best resources at their disposal given their budget and time constraints. Most previous work has used heuristic and evolutionary approaches to solve this problem. Nevertheless, since this environment is economic in nature, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bids for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, with the fewest service-level agreement violations and the most utility to the provider.
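The repeated bid-update loop the abstract describes can be illustrated with a textbook proportional-share auction, in which best-response iteration converges to a Nash equilibrium; the utility function below is a standard stand-in, not the one proposed in the paper, and all numbers are illustrative.

```python
# Best-response dynamics for a proportional-share resource auction (sketch).
import numpy as np

v = np.array([10.0, 6.0, 4.0])      # users' private valuations (illustrative)
b = np.ones_like(v)                 # initial bids

for _ in range(200):                # repeated rounds of the game
    for i in range(len(b)):
        B = b.sum() - b[i]          # total bid of the other players
        # best response for utility u_i = v_i * b_i/(b_i + B) - b_i
        b[i] = max(np.sqrt(v[i] * B) - B, 0.0)

x = b / b.sum()                     # equilibrium resource shares
print(np.round(b, 3), np.round(x, 3))
```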
The mineral sector and economic development in Ghana: A computable general equilibrium analysis
Addy, Samuel N.
A computable general equilibrium (CGE) model is formulated for conducting mineral policy analysis in the context of national economic development for Ghana. The model, called GHANAMIN, places strong emphasis on production, trade, and investment. It can be used to examine both micro and macro economic impacts of policies associated with mineral investment, taxation, and terms of trade changes, as well as mineral sector performance impacts due to technological change or the discovery of new deposits. Its economywide structure enables the study of broader development policy with a focus on individual or multiple sectors, simultaneously. After going through a period of contraction for about two decades, mining in Ghana has rebounded significantly and is currently the main foreign exchange earner. Gold alone contributed 44.7 percent of 1994 total export earnings. GHANAMIN is used to investigate the economywide impacts of mineral tax policies, world market mineral prices changes, mining investment, and increased mineral exports. It is also used for identifying key sectors for economic development. Various simulations were undertaken with the following results: Recently implemented mineral tax policies are welfare increasing, but have an accompanying decrease in the output of other export sectors. World mineral price rises stimulate an increase in real GDP; however, this increase is less than real GDP decreases associated with price declines. Investment in the non-gold mining sector increases real GDP more than investment in gold mining, because of the former's stronger linkages to the rest of the economy. Increased mineral exports are very beneficial to the overall economy. Foreign direct investment (FDI) in mining increases welfare more so than domestic capital, which is very limited. Mining investment and the increased mineral exports since 1986 have contributed significantly to the country's economic recovery, with gold mining accounting for 95 percent of the
International Nuclear Information System (INIS)
Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru
2014-01-01
Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to those of the traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model's behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to give a more realistic representation of energy consumption. The proposed model is compared with the aggregated traditional model under the same assumptions, in scenarios with and without mitigation, roughly consistent with the two-degree climate mitigation target. Although the results for aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and detailed technology models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) than in the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and the factors related to climate mitigation (energy intensity and carbon factor reduction) also vary among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
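The Logit share rule mentioned in the abstract can be written compactly; the dispersion parameter and costs below are illustrative, not the model's calibrated values.

```python
# Logit technology shares: cheaper end-use technologies take a larger share
# of the energy service demand (minimal sketch, illustrative numbers).
import numpy as np

def logit_shares(costs, lam=5.0):
    """Share_i = exp(-lam*c_i) / sum_j exp(-lam*c_j)."""
    w = np.exp(-lam * (costs - np.min(costs)))  # shift for numerical safety
    return w / w.sum()

costs = np.array([1.0, 1.2, 1.5])   # unit costs of competing technologies
print(logit_shares(costs))          # e.g. [0.69, 0.25, 0.06]
```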
Kang, Yoonyoung
While vast resources have been invested in the development of computational models for cost-benefit analysis for the "whole world" or for the largest economies (e.g. United States, Japan, Germany), the remainder have been thrown together into one model for the "rest of the world." This study presents a multi-sectoral, dynamic, computable general equilibrium (CGE) model for Korea. This research evaluates the impacts of controlling CO2 emissions using a multisectoral CGE model. This CGE economy-energy-environment model analyzes and quantifies the interactions between CO2, energy and the economy. This study examines the interactions and influences of key environmental policy components: applied economic instruments, emission targets, and environmental tax revenue recycling methods. The most cost-effective economic instrument is the carbon tax. The economic effects discussed include impacts on the main macroeconomic variables (in particular, economic growth), sectoral production, and the energy market. This study considers several aspects of various CO2 control policies, such as the basic variables in the economy: capital stock and net foreign debt. The results indicate emissions might be stabilized in Korea at the expense of economic growth and with dramatic sectoral allocation effects. Carbon dioxide emissions stabilization could be achieved at the cost of roughly a 600 trillion won loss over a 20-year period (1990-2010). The average annual real GDP growth would decrease by 2.10% over the simulation period, compared to the 5.87% increase in the Business-as-Usual case. This model satisfies an immediate need for a policy simulation model for Korea and provides the basic framework for similar economies. It is critical to keep the central economic question at the forefront of any discussion regarding environmental protection: How much will reform cost, and what does the economy stand to gain and lose? Without this model, policy makers might resort to hesitation or even blind speculation. With
Soft Computing Techniques in Vision Science
Yang, Yeon-Mo
2012-01-01
This Special Edited Volume is a unique approach towards computational solutions for the emerging field of study called Vision Science. Optics, Ophthalmology, and Optical Science have pursued a long odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices with the metaphor of Nano Science and Technology. Still, these systems fall short on the computational side of achieving the capability of the human vision system. In this edited volume, much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...
Computer Architecture Techniques for Power-Efficiency
Kaxiras, Stefanos
2008-01-01
In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these
Exploiting Analytics Techniques in CMS Computing Monitoring
Energy Technology Data Exchange (ETDEWEB)
Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab
2017-11-22
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts over all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive model of CMS operations that allows detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
Computational optimization techniques applied to microgrids planning
DEFF Research Database (Denmark)
Gamarra, Carlos; Guerrero, Josep M.
2015-01-01
Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems), their planning process must address economic feasibility as a guarantee of long-term stability. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new
Spalding, D. B.; Launder, B. E.; Morse, A. P.; Maples, G.
1974-01-01
A guide to a computer program, written in FORTRAN 4, for predicting the flow properties of turbulent mixing with combustion of a circular jet of hydrogen into a co-flowing stream of air is presented. The program, which is based upon the Imperial College group's PASSA series, solves differential equations for diffusion and dissipation of turbulent kinetic energy and also of the R.M.S. fluctuation of hydrogen concentration. The effective turbulent viscosity for use in the shear stress equation is computed. Chemical equilibrium is assumed throughout the flow.
A simple and sensitive separation technique of 99Mo and 99mTc from their equilibrium mixture
International Nuclear Information System (INIS)
Swadesh Mandal; Ajoy Mandal
2014-01-01
The present work describes a simple and inexpensive method for separating 99Mo from the equilibrium mixture. The liquid-liquid extraction technique has been employed to separate 99Mo and 99mTc using triisooctylamine (TIOA). The 99Mo and 99mTc were quantitatively separated in 2 M TIOA with triple-distilled water; 99mTc was back-extracted from the TIOA organic phase to the aqueous phase by 0.1 M DTPA. The species information, or indirect speciation, of molybdenum was also established from the extraction profile of the molybdenum. (author)
Transport modeling and advanced computer techniques
International Nuclear Information System (INIS)
Wiley, J.C.; Ross, D.W.; Miner, W.H. Jr.
1988-11-01
A workshop was held at the University of Texas in June 1988 to consider the current state of transport codes and whether improved user interfaces would make the codes more usable and accessible to the fusion community. Also considered was the possibility that a software standard could be devised to ease the exchange of routines between groups. It was noted that two of the major obstacles to exchanging routines now are the variety of geometrical representations and choices of units. While the workshop formulated no standards, it was generally agreed that good software engineering would aid in the exchange of routines, and that a continued exchange of ideas between groups would be worthwhile. It seems that before we begin to discuss software standards we should review the current state of computer technology --- both hardware and software --- to see what influence recent advances might have on our software goals. This is done in this paper
International Nuclear Information System (INIS)
Lapillonne, X.; Brunner, S.; Dannert, T.; Jolliet, S.; Marinoni, A.; Villard, L.; Goerler, T.; Jenko, F.; Merz, F.
2009-01-01
In the context of gyrokinetic flux-tube simulations of microturbulence in magnetized toroidal plasmas, different treatments of the magnetic equilibrium are examined. Considering the Cyclone DIII-D base case parameter set [Dimits et al., Phys. Plasmas 7, 969 (2000)], significant differences in the linear growth rates, the linear and nonlinear critical temperature gradients, and the nonlinear ion heat diffusivities are observed between results obtained using either an s-α or a magnetohydrodynamic (MHD) equilibrium. Similar disagreements have been reported previously [Redd et al., Phys. Plasmas 6, 1162 (1999)]. In this paper it is shown that these differences result primarily from the approximation made in the standard implementation of the s-α model, in which the straight field line angle is identified with the poloidal angle, leading to inconsistencies of order ε (ε=a/R is the inverse aspect ratio, a the minor radius and R the major radius). An equilibrium model with concentric, circular flux surfaces and a correct treatment of the straight field line angle gives results very close to those using a finite-ε, low-β MHD equilibrium. Such detailed investigation of the equilibrium implementation is of particular interest when comparing flux-tube and global codes. It is indeed shown here that previously reported agreements between local and global simulations in fact result from the order-ε inconsistencies in the s-α model coincidentally compensating finite-ρ* effects in the global calculations, where ρ*=ρs/a, with ρs the ion sound Larmor radius. True convergence between local and global simulations is finally obtained by correct treatment of the geometry in both cases, and considering the appropriate ρ*→0 limit in the latter case.
Evolutionary Computation Techniques for Predicting Atmospheric Corrosion
Directory of Open Access Journals (Sweden)
Amine Marref
2013-01-01
Corrosion occurs in many engineering structures such as bridges, pipelines, and refineries and leads to the destruction of materials in a gradual manner, thus shortening their lifespan. It is therefore crucial to assess the structural integrity of engineering structures which are approaching or exceeding their designed lifespan in order to ensure their correct functioning, for example, load-carrying ability and safety. An understanding of corrosion and an ability to predict the corrosion rate of a material in a particular environment play a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions, while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions with good accuracy.
A computer graphics display technique for the examination of aircraft design data
Talcott, N. A., Jr.
1981-01-01
An interactive computer graphics technique has been developed for quickly sorting and interpreting large amounts of aerodynamic data. It utilizes a graphic representation rather than numbers. The geometry package represents the vehicle as a set of panels. These panels are ordered in groups of ascending values (e.g., equilibrium temperatures). The groups are then displayed successively on a CRT, building up to the complete vehicle. A zoom feature allows displaying only the panels with values between certain limits. The addition of color allows a one-time display, thus eliminating the need for a display build-up.
Probability, statistics, and associated computing techniques
International Nuclear Information System (INIS)
James, F.
1983-01-01
This chapter attempts to explore the extent to which it is possible for the experimental physicist to find optimal statistical techniques to provide a unique and unambiguous quantitative measure of the significance of raw data. Discusses statistics as the inverse of probability; normal theory of parameter estimation; normal theory (Gaussian measurements); the universality of the Gaussian distribution; real-life resolution functions; combination and propagation of uncertainties; the sum or difference of 2 variables; local theory, or the propagation of small errors; error on the ratio of 2 discrete variables; the propagation of large errors; confidence intervals; classical theory; Bayesian theory; use of the likelihood function; the second derivative of the log-likelihood function; multiparameter confidence intervals; the method of MINOS; least squares; the Gauss-Markov theorem; maximum likelihood for uniform error distribution; the Chebyshev fit; the parameter uncertainties; the efficiency of the Chebyshev estimator; error symmetrization; robustness vs. efficiency; testing of hypotheses (e.g., the Neyman-Pearson test); goodness-of-fit; distribution-free tests; comparing two one-dimensional distributions; comparing multidimensional distributions; and permutation tests for comparing two point sets
Impact of a carbon tax on the Chilean economy: A computable general equilibrium analysis
International Nuclear Information System (INIS)
García Benavente, José Miguel
2016-01-01
In 2009, the government of Chile announced its official commitment to reduce national greenhouse gas emissions by 20% below a business-as-usual projection by 2020. Because a national carbon tax is an effective way to reduce emissions, the goal of this article is to quantify the value of a carbon tax that will allow the achievement of the emission reduction target and to assess its impact on the economy. The approach used in this work is to compare the economy before and after the implementation of the carbon tax by creating a static computable general equilibrium model of the Chilean economy. The model developed here disaggregates the economy into 23 industries and 23 commodities, and it uses four consumer agents (households, government, investment, and the rest of the world). By setting specific production and consumption functions, the model can assess the variation in commodity prices, industrial production, and agent consumption, allowing a cross-sectoral analysis of the impact of the carbon tax. The benchmark of the economy, upon which the analysis is based, came from a social accounting matrix specially constructed for this model, based on the year 2010. The carbon tax was modeled as an ad valorem tax under two scenarios: a tax on emissions from fossil fuels burned only by producers, and a tax on emissions from fossil fuels burned by producers and households. The abatement cost curve shows that it is more cost-effective to tax only producers rather than both producers and households: when compared to the emission level observed in 2010, a 20% emission reduction causes a loss in GDP of 2% and 2.3%, respectively. Under the two scenarios, the tax value that could lead to that emission reduction is around 26 US dollars per ton of CO2-equivalent. The most affected productive sectors are oil refining, transport, and electricity, which contract between 7% and 9%. Analyzing the electricity
Computational techniques used in the development of coprocessing flowsheets
International Nuclear Information System (INIS)
Groenier, W.S.; Mitchell, A.D.; Jubin, R.T.
1979-01-01
The computer program SEPHIS, developed to aid in determining optimum solvent extraction conditions for the reprocessing of nuclear power reactor fuels by the Purex method, is described. The program employs a combination of approximate mathematical equilibrium expressions and a transient, stagewise-process calculational method to allow stage and product-stream concentrations to be predicted with accuracy and reliability. The possible applications to inventory control for nuclear material safeguards, nuclear criticality analysis, and process analysis and control are of special interest. The method is also applicable to other countercurrent liquid-liquid solvent extraction processes with known chemical kinetics that may involve multiple solutes and are performed in conventional contacting equipment
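A toy version of the stagewise calculation underlying SEPHIS-like codes is sketched below; it assumes a constant distribution ratio D, whereas SEPHIS itself uses Purex-specific equilibrium correlations, and all flows and values are illustrative.

```python
# Steady-state countercurrent stagewise extraction (minimal sketch).
# Aqueous feed enters stage 0; fresh solvent enters stage N-1; the organic
# phase leaving each stage is in equilibrium with the aqueous: y_i = D*x_i.
import numpy as np

def countercurrent(N, D, F_aq, F_org, x_feed):
    # Stage i balance: F_aq*x_{i-1} + F_org*D*x_{i+1} = (F_aq + F_org*D)*x_i
    A = np.zeros((N, N)); b = np.zeros(N)
    for i in range(N):
        A[i, i] = F_aq + F_org * D
        if i > 0:     A[i, i - 1] = -F_aq
        if i < N - 1: A[i, i + 1] = -F_org * D
    b[0] = F_aq * x_feed
    return np.linalg.solve(A, b)       # aqueous concentrations per stage

print(countercurrent(N=8, D=2.0, F_aq=1.0, F_org=1.0, x_feed=1.0))
```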
African Journals Online (AJOL)
context of antimicrobial therapy in malnutrition. Dialysis has in the past presented technical problems, being complicated and time-consuming. A new dialysis system based on the equilibrium technique has now become available, and it is the principles and practical application of this apparatus (Kontron Diapack; Kontron.
THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS
Mary Violeta Bar
2014-01-01
Computational intelligence techniques are used for problems which cannot be solved by traditional techniques, when there is insufficient data to develop a model of the problem, or when the data contain errors. Computational intelligence, as Bezdek (1992) called it, aims at the modeling of biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...
Numerical Computational Technique for Scattering from Underwater Objects
T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar
2013-01-01
This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scattering has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...
Analyzing the Effects of Technological Change: A Computable General Equilibrium Approach
1988-09-01
present important simplifying assumptions about the nature of consumer preferences and production possibility sets. If a general equilibrium model... important assumptions are in such areas as consumer preferences, the actions of the government, and the financial structure of the model. Each of these is... back in the future. 4.3.2 Consumer demand: Consumer preferences are a second important modeling assumption affecting the results of the study. The PILOT
Taheripour, Farzad; Hertel, Thomas W.; Tyner, Wallace E.
2009-01-01
In this paper, we offer a general equilibrium analysis of the impacts of US and EU biofuel mandates on the global livestock sector. Our simulation boosts biofuel production in the US and EU from 2006 levels to mandated 2015 levels. We show that the mandates will encourage crop production in both biofuel and non-biofuel producing regions, while reducing livestock and livestock-product output in most regions of the world. The non-ruminant industry curtails its production more than other livestock indu...
Addition to the Lewis Chemical Equilibrium Program to allow computation from coal composition data
Sevigny, R.
1980-01-01
Changes made to the Coal Gasification Project are reported. The program was developed for equilibrium combustion calculations in rocket engines; it can be applied directly to the entrained-flow coal gasification process. The particular problem addressed is the reduction of the coal data into a form suitable for the program, since the manual process is involved and error-prone. A similar problem, relating the normal output of the program to parameters meaningful to the coal gasification process, is also addressed.
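The data-reduction step the abstract refers to amounts to converting an ultimate analysis in mass percent into relative element mole numbers, the form an equilibrium code expects; a minimal sketch with illustrative numbers follows.

```python
# Convert a coal ultimate analysis (mass %) to element moles per C atom
# (minimal sketch; the analysis values below are illustrative).
atomic_wt = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "S": 32.06}
ultimate = {"C": 72.0, "H": 5.0, "O": 10.0, "N": 1.5, "S": 1.0}  # mass %

moles = {el: pct / atomic_wt[el] for el, pct in ultimate.items()}
norm = moles["C"]
print({el: round(n / norm, 3) for el, n in moles.items()})  # atoms per C
```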
Collapse and equilibrium of rotating, adiabatic clouds
International Nuclear Information System (INIS)
Boss, A.P.
1980-01-01
A numerical hydrodynamics computer code has been used to follow the collapse and establishment of equilibrium of adiabatic gas clouds restricted to axial symmetry. The clouds are initially uniform in density and rotation, with adiabatic exponents γ=5/3 and 7/5. The numerical technique allows, for the first time, a direct comparison to be made between the dynamic collapse and approach to equilibrium of unconstrained clouds on the one hand, and the results for incompressible, uniformly rotating equilibrium clouds, and the equilibrium structures of differentially rotating polytropes, on the other hand
Future trends in power plant process computer techniques
International Nuclear Information System (INIS)
Dettloff, K.
1975-01-01
The development of new concepts of the process computer technique has advanced in great steps. The steps are in the three sections: hardware, software, application concept. New computers with a new periphery such as, e.g., colour layer equipment, have been developed in hardware. In software, a decisive step in the sector 'automation software' has been made. Through these components, a step forwards has also been made in the question of incorporating the process computer in the structure of the whole power plant control technique. (orig./LH)
Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries
Directory of Open Access Journals (Sweden)
Cameron Adler
2018-04-01
Cross-sectional imaging has become a critical aspect of the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging modality of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve the contrast bolus, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that diagnose the injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, future directions of computed tomography, including a few techniques in the process of development, are also discussed.
A comparative analysis of soft computing techniques for gene prediction.
Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand
2013-07-01
The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important; this is currently the focus of many research efforts. Besides its scientific interest in the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow
Energy Technology Data Exchange (ETDEWEB)
Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)
1996-12-31
We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.
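A stripped-down matrix-free Newton-Krylov loop of the kind described above can be written in a few lines; the residual below is a toy 1D reaction-diffusion stand-in for the plasma edge equations, and the ILU preconditioning, damping, and multigrid enhancements are omitted.

```python
# Matrix-free Newton-Krylov (sketch): Jacobian-vector products by finite
# differences, GMRES as the inner Krylov solver, no preconditioning.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):  # toy residual: u'' + u^2 - 1 = 0 with zero Dirichlet boundaries
    n = len(u); h = 1.0 / (n + 1)
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    lap[0] = u[1] - 2 * u[0]; lap[-1] = u[-2] - 2 * u[-1]
    return lap / h**2 + u**2 - 1.0

def jfnk(u, tol=1e-8, eps=1e-7):
    for _ in range(50):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # J v is approximated by (F(u + eps*v) - F(u)) / eps
        Jv = LinearOperator((len(u),) * 2,
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r)          # inexact Newton step
        u = u + du
    return u

print(jfnk(np.zeros(31))[:5])
```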
Dynamic circular buffering: a technique for equilibrium gated blood pool imaging.
Vaquero, J J; Rahms, H; Green, M V; Del Pozo, F
1996-03-01
We have devised a software technique called "dynamic circular buffering" (DCB) with which we create a gated blood pool image sequence of the heart in real time using the best features of LIST and FRAME mode methods of acquisition/processing. The routine is based on the concept of independent "agents" acting on the timing and position data continuously written into the DCB. This approach allows efficient asynchronous operation on PC-type machines and enhanced capability on systems capable of true multiprocessing and multithreading.
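A minimal sketch of the ring-buffer idea follows, with independent consumers ("agents") reading behind a monotonic write pointer; the names and structure are illustrative, not the authors' PC implementation.

```python
# Dynamic circular buffer (sketch): continuous event writes, agents that
# read asynchronously since their last cursor position.
from collections import namedtuple

Event = namedtuple("Event", "t x y")  # timestamp plus detector position

class DynamicCircularBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0                  # next write slot (monotonic counter)

    def write(self, event):
        self.buf[self.head % self.size] = event
        self.head += 1

    def read_since(self, cursor):
        """Return (events, new_cursor). Old events may already have been
        overwritten, so the cursor is clamped to the oldest live slot."""
        start = max(cursor, self.head - self.size)
        events = [self.buf[i % self.size] for i in range(start, self.head)]
        return events, self.head
```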
[Cardiac computed tomography: new applications of an evolving technique].
Martín, María; Corros, Cecilia; Calvo, Juan; Mesa, Alicia; García-Campos, Ana; Rodríguez, María Luisa; Barreiro, Manuel; Rozado, José; Colunga, Santiago; de la Hera, Jesús M; Morís, César; Luyando, Luis H
2015-01-01
During recent years we have witnessed an increasing development of imaging techniques applied in cardiology. Among them, cardiac computed tomography is an emerging and evolving technique. With the current possibility of very low radiation studies, the applications have expanded and now go beyond coronary angiography. In the present article we review the technical developments of cardiac computed tomography and its new applications. Copyright © 2014 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.
Modeling with data tools and techniques for scientific computing
Klemens, Ben
2009-01-01
Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods
Cloud computing and digital media fundamentals, techniques, and applications
Li, Kuan-Ching; Shih, Timothy K
2014-01-01
Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more. The book covers resource optimization for multimedia clo
Computed tomography of the llama head: technique and normal anatomy
International Nuclear Information System (INIS)
Hathcock, J.T.; Pugh, D.G.; Cartee, R.E.; Hammond, L.
1996-01-01
Computed tomography was performed on the head of 6 normal adult llamas. The animals were under general anesthesia and positioned in dorsal recumbency on the scanning table. The area scanned was from the external occipital protuberance to the rostral portion of the nasal passage, and the images are presented in both a bone window and a soft tissue window to allow evaluation and identification of the anatomy of the head. Computed tomography of the llama head can be accomplished by most computed tomography scanners utilizing a technique similar to that used in small animals with minor modification of the scanning table
International Nuclear Information System (INIS)
Bunshah, R.F.
1976-01-01
A number of different techniques spanning several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4.0 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments, and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials
Energy Technology Data Exchange (ETDEWEB)
Worms, Isabelle A.M. [CABE - Analytical and Biophysical Environmental Chemistry, University of Geneva, 30 quai Ernest Ansermet 1211 Geneva 4 (Switzerland); Wilkinson, Kevin J. [Department of Chemistry, University of Montreal C.P. 6128, succursale Centre-ville Montreal, H3C 3J7 (Canada)], E-mail: KJ.Wilkinson@umontreal.ca
2008-05-26
In natural waters, the determination of free metal concentrations is a key parameter for studying bioavailability. Unfortunately, few analytical tools are available for determining Ni speciation at the low concentrations found in natural waters. In this paper, an ion exchange technique (IET) that employs a Dowex resin is evaluated for its applicability to measure [Ni{sup 2+}] in freshwaters. The presence of major cations (e.g. Na, Ca and Mg) reduced both the times that were required for equilibration and the partition coefficient to the resin ({lambda}{sup '}{sub Ni}). IET measurements of [Ni{sup 2+}] in the presence of known ligands (citrate, diglycolate, sulfoxine, oxine and diethyldithiocarbamate) were verified by thermodynamic speciation models (MINEQL{sup +} and VisualMINTEQ). Results indicated that the presence of hydrophobic complexes (e.g. Ni(DDC){sub 2}{sup 0}) led to an overestimation of the Ni{sup 2+} fraction. On the other hand, [Ni{sup 2+}] measurements made in the presence of amphiphilic complexes formed with humic substances (standard aquatic humic acid (SRHA) and standard aquatic fulvic acid (SRFA)) were well correlated with free ion concentrations calculated using a NICA-DONNAN model. An analytical method is also presented here to reduce the complexity of the calibration (due to the presence of many other cations) for the use of the Dowex equilibrium ion exchange technique in natural waters.
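The IET measurement reduces to a simple relation: the free-ion concentration follows from the amount of Ni recovered from the resin and a calibrated partition coefficient. The numbers below are illustrative, not the paper's calibration.

```python
# [Ni2+] from resin-bound Ni via the IET partition coefficient (sketch).
ni_resin = 2.0e-9   # mol Ni eluted from the resin (illustrative)
m_resin  = 0.010    # g of Dowex resin
lam      = 4.0e3    # partition coefficient lambda'_Ni, L/g (illustrative)

free_ni = (ni_resin / m_resin) / lam   # (mol/g) / (L/g) = mol/L
print(f"[Ni2+] = {free_ni:.2e} M")
```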
GEM-E3: A computable general equilibrium model applied for Switzerland
Energy Technology Data Exchange (ETDEWEB)
Bahn, O. [Paul Scherrer Inst., CH-5232 Villigen PSI (Switzerland); Frei, C. [Ecole Polytechnique Federale de Lausanne (EPFL) and Paul Scherrer Inst. (Switzerland)
2000-01-01
The objectives of the European Research Project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of GEM-E3's modules (economic module and environmental module). For the second objective, strategies to reduce CO{sub 2} emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how of the PSI GaBE Project. Using GEM-E3 Switzerland, two strategies to reduce the Swiss CO{sub 2} emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO{sub 2} emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve the reduction target, and use the tax
Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.
Montalvo-Acosta, Joel José; Cecchini, Marco
2016-12-01
The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
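One fundamental relation underlying all the approaches the abstract compares is the link between the binding constant and the standard binding free energy; a small numeric illustration follows (all values are arbitrary, chosen only to show the arithmetic).

```python
# Standard binding free energy from the equilibrium binding constant:
# dG0 = -RT ln(Kb * c0), with c0 the 1 M standard concentration.
import math

R  = 8.314462618e-3   # kJ/(mol K)
T  = 298.15           # K
Kb = 1.0e9            # binding constant in 1/M (illustrative)
c0 = 1.0              # standard concentration, 1 M

dG0 = -R * T * math.log(Kb * c0)
print(f"dG0 = {dG0:.1f} kJ/mol")   # about -51 kJ/mol for this Kb
```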
APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES
DEFF Research Database (Denmark)
Sonnenwald, Diane H.
1988-01-01
A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.
Computer Tomography: A Novel Diagnostic Technique used in Horses
African Journals Online (AJOL)
In Veterinary Medicine, Computer Tomography (CT scan) is used more often in dogs and cats than in large animals due to their small size and ease of manipulation. This paper, however, illustrates the use of the technique in horses. CT scan was used in the diagnosis of two conditions of the head and limbs, namely alveolar ...
A survey of energy saving techniques for mobile computers
Smit, Gerardus Johannes Maria; Havinga, Paul J.M.
1997-01-01
Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these applications are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for
Fusion of neural computing and PLS techniques for load estimation
Energy Technology Data Exchange (ETDEWEB)
Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)
2007-07-01
A method to predict the electric load of a power system in real time is presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require the input of questionable distributional assumptions, and neural computing is weak, particularly in determining topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model is presented along with its application in a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction. It can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results compared to the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. In order to demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays, and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.
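A minimal hybrid in the spirit of the abstract (PLS to extract latent structure, a small neural network for the nonlinear map) can be sketched with standard tools; the synthetic data below stand in for the Puget Sound set, which is not reproduced here, and none of this is the authors' exact architecture.

```python
# PLS latent-structure extraction feeding a small neural net (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # weather/calendar features (synthetic)
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

pls = PLSRegression(n_components=3).fit(X, y)
Z = pls.transform(X)                   # latent load drivers
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(Z, y)
print("train R^2:", round(net.score(Z, y), 3))
```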
Visualization of Minkowski operations by computer graphics techniques
Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P
1994-01-01
We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are
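For binary images, Minkowski addition and subtraction reduce to morphological dilation and erosion (identical for a symmetric structuring element), which makes the operations easy to experiment with before rendering; the 2D sketch below stands in for the paper's 3D ray-traced objects.

```python
# Minkowski addition/subtraction of binary shapes via morphology (sketch).
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

obj = np.zeros((9, 9), bool); obj[3:6, 3:6] = True   # a square object
elem = np.ones((3, 3), bool)                          # symmetric element

msum  = binary_dilation(obj, structure=elem)  # Minkowski addition
mdiff = binary_erosion(obj, structure=elem)   # Minkowski subtraction
print(msum.astype(int)); print(mdiff.astype(int))
```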
A Computer Aided System for Correlation and Prediction of Phase Equilibrium Data
DEFF Research Database (Denmark)
Nielsen, T.L.; Gani, Rafiqul
2001-01-01
based on mathematical programming. This paper describes the development of a computer aided system for the systematic derivation of appropriate property models to be used in the service role for a specified problem. As a first step, a library of well-known property models has been developed...
A Computer-Aided Exercise for Checking Novices' Understanding of Market Equilibrium Changes.
Katz, Arnold
1999-01-01
Describes a computer-aided supplement to the introductory microeconomics course that enhances students' understanding with simulation-based tools for reviewing what they have learned from lectures and conventional textbooks about comparing market equilibria. Includes a discussion of students' learning progressions and retention after using the…
Gordon, S.; Mcbride, B.; Zeleznik, F. J.
1984-01-01
An addition to the computer program of NASA SP-273 is given that permits transport property calculations for the gaseous phase. Approximate mixture formulas are used to obtain viscosity and frozen thermal conductivity. Reaction thermal conductivity is obtained by the same method as in NASA TN D-7056. Transport properties for 154 gaseous species were selected for use with the program.
Directory of Open Access Journals (Sweden)
Yongxiu He
2014-04-01
In Beijing, China, the rational consumption of energy is affected by an insufficient linkage mechanism in the energy pricing system, unreasonable price ratios and other issues. Combining the characteristics of Beijing's energy market, this paper puts forward maximization of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption, and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.
Calzadilla, Alvaro; Rehdanz, Katrin; Tol, Richard S. J.
2010-04-01
Agriculture is the largest consumer of freshwater resources - around 70 percent of all freshwater withdrawals are used for food production. These agricultural products are traded internationally. A full understanding of water use is, therefore, impossible without understanding the international market for food and related products, such as textiles. Based on the global general equilibrium model GTAP-W, we offer a method for investigating the role of green (rain) and blue (irrigation) water resources in agriculture within the context of international trade. We use future projections of allowable water withdrawals for surface water and groundwater to define two alternative water management scenarios. The first scenario explores a deterioration of current trends and policies in the water sector (water crisis scenario). The second scenario assumes an improvement in policies and trends in the water sector, eliminating groundwater overdraft world-wide and increasing water allocation for the environment (sustainable water use scenario). In both scenarios, welfare gains or losses are not only associated with changes in agricultural water consumption. Under the water crisis scenario, welfare rises not only for regions where water consumption increases (China, South East Asia and the USA); welfare gains are considerable for Japan and South Korea, Southeast Asia and Western Europe as well. These regions benefit from higher levels of irrigated production and lower food prices. Alternatively, under the sustainable water use scenario, welfare losses affect not only regions where overdrafting is occurring, but other regions as well. These results indicate that, for water use, there is a clear trade-off between economic welfare and environmental sustainability.
Bone tissue engineering scaffolding: computer-aided scaffolding techniques.
Thavornyutikarn, Boonlom; Chantarapanich, Nattapon; Sitthiseripratip, Kriskrai; Thouas, George A; Chen, Qizhi
Tissue engineering is essentially a technique for imitating nature. Natural tissues consist of three components: cells, signalling systems (e.g. growth factors) and the extracellular matrix (ECM). The ECM forms a scaffold for its cells. Hence, an engineered tissue construct is an artificial scaffold populated with living cells and signalling molecules. A huge effort has been invested in bone tissue engineering, in which a highly porous scaffold plays a critical role in guiding bone and vascular tissue growth and regeneration in three dimensions. In the last two decades, numerous scaffolding techniques have been developed to fabricate highly interconnective, porous scaffolds for bone tissue engineering applications. This review provides an update on the progress of foaming technology of biomaterials, with special attention focused on computer-aided manufacturing (CAM) techniques. The article starts with a brief introduction to tissue engineering (Bone tissue engineering and scaffolds) and scaffolding materials (Biomaterials used in bone tissue engineering). After a brief review of conventional scaffolding techniques (Conventional scaffolding techniques), a number of CAM techniques are reviewed in detail. For each technique, the structure and mechanical integrity of the fabricated scaffolds are discussed. Finally, the advantages and disadvantages of these techniques are compared (Comparison of scaffolding techniques) and summarised (Summary).
International Nuclear Information System (INIS)
Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen
2007-01-01
The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across-the-board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration, and the mechanism through which increased government revenues are recycled back to the economy
International Nuclear Information System (INIS)
Reiber, J.H.C.; Lie, S.P.; Simoons, M.L.; Hoek, C.; Gerbrands, J.J.; Wijns, W.; Bakker, W.H.; Kooij, P.P.M.
1983-01-01
A fully automated procedure for the computation of left-ventricular ejection fraction (EF) from cardiac-gated Tc-99m blood-pool (GBP) scintigrams with fixed, dual, and variable ROI methods is described. By comparison with EF data from contrast ventriculography in 68 patients, the dual-ROI method (separate end-diastolic and end-systolic contours) was found to be the method of choice; processing time was 2 min. The success score of the dual-ROI procedure was 92%, as assessed from 100 GBP studies. Overall reproducibility of data acquisition and analysis was determined in 12 patients. The mean value and standard deviation of differences between repeat studies (average time interval 27 min) were 0.8% and 4.3% EF units, respectively (r=0.98). The authors conclude that left-ventricular EF can be computed automatically from GBP scintigrams with minimal operator interaction and good reproducibility; EFs are similar to those from contrast ventriculography.
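For orientation, the count-based EF arithmetic underlying such procedures is standard; this sketch and its numbers are illustrative, not the authors' code:

    def ejection_fraction(ed_counts, es_counts, bg_counts):
        # ed_counts/es_counts: counts in the end-diastolic/end-systolic ROIs;
        # bg_counts: background counts scaled to the ROI area (assumed given).
        ed = ed_counts - bg_counts
        es = es_counts - bg_counts
        return (ed - es) / ed

    print(f"EF = {ejection_fraction(12500, 6500, 2500):.0%}")   # EF = 60%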
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and careful parameter selection is essential when estimating reliability, since the reliability of a system may increase or decrease depending on the parameters chosen. There is therefore a need to identify the factors that most heavily affect system reliability. At present, reusability is widely used across many areas of research; reusability is the basis of Component-Based Systems (CBS). Cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where uncertainty or randomness makes accurate results difficult to obtain. Several possibilities exist for applying soft computing techniques to medical problems: clinical medicine makes significant use of fuzzy-logic and neural-network methodologies, while basic medical science most frequently and preferably uses neural-network and genetic-algorithm approaches. Medical scientists have shown considerable interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with savings of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques, including the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC), and presents the working of these soft computing techniques.
A review of metaheuristic scheduling techniques in cloud computing
Directory of Open Access Journals (Sweden)
Mala Kalra
2015-11-01
Full Text Available Cloud computing has become a buzzword in the area of high performance distributed computing, as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues which needs to be addressed for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources in a way that optimizes one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard, owing to the large solution space, and thus it takes a long time to find an optimal solution. No known algorithm produces the optimal solution within polynomial time for these problems. In a cloud environment, it is preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proven to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: the League Championship Algorithm (LCA) and the BAT algorithm.
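All of the surveyed metaheuristics search over task-to-resource mappings scored by an objective such as makespan; a minimal Python sketch of that objective (task sizes, VM speeds and the candidate mapping are invented for illustration):

    # Makespan of a candidate task-to-VM schedule; all figures invented.
    task_lengths = [400, 250, 600, 100, 300]   # task sizes, e.g. in MI
    vm_speeds = [100, 200]                     # VM capacities, e.g. in MIPS

    def makespan(mapping):
        # mapping[i] = index of the VM that task i is assigned to
        finish = [0.0] * len(vm_speeds)
        for task, vm in enumerate(mapping):
            finish[vm] += task_lengths[task] / vm_speeds[vm]
        return max(finish)

    # A metaheuristic (ACO, GA, PSO, ...) explores mappings like this one:
    print(makespan([0, 1, 1, 0, 1]))           # -> 5.75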
Directory of Open Access Journals (Sweden)
Samson Abramsky
2015-11-01
Full Text Available Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its ability to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law), and whether or not they display Landauer Erasure, has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer erasure, which are provably consistent with the second law of thermodynamics.
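The kT ln 2 bound cited above is easy to evaluate numerically (standard physics, not specific to the paper):

    import math

    k_B = 1.380649e-23            # Boltzmann constant, J/K
    T = 300.0                     # room temperature, K

    # Landauer bound: minimum heat dissipated when erasing one bit.
    E_bit = k_B * T * math.log(2)
    print(f"{E_bit:.3e} J per bit erased")   # ~2.871e-21 J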
Training Software in Artificial-Intelligence Computing Techniques
Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene
2005-01-01
The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial provides step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, the user can interact with a robot and an obstacle course to see how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. In the genetic-algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. In the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer with a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer with less memory, but some functions may not execute properly.
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
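The abstract does not specify the identification algorithm; as a rough, generic illustration of estimating a discrete-time model from simulated time histories, a least-squares ARX fit might look like this (model order and coefficients invented):

    import numpy as np

    # Fit y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1] by least squares,
    # standing in for the unsteady-aerodynamics estimation step.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for k in range(2, 500):
        y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.5*u[k-1]

    A = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    theta, *_ = np.linalg.lstsq(A, y[2:], rcond=None)
    print(theta)    # recovers approximately [1.5, -0.7, 0.5]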
Three-dimensional integrated CAE system applying computer graphic technique
International Nuclear Information System (INIS)
Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.
1991-01-01
A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)
Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E
1951-01-01
A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
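The report's procedure handles general mixtures, but the flavor of such a successive-approximation calculation shows up already in a toy one-reaction case (our example, not the report's): for the ideal-gas dissociation A2 <=> 2A at pressure p with equilibrium constant Kp, the dissociation fraction alpha satisfies Kp = 4*alpha^2*p/(1 - alpha^2), which Newton iteration solves quickly:

    # Toy equilibrium composition: A2 <=> 2A in an ideal gas.
    # Solve Kp = 4*alpha^2*p/(1 - alpha^2) for the dissociation fraction.
    def dissociation_fraction(Kp, p, alpha=0.5, tol=1e-12):
        for _ in range(100):
            f = 4*alpha**2*p/(1 - alpha**2) - Kp
            dfda = 8*alpha*p/(1 - alpha**2)**2    # derivative w.r.t. alpha
            step = f/dfda
            alpha -= step
            if abs(step) < tol:
                break
        return alpha

    print(dissociation_fraction(Kp=1.0, p=1.0))   # ~0.4472 (exact: 1/sqrt(5))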
Levine, J. N.
1971-01-01
A finite difference turbulent boundary layer computer program has been developed. The program is primarily oriented towards the calculation of boundary layer performance losses in rocket engines; however, the solution is general, and has much broader applicability. The effects of transpiration and film cooling as well as the effect of equilibrium chemical reactions (currently restricted to the H2-O2 system) can be calculated. The turbulent transport terms are evaluated using the phenomenological mixing length - eddy viscosity concept. The equations of motion are solved using the Crank-Nicolson implicit finite difference technique. The analysis and computer program have been checked out by solving a series of both laminar and turbulent test cases and comparing the results to data or other solutions. These comparisons have shown that the program is capable of producing very satisfactory results for a wide range of flows. Further refinements to the analysis and program, especially as applied to film cooling solutions, would be aided by the acquisition of a firm data base.
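The Crank-Nicolson scheme named above is standard; a minimal sketch for the 1-D diffusion equation u_t = nu*u_xx (far simpler than the program's boundary-layer equations) illustrates the implicit update:

    import numpy as np

    # Crank-Nicolson for u_t = nu*u_xx on [0,1], with u=0 at both walls.
    nx, nt = 51, 200
    nu, dt = 0.01, 0.01
    dx = 1.0/(nx - 1)
    r = nu*dt/(2*dx**2)

    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi*x)                          # initial profile

    I = np.eye(nx - 2)                           # interior points only
    off = np.eye(nx - 2, k=1) + np.eye(nx - 2, k=-1)
    A = (1 + 2*r)*I - r*off                      # implicit (new time level)
    B = (1 - 2*r)*I + r*off                      # explicit (old time level)

    for _ in range(nt):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

    print(u.max())   # decays toward exp(-nu*pi**2*t) with t = nt*dt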
Computer-Assisted Technique for Surgical Tooth Extraction
Directory of Open Access Journals (Sweden)
Hosamuddin Hamza
2016-01-01
Full Text Available Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankylosis or the presence of a bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC machining of metals (hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.
Experimental data processing techniques by a personal computer
International Nuclear Information System (INIS)
Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.
1989-01-01
A personal computer (16-bit, about 1 MB memory) can be used at low cost for experimental data processing. This report surveys important techniques for A/D and D/A conversion and for the display, storage and transfer of experimental data. Items to be considered in the software are also discussed, and practical software programmed in BASIC and Assembler language is given as examples. We present some techniques to obtain faster processing in BASIC and show that a system combining BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested processing speed with some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, comparable to or better than Assembler even on a personal computer. (author)
International Nuclear Information System (INIS)
Kelleher, W.P.; Steiner, D.
1989-01-01
A personal-computer (PC)-based calculational approach assesses magnetohydrodynamic (MHD) equilibrium and poloidal field (PF) coil arrangement in a highly interactive mode, well suited for tokamak scoping studies. The system developed involves a two-step process: the MHD equilibrium is calculated, and then a PF coil arrangement consistent with the equilibrium is determined in an interactive design environment. In this paper the approach is used to examine four distinctly different toroidal configurations: the STARFIRE reactor, a spherical torus (ST), the Big Dee, and an elongated tokamak. In these applications the PC-based results are benchmarked against those of a mainframe code for STARFIRE, ST, and the Big Dee. The equilibrium and PF coil arrangement calculations obtained with the PC approach agree within a few percent with those obtained with the mainframe code.
International Conference on Soft Computing Techniques and Engineering Application
Li, Xiaolong
2014-01-01
The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.
Vinhal, Jonas O; Nege, Kassem K; Lage, Mateus R; de M Carneiro, José Walkimar; Lima, Claudio F; Cassella, Ricardo J
2017-11-01
This work reports a study of the adsorption of the herbicides diquat and difenzoquat from aqueous medium employing polyurethane foam (PUF) as the adsorbent and sodium dodecylsulfate (SDS) as the counter ion. The adsorption efficiency was shown to be dependent on the concentration of SDS in solution, since the formation of an ion-associate between the cationic herbicides (diquat and difenzoquat) and anionic dodecylsulfate is a fundamental step of the process. A computational study was carried out to identify the possible structure of the ion-associates that are formed in solution. They are probably formed by three units of dodecylsulfate bound to one unit of diquat, and two units of dodecylsulfate bound to one unit of difenzoquat. The results obtained also showed that 95% of both herbicides present in 45 mL of a solution containing 5.5 mg L⁻¹ could be retained by 300 mg of PUF. The experimental data were well adjusted to the Freundlich isotherm (r² ≥ 0.95) and to the pseudo-second-order kinetic equation. Also, the application of the Morris-Weber and Reichenberg equations indicated that an intraparticle diffusion process is active in the control of the adsorption kinetics. Copyright © 2017 Elsevier Inc. All rights reserved.
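As a hedged illustration (invented data, not the paper's measurements), a Freundlich isotherm q = K_F * c^(1/n) is usually fitted linearly in log-log space:

    import numpy as np

    # Invented equilibrium data: liquid-phase conc. c (mg/L) vs uptake q (mg/g).
    c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    q = np.array([1.1, 1.6, 2.3, 3.2, 4.6])

    # Freundlich: q = K_F * c**(1/n)  ->  log q = log K_F + (1/n) log c
    slope, intercept = np.polyfit(np.log10(c), np.log10(q), 1)
    K_F, n = 10**intercept, 1.0/slope
    print(f"K_F = {K_F:.2f}, n = {n:.2f}")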
Development of a computational technique to measure cartilage contact area.
Willing, Ryan; Lapner, Michael; Lalone, Emily A; King, Graham J W; Johnson, James A
2014-03-21
Computational measurement of joint contact distributions offers the benefit of non-invasive measurement of joint contact without the use of interpositional sensors or casting materials. This paper describes a technique for indirectly measuring joint contact based on the overlapping of articular cartilage computer models derived from CT images and positioned using in vitro motion capture data. The accuracy of this technique when using the physiological nonuniform cartilage thickness distribution, or simplified uniform cartilage thickness distributions, is quantified through comparison with direct measurements of contact area made using a casting technique. The efficacy of using indirect contact measurement techniques for measuring the changes in contact area resulting from hemiarthroplasty at the elbow is also quantified. Using the physiological nonuniform cartilage thickness distribution measured contact area reliably (ICC=0.727), but no better than assumed bone-specific uniform cartilage thicknesses (ICC=0.673). When a contact pattern agreement score (s(agree)) was used to assess the accuracy of cartilage contact measurements made using physiological nonuniform or simplified uniform cartilage thickness distributions in terms of size, shape and location, their accuracies were not significantly different (p>0.05). The results of this study demonstrate that cartilage contact can be measured indirectly based on the overlapping of cartilage models. However, the results also suggest that in some situations, inter-bone distance measurement and an assumed cartilage thickness may suffice for predicting joint contact patterns. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jet-images: computer vision inspired techniques for jet tagging
Energy Technology Data Exchange (ETDEWEB)
Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory, Menlo Park, CA 94028 (United States)
2015-02-18
We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
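A hedged sketch of the classification step, using scikit-learn's Fisher/linear discriminant on flattened "images" (synthetic stand-ins here, not the paper's simulated jets):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)

    # Stand-in "jet images": 25x25 pixel intensity grids, flattened to vectors.
    signal = rng.normal(1.0, 1.0, size=(500, 625))      # e.g. boosted W jets
    background = rng.normal(0.0, 1.0, size=(500, 625))  # quark/gluon jets

    X = np.vstack([signal, background])
    y = np.array([1]*500 + [0]*500)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # The learned weight vector plays the role of the Fisher discriminant.
    print(lda.score(X, y))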
Determining flexor-tendon repair techniques via soft computing
Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.
2001-01-01
An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.
Development of computer-aided auto-ranging technique for a computed radiography system
International Nuclear Information System (INIS)
Ishida, M.; Shimura, K.; Nakajima, N.; Kato, H.
1988-01-01
For a computed radiography system, the authors developed a computer-aided auto-ranging technique in which the clinically useful image data are automatically mapped to the available display range. The pre-read image data are inspected to determine the location of collimation. A histogram of the pixels inside the collimation is evaluated for characteristic values such as maxima and minima, and then the optimal density and contrast are derived for the display image. The effect of the auto-ranging technique was investigated at several hospitals in Japan. The average rate of films lost due to undesirable density or contrast was about 0.5%.
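A rough Python sketch of this kind of histogram-based auto-ranging (the percentile cut-offs are invented, not the system's actual rules):

    import numpy as np

    def auto_range(pixels, lo_pct=1.0, hi_pct=99.0, display_max=255):
        # pixels: 1-D array of pre-read values inside the collimation field.
        # Map the clinically useful range onto the available display range.
        lo, hi = np.percentile(pixels, [lo_pct, hi_pct])
        scaled = (pixels.astype(float) - lo) / (hi - lo)
        return np.clip(scaled, 0.0, 1.0) * display_max

    img = np.random.default_rng(2).integers(200, 800, size=10000)
    print(auto_range(img).min(), auto_range(img).max())   # 0.0 255.0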
A computer code to simulate X-ray imaging techniques
International Nuclear Information System (INIS)
Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel
2000-01-01
A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
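The attenuation law at the heart of such a simulator is simple; a sketch for one monochromatic ray through homogeneous slabs (coefficients and path lengths invented):

    import math

    # Beer-Lambert attenuation of one monochromatic ray:
    # I = I0 * exp(-sum(mu_j * L_j)) over the materials crossed.
    segments = [(0.2, 3.0),    # (mu in 1/cm, path length in cm), e.g. light alloy
                (0.5, 1.2)]    # e.g. a denser insert

    I0 = 1.0
    I = I0 * math.exp(-sum(mu * L for mu, L in segments))
    print(f"transmitted fraction = {I:.3f}")   # ~0.301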
Qin, Changbo; Qin, C.; Su, Zhongbo; Bressers, Johannes T.A.; Jia, Y.; Wang, H.
2013-01-01
This paper describes a multi-region computable general equilibrium model for analyzing the effectiveness of measures and policies for mitigating North China’s water scarcity with respect to three different groups of scenarios. The findings suggest that a reduction in groundwater use would negatively
Computational intelligence techniques for biological data mining: An overview
Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari
2014-10-01
Computational techniques have been successfully utilized for highly accurate analysis and modeling of multifaceted, raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective in overcoming the limitations of traditional in-vitro experiments on constantly increasing sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, and analysis of microarray gene expression data. To solve these problems, various classification and clustering techniques using machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision trees and HMM-based algorithms. Major difficulties in applying the above algorithms include the limitations of previous feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running time overheads of the learning algorithms. This research is potentially useful for drug design and for the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.
GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique
Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.
2015-12-01
Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of the different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, with ~6000 km² of wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m, minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by the geometric levelling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m, minimum = -0.040 m).
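Schematically, the RCR composition of the geoid height N can be written as follows (standard notation, not necessarily GRAVTool's own):

    N \;=\; N_{\mathrm{GGM}} \;+\; N_{\Delta g} \;+\; N_{\mathrm{DTM}}

where N_GGM carries the long wavelengths from the global geopotential model (removed and later restored), N_Δg the medium wavelengths obtained by Stokes integration of the residual gravity anomalies (the "compute" step), and N_DTM the short wavelengths recovered from the digital terrain model.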
International Nuclear Information System (INIS)
Prevosto, L.; Mancinelli, B.; Artana, G.; Kelly, H.
2011-01-01
A two-wavelength quantitative Schlieren technique that allows inferring the electron and gas densities of axisymmetric arc plasmas without imposing any assumption regarding statistical equilibrium models is reported. This technique was applied to the study of local thermodynamic equilibrium (LTE) departures within the core of a 30 A high-energy density cutting arc. In order to derive the electron and heavy particle temperatures from the inferred density profiles, a generalized two-temperature Saha equation together with the plasma equation of state and the quasineutrality condition were employed. Factors such as arc fluctuations that influence the accuracy of the measurements and the validity of the assumptions used to derive the plasma species temperature were considered. Significant deviations from chemical equilibrium as well as kinetic equilibrium were found at elevated electron temperatures and gas densities toward the arc core edge. An electron temperature profile nearly constant through the arc core with a value of about 14000-15000 K, well decoupled from the heavy particle temperature of about 1500 K at the arc core edge, was inferred.
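For reference, the single-temperature Saha ionization equation, of which the paper's generalized two-temperature form is an extension, reads (standard result, not the paper's exact expression):

    \frac{n_e\,n_i}{n_0} \;=\; \frac{2 g_i}{g_0}
        \left(\frac{2\pi m_e k T_e}{h^2}\right)^{3/2}
        \exp\!\left(-\frac{E_i}{k T_e}\right)

where n_e, n_i and n_0 are the electron, ion and neutral number densities, g_i and g_0 the ion and neutral statistical weights, T_e the electron temperature, and E_i the ionization energy.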
Experimental and Computational Techniques in Soft Condensed Matter Physics
Olafsen, Jeffrey
2010-09-01
1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed Systems Carl F. Schrek and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.
Phase behavior of multicomponent membranes: Experimental and computational techniques
DEFF Research Database (Denmark)
Bagatolli, Luis; Kumar, P.B. Sunil
2009-01-01
Recent developments in biology seem to indicate that the Fluid Mosaic model of the membrane proposed by Singer and Nicolson, with the lipid bilayer functioning only as a medium to support the protein machinery, may be too simple to be realistic. Many protein functions are now known to depend on the compositio.... The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol, and from the success of several computational models in predicting their behavior.... This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing recent advances in the experimental and computational techniques.
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVM models were developed that accept five input parameters. At the same time, reliable artificial neural networks were developed to perform the same job. The 5-fold cross-validation approach was employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition was employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
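A hedged scikit-learn sketch of ε-SVR with 5-fold cross-validation on synthetic stand-ins for the five watershed inputs (the data, kernel and hyperparameters are invented, not the paper's):

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Synthetic stand-in for five input parameters and the supply target.
    X = rng.standard_normal((200, 5))
    y = X @ np.array([0.5, -0.2, 0.3, 0.1, 0.4]) + 0.1*rng.standard_normal(200)

    model = SVR(kernel="rbf", epsilon=0.1, C=10.0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(scores.mean())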
Brignole, Esteban Alberto
2013-01-01
Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamic relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and
Wan, Yue; Yang, Hongwei; Masui, Toshihiko
2005-01-01
At the present time, ambient air pollution is a serious public health problem in China. Based on the concentration-response relationships provided by international and domestic epidemiologic studies, the authors estimated the mortality and morbidity induced by ambient air pollution in 2000. To address the mechanism of the health impact on the national economy, the authors applied a computable general equilibrium (CGE) model, named AIM/Material China, containing 39 production sectors and 32 commodities. AIM/Material analyzes changes in gross domestic product (GDP), final demand, and production activity originating from health damages. If ambient air quality had met Grade II of China's air quality standard in 2000, the avoidable GDP loss would have been 0.38% of the national total, of which 95% was led by labor loss. Comparatively, medical expenditure had less impact on the national economy, which is explained in terms of the final demand by commodities and the production activities by sectors. The authors conclude that the CGE model is a suitable tool for assessing health impacts from the point of view of the national economy, through the discussion of its applicability.
Directory of Open Access Journals (Sweden)
Tian Wu
2014-11-01
Full Text Available This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross domestic product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios respectively. Furthermore, the growth in China's vehicle stock will pass the inflection point of the Gompertz curve by 2020, but will not reach the saturation point during the period 2014-2050. Of the major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth weakly affects vehicle demand. Projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for each scenario.
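A commonly used form of the GDP-dependent Gompertz vehicle-ownership curve is V(gdp) = gamma*exp(alpha*exp(beta*gdp)), with gamma the saturation level and alpha, beta < 0; the notation and all parameter values below are assumptions for illustration, not the paper's estimates:

    import math

    gamma = 600.0                 # saturation, vehicles per 1000 people
    alpha, beta = -6.0, -0.25     # both negative; beta per 1000 US$ per capita

    def vehicles_per_1000(gdp_pc):
        return gamma * math.exp(alpha * math.exp(beta * gdp_pc))

    for gdp in (2, 10, 30, 60):   # thousand US$ per capita
        print(gdp, round(vehicles_per_1000(gdp)))   # S-shaped growth to ~600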
Energy Technology Data Exchange (ETDEWEB)
Oladosu, Gbadebo A [ORNL]; Rose, Adam [University of Southern California, Los Angeles]; Bumsoo, Lee [University of Illinois]
2013-01-01
The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations range between $37 billion (0.15% of 2006 baseline economic output) and $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.
International Nuclear Information System (INIS)
He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.
2015-01-01
Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • Optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under conditions of increasingly serious environmental pollution, rational energy policy plays an important role in energy conservation and emission reduction. This paper defines energy policies as the compilation of energy prices, taxes and subsidy policies. Moreover, it establishes an optimisation model of China's energy policy based on a dynamic computable general equilibrium model, which maximises the total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and the consumer price index are ±2% and ±5%, and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising the total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of the change rate in the consumer price index is 2.2%, the existing renewable energy fund should be improved.
International Nuclear Information System (INIS)
Chialvo, A.A.; Debenedetti, P.G.
1991-01-01
To date, the calculation of shear viscosity for soft-core fluids via equilibrium molecular dynamics has been done almost exclusively using the Green-Kubo formalism. The alternative mean-squared displacement approach has not been used, except for hard-sphere fluids, in which case the expression proposed by Helfand [Phys. Rev. 119, 1 (1960)] has invariably been selected. When written in the form given by McQuarrie [Statistical Mechanics (Harper & Row, New York, 1976), Chap. 21], however, the mean-squared displacement approach offers significant computational advantages over both its Green-Kubo and Helfand counterparts. In order to achieve comparable statistical significance, the number of experiments needed when using the Green-Kubo or Helfand formalisms is more than an order of magnitude higher than for the McQuarrie expression. For pairwise-additive systems with zero linear momentum, the McQuarrie method yields frame-independent shear viscosities. The hitherto unexplored McQuarrie implementation of the mean-squared displacement approach to shear-viscosity calculation thus appears superior to alternative methods currently in use.
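For orientation, the two routes being compared are, in their standard textbook forms (not the paper's specific working expressions), the Green-Kubo integral and the Helfand mean-squared-displacement slope:

    \eta_{\mathrm{GK}} \;=\; \frac{V}{k_B T}\int_0^\infty
        \langle P_{xy}(0)\,P_{xy}(t)\rangle\,dt,
    \qquad
    \eta_{\mathrm{H}} \;=\; \lim_{t\to\infty}
        \frac{\big\langle [G(t)-G(0)]^2 \big\rangle}{2\,V k_B T\, t},
    \quad G(t)=\sum_i m_i\, x_i(t)\, v_{y,i}(t)

where P_xy is the off-diagonal component of the pressure tensor and G is the Helfand moment.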
International Nuclear Information System (INIS)
Shiotani, Hiroki; Ono, Kiyoshi
2009-01-01
The Global Trade Analysis Project (GTAP) is a widely used computable general equilibrium (CGE) model developed by Purdue University. Although the GTAP-E, an energy-environmental version of the GTAP model, is useful for surveying the energy-economy-environment-trade linkage in economic policy analysis, it does not have a decomposed model of the electricity sector, and its analyses are comparatively static. In this study, a recursive dynamic CGE model with a detailed electricity technology bundle, with nuclear power generation including the fast reactor (FR), was developed based on the GTAP-E to evaluate the long-term socioeconomic effects of FR deployment. Capital stock changes caused by international investments and some dynamic constraints of FR deployment and operation (e.g., load-following capability and plutonium mass balance) were incorporated in the analyses. The long-term socioeconomic effects resulting from the deployment of economically competitive FRs with innovative technologies can be assessed; the cumulative effects of FR deployment on GDP calculated using this model amounted to over 40 trillion yen in Japan and 400 trillion yen worldwide, several times more than the effects calculated using the conventional cost-benefit analysis tool, because of ripple effects and energy substitutions among others. (author)
A Computer Based Moire Technique To Measure Very Small Displacements
Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.
1987-02-01
The accuracy that can be achieved in the measurement of very small displacements with techniques such as moire, holography and speckle is limited by the noise inherent in the optical devices utilized. To reduce the noise-to-signal ratio, the moire method can be utilized. Two systems of carrier fringes are introduced: an initial system before the load is applied, and a final system when the load is applied. The moire pattern of these two systems contains the sought displacement information, and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.
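A synthetic numpy sketch of the idea (gratings, displacement field and filter width are invented): comparing the two digitized carrier patterns yields a low-frequency beat that carries the displacement, while noise common to both exposures cancels.

    import numpy as np

    x = np.linspace(0.0, 1.0, 2000)
    carrier = np.cos(2*np.pi*50*x)                  # reference grating
    disp = 0.002*np.sin(2*np.pi*2*x)                # tiny displacement field
    loaded = np.cos(2*np.pi*50*(x + disp))          # deformed grating

    product = carrier*loaded                        # contains the beat term
    beat = np.convolve(product, np.ones(201)/201, mode="same")  # crude low-pass
    print(beat.min(), beat.max())   # beat ~ 0.5*cos(2*pi*50*disp)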
APPLICATION OF OBJECT ORIENTED PROGRAMMING TECHNIQUES IN FRONT END COMPUTERS
International Nuclear Information System (INIS)
SKELLY, J.F.
1997-01-01
The Front End Computer (FEC) environment imposes special demands on software, beyond real-time performance and robustness. FEC software must manage a diverse inventory of devices with individualistic timing requirements and hardware interfaces. It must implement network services which export device access to the control system at large, interpreting a uniform network communications protocol into the specific control requirements of the individual devices. Object-oriented languages provide programming techniques which neatly address these challenges, and also offer benefits in terms of maintainability and flexibility. Applications are discussed which exhibit the use of inheritance, multiple inheritance and inheritance trees, and polymorphism to address the needs of FEC software.
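A minimal Python sketch of the polymorphic device-access pattern described above; the class and method names are invented, not taken from the FEC software:

    class Device:
        def read(self):
            raise NotImplementedError

    class PowerSupply(Device):
        def read(self):
            return {"current_A": 12.5}       # would talk to real hardware

    class BeamMonitor(Device):
        def read(self):
            return {"intensity": 3.2e11}

    def serve_request(device: Device):
        # Network service code sees only the common interface.
        return device.read()

    for dev in (PowerSupply(), BeamMonitor()):
        print(serve_request(dev))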
Computer vision techniques for the diagnosis of skin cancer
Celebi, M
2014-01-01
The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...
Template matching techniques in computer vision theory and practice
Brunelli, Roberto
2009-01-01
The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
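The core of grey-level template matching is a similarity score swept over the image; a self-contained sketch using the standard normalized cross-correlation (synthetic data, not from the book):

    import numpy as np

    def ncc(patch, template):
        # Normalized cross-correlation between an image patch and a template.
        a = patch - patch.mean()
        b = template - template.mean()
        return float((a*b).sum() / np.sqrt((a*a).sum() * (b*b).sum()))

    rng = np.random.default_rng(4)
    template = rng.random((8, 8))
    image = rng.random((32, 32))
    image[10:18, 5:13] = template          # plant the template in the image

    # Exhaustive search for the best-matching location.
    scores = {(i, j): ncc(image[i:i+8, j:j+8], template)
              for i in range(25) for j in range(25)}
    print(max(scores, key=scores.get))     # -> (10, 5)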
Meloni, Roberto; Camilloni, Carlo; Tiana, Guido
2014-02-11
The denatured state of polypeptides and proteins, stabilized by chemical denaturants like urea and guanidine chloride, displays residual secondary structure when studied by nuclear-magnetic-resonance spectroscopy. However, these experimental techniques are weakly sensitive, and thus molecular-dynamics simulations can be useful to complement the experimental findings. To sample the denatured state, we made use of massively parallel computers and of a variant of the replica exchange algorithm, in which the different branches, connected with unbiased replicas, favor the formation and disruption of local secondary structure. The algorithm is applied to the second hairpin of GB1 in water, in urea, and in guanidine chloride. We show with the help of different criteria that the simulations converge to equilibrium. We find that urea and guanidine chloride, besides inducing some polyproline-II structure, have different effects on the hairpin. Urea disrupts completely the native region and stabilizes a state which resembles a random coil, while guanidine chloride has a milder effect.
Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M
2016-01-01
The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes mandatory the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. We present in this review the past and latest tendencies regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem as well as which hardware platforms have been used for running this kind of Soft Computing techniques.
A computational technique for turbulent flow of wastewater sludge.
Bechtel, Tom B
2005-01-01
A computational fluid dynamics (CFD) technique applied to the turbulent flow of wastewater sludge in horizontal, smooth-wall, circular pipes is presented. The technique uses the Crank-Nicolson finite difference method in conjunction with the variable secant method, an algorithm for determining the pressure gradient of the flow. A simple algebraic turbulence model is used. A Bingham-plastic rheological model is used to describe the shear stress/shear rate relationship for the wastewater sludge. The method computes the velocity gradient and head loss, given a fixed volumetric flow, pipe size, and solids concentration. Solids concentrations ranging from 3 to 10% (by weight) and nominal pipe sizes from 0.15 m (6 in.) to 0.36 m (14 in.) are studied. Comparison of the CFD results for water with established values serves to validate the numerical method. The head loss results are presented in terms of a head loss ratio, R(hl), which is the ratio of sludge head loss to water head loss. An empirical equation relating R(hl) to pipe velocity and solids concentration, derived from the results of the CFD calculations, is presented. The results are compared with published values of R(hl) for solids concentrations of 3 and 6%. A new expression for the Fanning friction factor for wastewater sludge flow is also presented.
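The Bingham-plastic constitutive relation referred to is, in its standard form,

    \tau \;=\; \tau_y \;+\; \mu_p \frac{du}{dy} \quad (|\tau| > \tau_y),
    \qquad \frac{du}{dy} = 0 \quad (|\tau| \le \tau_y)

where τ_y is the yield stress and μ_p the plastic viscosity: the sludge does not deform until the local shear stress exceeds τ_y, which is what distinguishes it from a Newtonian fluid like water.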
Computational techniques for inelastic analysis and numerical experiments
International Nuclear Information System (INIS)
Yamada, Y.
1977-01-01
A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical-model formulations as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent microstructural changes which often occur during the operation of structural components at increasingly high temperature over long periods of time. Special considerations are crucial if the analysis is to be extended to the large-strain regime, where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)
Development of computational technique for labeling magnetic flux-surfaces
International Nuclear Information System (INIS)
Nunami, Masanori; Kanno, Ryutaro; Satake, Shinsuke; Hayashi, Takaya; Takamaru, Hisanori
2006-03-01
In recent Large Helical Device (LHD) experiments, radial profiles of ion temperature, electric field, etc. are measured in the m/n=1/1 magnetic island produced by island control coils, where m is the poloidal mode number and n the toroidal mode number. When plasma transport in these radial profiles is analyzed numerically, an average over a magnetic flux-surface in the island is a very useful concept for understanding the transport. For this averaging, a proper labeling of the flux-surfaces is necessary. In general, it is not easy to label the flux-surfaces in a magnetic field with an island, compared with the case of a magnetic field configuration having nested flux-surfaces. In the present paper, we have developed a new computational technique to label the magnetic flux-surfaces. The technique is built on an optimization algorithm known as simulated annealing. The flux-surfaces are discerned by using two labels: one is the classification of the magnetic field structure, i.e., core, island, ergodic, and outside regions; the other is the value of the toroidal magnetic flux. We have applied the technique to an LHD configuration with the m/n=1/1 island, and successfully obtained the discrimination of the magnetic field structure. (author)
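A minimal simulated-annealing skeleton of the kind referred to above; the energy function here is a toy stand-in, not the paper's flux-surface labeling objective:

    import math, random

    def anneal(state, energy, neighbor, t0=1.0, cooling=0.995, steps=5000):
        e = energy(state)
        t = t0
        for _ in range(steps):
            cand = neighbor(state)
            de = energy(cand) - e
            # Accept downhill moves always, uphill moves with Boltzmann prob.
            if de < 0 or random.random() < math.exp(-de / t):
                state, e = cand, e + de
            t *= cooling                     # geometric cooling schedule
        return state, e

    # Toy problem: find x minimizing (x - 3)^2.
    best, e = anneal(0.0, lambda x: (x - 3.0)**2,
                     lambda x: x + random.uniform(-0.5, 0.5))
    print(round(best, 2), round(e, 4))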
Smith, Richard D; Keogh-Brown, Marcus R; Barnett, Tony; Tait, Joyce
2009-11-19
To estimate the potential economic impact of pandemic influenza, associated behavioural responses, school closures, and vaccination on the United Kingdom. A computable general equilibrium model of the UK economy was specified for various combinations of mortality and morbidity from pandemic influenza, vaccine efficacy, school closures, and prophylactic absenteeism using published data. The 2004 UK economy (the most up to date available with suitable economic data). The economic impact of various scenarios with different pandemic severity, vaccination, school closure, and prophylactic absenteeism specified in terms of gross domestic product, output from different economic sectors, and equivalent variation. The costs related to illness alone ranged between 0.5% and 1.0% of gross domestic product (£8.4bn to £16.8bn) for low fatality scenarios, 3.3% and 4.3% (£55.5bn to £72.3bn) for high fatality scenarios, and larger still for an extreme pandemic. School closure increases the economic impact, particularly for mild pandemics. If widespread behavioural change takes place and there is large scale prophylactic absence from work, the economic impact would be notably increased with few health benefits. Vaccination with a pre-pandemic vaccine could save 0.13% to 2.3% of gross domestic product (£2.2bn to £38.6bn); a single dose of a matched vaccine could save 0.3% to 4.3% (£5.0bn to £72.3bn); and two doses of a matched vaccine could limit the overall economic impact to about 1% of gross domestic product for all disease scenarios. Balancing school closure against "business as usual" and obtaining sufficient stocks of effective vaccine are more important factors in determining the economic impact of an influenza pandemic than is the disease itself. Prophylactic absence from work in response to fear of infection can add considerably to the economic impact.
Computer vision techniques for rotorcraft low-altitude flight
Sridhar, Banavar; Cheng, Victor H. L.
1988-01-01
A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.
Iterative reconstruction techniques for computed tomography Part 1: Technical principles
International Nuclear Information System (INIS)
Willemink, Martin J.; Jong, Pim A. de; Leiner, Tim; Nievelstein, Rutger A.J.; Schilham, Arnold M.R.; Heer, Linda M. de; Budde, Ricardo P.J.
2013-01-01
To explain the technical principles of and differences between commercially available iterative reconstruction (IR) algorithms for computed tomography (CT) in non-mathematical terms for radiologists and clinicians. Technical details of the different proprietary IR techniques were distilled from available scientific articles and manufacturers' white papers and were verified by the manufacturers. Clinical results were obtained from a literature search spanning January 2006 to January 2012, including only original research papers concerning IR for CT. IR for CT iteratively reduces noise and artefacts in either image space or raw data, or both. Reported dose reductions ranged from 23% to 76% compared to locally used default filtered back-projection (FBP) settings, with similar noise, artefacts, subjective, and objective image quality. IR has the potential to allow radiation dose reduction while preserving image quality. Disadvantages of IR include a blotchy image appearance and longer computational time. Future studies need to address differences between IR algorithms for clinical low-dose CT. • Iterative reconstruction technology for CT is presented in non-mathematical terms. (orig.)
Computer-aided auscultation learning system for nursing technique instruction.
Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih
2008-01-01
Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
International Nuclear Information System (INIS)
Nimmon, C.C.; McAlister, J.M.; Hickson, B.; Cattell, W.R.
1975-01-01
A comparison of methods for calculating the renal clearance of EDTA from the plasma disappearance curve, after a single injection, has been made. Measurements were made on 38 patients, using external monitoring and venous blood sampling techniques, over a period of 24 h after an injection of 100 μCi of 51Cr-EDTA. The results indicate that the period 3-6 h after injection is suitable for sampling the post-equilibrium part of the plasma disappearance curve for values of the glomerular filtration rate (GFR) in the range 0-140 ml/min. It was also found that, to within the individual measurement errors, the values of the clearance calculated by using the post-equilibrium period only (PES clearance) can be considered to show a constant proportionality to the values calculated by using the entire plasma disappearance curve (total clearance). (author)
Electrostatic afocal-zoom lens design using computer optimization technique
Energy Technology Data Exchange (ETDEWEB)
Sise, Omer, E-mail: omersise@gmail.com
2014-12-15
Highlights: • We describe the detailed design of a five-element electrostatic afocal-zoom lens. • Simplex optimization is used to optimize the lens voltages. • The method can be applied to multi-element electrostatic lenses. - Abstract: Electron optics is the key to the successful operation of electron collision experiments, where well designed electrostatic lenses are needed to drive the electron beam before and after the collision. In this work, the imaging properties and aberration analysis of an electrostatic afocal-zoom lens design were investigated using a computer optimization technique. We have found a whole new range of voltage combinations that had gone unnoticed until now. A full range of voltage ratios and spherical and chromatic aberration coefficients were systematically analyzed over a range of magnifications between 0.3 and 3.2. The grid-shadow evaluation was also employed to show the effect of spherical aberration. The technique is found to be useful for searching for the optimal configuration in a multi-element lens system.
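The abstract identifies the simplex method as the optimizer; a minimal sketch of that idea using SciPy's Nelder-Mead simplex routine follows. The merit function is purely hypothetical (a placeholder magnification model plus a stand-in aberration penalty), not the ray-traced figure of merit a real lens design would use:

```python
import numpy as np
from scipy.optimize import minimize

def merit(voltage_ratios):
    """Hypothetical merit function for an afocal-zoom lens: penalize
    deviation from a target magnification plus a stand-in aberration
    term. A real figure of merit would come from ray tracing."""
    target_magnification = 1.0
    magnification = voltage_ratios[0] / voltage_ratios[1]  # placeholder model
    aberration_penalty = 0.1 * np.sum(voltage_ratios ** 2)
    return (magnification - target_magnification) ** 2 + aberration_penalty

initial_ratios = np.array([2.0, 1.5, 0.8])
result = minimize(merit, initial_ratios, method="Nelder-Mead")
print(result.x, result.fun)  # optimized voltage ratios and merit value
```

Nelder-Mead is derivative-free, which suits lens design where the merit function comes from a numerical field solver rather than a closed-form expression.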
Directory of Open Access Journals (Sweden)
Hua KL
2015-08-01
Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning the classification performance of a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network
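The study's network architectures are not specified in the abstract; as a hedged illustration of a convolutional neural network for nodule patches, here is a minimal PyTorch sketch (layer sizes, patch size, and the two-class output are assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    """Minimal 2-D CNN for benign/malignant nodule patches (sketch).
    Input: 1-channel CT patches of 64x64 pixels; output: 2 class logits.
    The architecture is illustrative, not the network from the study."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = NoduleCNN()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 dummy patches
print(logits.shape)  # torch.Size([4, 2])
```

The point made in the abstract is visible in the code: the convolutional layers learn their own features, so there is no hand-tuned feature computing step to propagate errors down the pipeline.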
Computer processing of the scintigraphic image using digital filtering techniques
International Nuclear Information System (INIS)
Matsuo, Michimasa
1976-01-01
The theory of digital filtering was studied as a method for the computer processing of scintigraphic images. The characteristics and design techniques of finite impulse response (FIR) digital filters with linear phase were examined using the z-transform. The conventional data processing method, smoothing, could be recognized as one kind of linear-phase FIR low-pass digital filtering. Ten representative FIR low-pass digital filters with various cut-off frequencies were scrutinized in the frequency domain in one and two dimensions. These filters were applied to phantom studies with cold targets, using a Scinticamera-Minicomputer on-line System. These studies revealed that the resultant images had a direct connection with the magnitude response of the filter; that is, they could be estimated fairly well from the frequency response of the digital filter used. The filter estimated from phantom studies as optimal for liver scintigrams using 198Au-colloid was successfully applied in clinical use for detecting true cold lesions and, at the same time, for eliminating spurious images. (J.P.N.)
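As a small illustration of the point that smoothing is linear-phase FIR low-pass filtering, the sketch below applies a symmetric 3x3 averaging kernel to a synthetic noisy image and computes the magnitude response of the 1-D prototype (the kernel and the Poisson test image are illustrative, not the ten filters studied in the paper):

```python
import numpy as np
from scipy.signal import convolve2d

# A 3x3 uniform smoothing kernel is the simplest FIR low-pass filter;
# its symmetry is what guarantees the linear phase discussed above.
kernel = np.ones((3, 3)) / 9.0

rng = np.random.default_rng(0)
image = rng.poisson(lam=50, size=(64, 64)).astype(float)  # noisy "scintigram"

smoothed = convolve2d(image, kernel, mode="same", boundary="symm")

# Magnitude response of the separable 1-D prototype [1,1,1]/3 shows
# the low-pass behavior from which the image result can be estimated.
freqs = np.fft.rfftfreq(64)
response = np.abs(np.fft.rfft(np.array([1.0, 1.0, 1.0]) / 3.0, n=64))
```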
A computational technique to measure fracture callus in radiographs.
Lujan, Trevor J; Madey, Steven M; Fitzpatrick, Dan C; Byrd, Gregory D; Sanderson, Jason M; Bottlang, Michael
2010-03-03
Callus formation occurs in the presence of secondary bone healing and has relevance to the fracture's mechanical environment. An objective image processing algorithm was developed to standardize the quantitative measurement of periosteal callus area in plain radiographs of long bone fractures. Algorithm accuracy and sensitivity were evaluated using surrogate models. For algorithm validation, callus formation on clinical radiographs was measured manually by orthopaedic surgeons and compared to non-clinicians using the algorithm. The algorithm measured the projected area of surrogate calluses with less than 5% error. However, error will increase when analyzing very small areas of callus and when using radiographs with low image resolution (i.e. 100 pixels per inch). The callus size extracted by the algorithm correlated well with the callus size outlined by the surgeons (R² = 0.94, p < 0.001). Furthermore, compared to clinician results, the algorithm yielded results with five times less inter-observer variance. This computational technique provides a reliable and efficient method to quantify the secondary bone healing response.
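The published algorithm involves a full segmentation pipeline; the fragment below sketches only the final step, converting a count of segmented callus pixels into physical area, and makes the reported resolution sensitivity concrete (the function name and threshold-based segmentation are illustrative assumptions):

```python
import numpy as np

def callus_area_mm2(image, threshold, pixels_per_inch):
    """Toy estimate of projected callus area: count pixels above an
    intensity threshold and scale by the pixel footprint. The published
    algorithm uses a more involved segmentation; this shows only the
    area computation and its dependence on image resolution."""
    pixel_size_mm = 25.4 / pixels_per_inch        # pixel edge length in mm
    callus_pixels = np.count_nonzero(image > threshold)
    return callus_pixels * pixel_size_mm ** 2

# At 100 ppi each pixel covers ~0.065 mm^2, so a small callus is
# resolved by few pixels -- consistent with the reported error growth
# for small areas and low-resolution radiographs.
```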
Borge, Javier
2015-01-01
G, G°, ΔrG, ΔrG°, ΔG, and ΔG° are essential quantities for mastering chemical equilibrium. Although the number of publications devoted to explaining these items is extremely high, it seems that they do not produce the desired effect because some articles and textbooks are still being written with…
International Nuclear Information System (INIS)
Ko, Jong-Hwan.
1993-01-01
Firstly, this study investigates the causes of sectoral growth and structural change in the Korean economy. Secondly, it develops a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done using both Input-Output decomposition analysis and a Computable General Equilibrium model (CGE Model). The CGE Model eliminates the disadvantages of the IO Model and allows the investigation of the interdependence of the various energy sectors with the economy. The Social Accounting Matrix serves as the data basis of the CGE Model. Simulation experiments have been carried out with the help of the CGE Model, indicating the likely impact of an oil price shock on the economy, both sectorally and in aggregate. (orig.) [de
Projection computation based on pixel in simultaneous algebraic reconstruction technique
International Nuclear Information System (INIS)
Wang Xu; Chen Zhiqiang; Xiong Hua; Zhang Li
2005-01-01
SART is an important algorithm for image reconstruction, in which the projection computation takes over half of the reconstruction time. An efficient way to compute the projection coefficient matrix, together with memory optimization, is presented in this paper. Unlike the normal method, projection lines are located based on every pixel, and the subsequent projection coefficient computation can make use of these results. The correlation of projection lines and pixels can be used to optimize the computation. (authors)
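The paper's exact coefficient scheme is not reproduced in the abstract; the sketch below shows the general pixel-driven idea it describes — locating the projection line through each pixel and spreading a unit weight over the two nearest detector bins — for a single parallel-beam view (the geometry, interpolation weighting, and dense matrix storage are simplifying assumptions):

```python
import numpy as np

def pixel_driven_coefficients(n, n_det, theta):
    """Pixel-driven projection weights for one view angle (sketch).

    For each pixel centre of an n x n image, compute where it projects
    onto a detector of n_det bins at angle theta, and split a unit
    weight between the two nearest bins by linear interpolation.
    Returns a dense (n_det, n*n) coefficient matrix; a real SART code
    would store this sparsely or regenerate rows on the fly.
    """
    ys, xs = np.mgrid[0:n, 0:n]
    # Pixel centres relative to the image centre.
    cx = xs.ravel() - (n - 1) / 2.0
    cy = ys.ravel() - (n - 1) / 2.0
    # Signed detector coordinate of each pixel centre.
    t = cx * np.cos(theta) + cy * np.sin(theta) + (n_det - 1) / 2.0
    lo = np.floor(t).astype(int)
    frac = t - lo
    A = np.zeros((n_det, n * n))
    for j, (b, f) in enumerate(zip(lo, frac)):
        if 0 <= b < n_det:
            A[b, j] += 1.0 - f
        if 0 <= b + 1 < n_det:
            A[b + 1, j] += f
    return A

A = pixel_driven_coefficients(n=8, n_det=12, theta=np.pi / 6)
```

Because the loop is organized per pixel rather than per ray, each pixel's detector coordinate is computed once and reused, which is the kind of reuse the abstract credits for the speedup.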
The analysis of gastric function using computational techniques
International Nuclear Information System (INIS)
Young, Paul
2002-01-01
The work presented in this thesis was carried out at the Magnetic Resonance Centre, Department of Physics and Astronomy, University of Nottingham, between October 1996 and June 2000. This thesis describes the application of computerised techniques to the analysis of gastric function, in relation to Magnetic Resonance Imaging data. The implementation of a computer program enabling the measurement of motility in the lower stomach is described in Chapter 6. This method allowed the dimensional reduction of multi-slice image data sets into a 'Motility Plot', from which the motility parameters - the frequency, velocity and depth of contractions - could be measured. The technique was found to be simple, accurate and involved substantial time savings, when compared to manual analysis. The program was subsequently used in the measurement of motility in three separate studies, described in Chapter 7. In Study 1, four different meal types of varying viscosity and nutrient value were consumed by 12 volunteers. The aim of the study was (i) to assess the feasibility of using the motility program in a volunteer study and (ii) to determine the effects of the meals on motility. The results showed that the parameters were remarkably consistent between the 4 meals. However, for each meal, velocity and percentage occlusion were found to increase as contractions propagated along the antrum. The first clinical application of the motility program was carried out in Study 2. Motility from three patients was measured, after they had been referred to the Magnetic Resonance Centre with gastric problems. The results showed that one of the patients displayed an irregular motility, compared to the results of the volunteer study. This result had not been observed using other investigative techniques. In Study 3, motility was measured in Low Viscosity and High Viscosity liquid/solid meals, with the solid particulate consisting of agar beads of varying breakdown strength. The results showed that
Bilgin, Mehmet Selim; Baytaroğlu, Ebru Nur; Erdem, Ali; Dilber, Erhan
2016-01-01
The aim of this review was to investigate the usage of computer-aided design/computer-aided manufacture (CAD/CAM), such as milling and rapid prototyping (RP) technologies, for removable denture fabrication. An electronic search was conducted in the PubMed/MEDLINE, ScienceDirect, Google Scholar, and Web of Science databases. Databases were searched from 1987 to 2014. The search was performed using a variety of keywords including CAD/CAM, complete/partial dentures, RP, rapid manufacturing, digitally designed, milled, computerized, and machined. The identified developments (in chronological order), techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication are summarized. An initial search using these keywords identified 78 publications. The abstracts of these 78 articles were screened for relevance to the main topic, and 52 publications were selected for detailed reading. The full text of these articles was obtained and reviewed in detail. In total, 40 articles that discussed the techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication were incorporated in this review; 16 of the papers are summarized in the table. Following review of all relevant publications, it can be concluded that current innovations and technological developments of CAD/CAM and RP allow the digital planning and manufacturing of removable dentures from start to finish. According to the literature review, CAD/CAM techniques and supportive maxillomandibular relationship transfer devices are growing fast. In the near future, fabricating removable dentures may become a matter of medical informatics rather than manual technical procedures. However, the methods still have several limitations for now. PMID:27095912
Computer vision techniques for rotorcraft low altitude flight
Sridhar, Banavar
1990-01-01
Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task, and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles presents challenging problems. Research is described which applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.
Computer technique for correction of nonhomogeneous distribution in radiologic images
International Nuclear Information System (INIS)
Florian, Rogerio V.; Frere, Annie F.; Schiable, Homero; Marques, Paulo M.A.; Marques, Marcio A.
1996-01-01
An image processing technique to compensate for the heel effect in medical images is presented. It is reported that the technique can improve the detection of structures by homogenizing the background, and that it can be used with any radiologic system
Neutron visual sensing techniques making good use of computer science
International Nuclear Information System (INIS)
Kureta, Masatoshi
2009-01-01
Neutron visual sensing is one of the nondestructive visualization and image-sensing techniques. In this article, some advanced neutron visual sensing techniques are introduced. The most up-to-date high-speed neutron radiography, neutron 3D CT, high-speed scanning neutron 3D/4D CT, and multi-beam neutron 4D CT techniques are included, with some fundamental application results. Oil flow in a car engine was visualized by the high-speed neutron radiography technique to clarify previously unknown phenomena. 4D visualization of painted sand in an hourglass was reported as a demonstration of the high-speed scanning neutron 4D CT technique. The purpose of developing these techniques is to clarify unknown phenomena and to measure void fraction, velocity, etc. at high speed or in 3D/4D for many industrial applications. (author)
Wei, Xuelei; Dong, Fuhui
2011-12-01
To review recent advances in the research and application of computer-aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer-aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer-aided forming techniques for bone scaffold construction using various scaffold materials, based on computer-aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer-aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.
Larson, V. H.
1982-01-01
The basic equations used to describe the physical phenomena in a Stirling cycle engine are the general energy equations and the equations for the conservation of mass and momentum. These equations, together with the equation of state, an analytical expression for the gas velocity, and an equation for mesh temperature, are used in this computer study of Stirling cycle characteristics. The partial differential equations describing the physical phenomena that occur in a Stirling cycle engine are of the hyperbolic type. Hyperbolic equations have real characteristic lines, and by utilizing appropriate points along these curved lines the partial differential equations can be reduced to ordinary differential equations. These equations are solved numerically using a fourth-fifth order Runge-Kutta integration technique.
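The engine equations themselves are not given in the abstract, so the sketch below integrates a stand-in two-variable ODE system along a characteristic with SciPy's fourth-fifth order Runge-Kutta solver (RK45); the right-hand side and its coefficients are purely hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Along a characteristic line the PDE system reduces to ODEs of the
# form dy/ds = f(s, y). As a stand-in for the engine equations,
# integrate a toy two-variable system with the same fourth-fifth
# order Runge-Kutta approach (RK45).
def rhs(s, y):
    velocity, temperature = y
    return [-0.5 * velocity + 0.1 * temperature,   # hypothetical coupling
            -0.2 * (temperature - 300.0)]          # relaxation toward 300 K

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 350.0], method="RK45")
print(sol.y[:, -1])  # state at the end of the characteristic segment
```

The embedded fourth-fifth order pair lets the solver estimate local error and adapt its step size, which is why such schemes are a common choice for characteristic-line integration.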
Analysis of Piezoelectric Structural Sensors with Emergent Computing Techniques
Ramers, Douglas L.
2005-01-01
The purpose of this project was to interpret the results of some tests performed earlier this year and to demonstrate a possible use of emergence in computing to solve IVHM problems. The test data used were collected with piezoelectric sensors to detect mechanical changes in structures. The project team included Dr. Doug Ramers and Dr. Abdul Jallob of the Summer Faculty Fellowship Program, Arnaldo Colon-Lopez, a student intern from the University of Puerto Rico of Turabo, and John Lassister and Bob Engberg of the Structural and Dynamics Test Group. The tests were performed by Bob Engberg to compare the performance of two types of piezoelectric (piezo) sensors, Pb(Zr1-xTix)O3, which we will label PZT, and Pb(Zn1/3Nb2/3)O3-PbTiO3, which we will label SCP. The tests were conducted under varying temperature and pressure conditions. One set of tests was done by varying water pressure inside an aluminum liner covered with carbon-fiber composite layers (a cylindrical "bottle" with domed ends), and the other by varying temperatures down to cryogenic levels on some specially prepared composite panels. This report discusses the data from the pressure study. The study of the temperature results was not completed in time for this report. The particular sensing done with these piezo sensors is accomplished by the sensor generating a controlled vibration that is transmitted into the structure to which the sensor is attached, with the same sensor then responding to the induced vibration of the structure. There is a relationship between the mechanical impedance of the structure and the resulting electrical impedance produced in the piezo sensor. The impedance is also a function of the excitation frequency. Changes in the real part of the impedance signature relative to an original reference signature indicate a change in the coupled structure that could be the result of damage or strain. The water pressure tests were conducted by
Computer-assisted techniques to evaluate fringe patterns
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
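As a one-dimensional illustration of the carrier-fringe (Fourier transform) idea mentioned above, the sketch below synthesizes a fringe pattern with a known phase, isolates the carrier sideband in the Fourier domain, and recovers the phase; the carrier frequency, window width, and test phase are arbitrary choices, not parameters from the paper:

```python
import numpy as np

n, f0 = 512, 32
x = np.arange(n)
phase = 2.0 * np.sin(2 * np.pi * x / n)                  # phase to recover
fringes = 1.0 + np.cos(2 * np.pi * f0 * x / n + phase)   # fringe intensity

spectrum = np.fft.fft(fringes)
mask = np.zeros(n)
mask[f0 - 10:f0 + 10] = 1.0               # keep only the +f0 sideband
analytic = np.fft.ifft(spectrum * mask)   # ~ 0.5*exp(i*(carrier + phase))

total = np.unwrap(np.angle(analytic))     # unwrapped carrier + phase
recovered = total - 2 * np.pi * f0 * x / n
recovered -= recovered.mean() - phase.mean()   # remove constant offset
```

Shifting the spectrum of the modulated signal away from the DC term is precisely the spatial-heterodyning step; the phase, and hence the strain, falls out of the argument of the band-passed analytic signal.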
Large-scale computing techniques for complex system simulations
Dubitzky, Werner; Schott, Bernard
2012-01-01
Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and
Seismic activity prediction using computational intelligence techniques in northern Pakistan
Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat
2017-10-01
An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes the interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.
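The abstract says the eight parameters were ranked by information gain; a minimal sketch of that computation for one discretized parameter against a binary earthquake label is shown below (the binning scheme and function names are assumptions, not the authors' procedure):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, bins=4):
    """Information gain of one (discretized) seismic parameter with
    respect to a binary earthquake/no-earthquake label."""
    edges = np.histogram_bin_edges(feature, bins)[1:-1]
    binned = np.digitize(feature, edges)
    gain = entropy(labels)
    for value in np.unique(binned):
        mask = binned == value
        gain -= mask.mean() * entropy(labels[mask])
    return gain

# Toy usage with a synthetic, weakly informative feature.
rng = np.random.default_rng(1)
feature = rng.normal(size=200)
labels = (feature + rng.normal(scale=0.5, size=200)) > 0
print(information_gain(feature, labels))
```

Parameters whose gain is near zero add little beyond the class prior, which is presumably how two of the eight parameters were discarded.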
Microwave integrated circuit mask design, using computer aided microfilm techniques
Energy Technology Data Exchange (ETDEWEB)
Reymond, J.M.; Batliwala, E.R.; Ajose, S.O.
1977-01-01
This paper examines the possibility of using a computer interfaced with a precision film C.R.T. information retrieval system, to produce photomasks suitable for the production of microwave integrated circuits.
Advanced computer graphics techniques as applied to the nuclear industry
International Nuclear Information System (INIS)
Thomas, J.J.; Koontz, A.S.
1985-08-01
Computer graphics is a rapidly advancing technological area in computer science. This is being motivated by increased hardware capability coupled with reduced hardware costs. This paper will cover six topics in computer graphics, with examples forecasting how each of these capabilities could be used in the nuclear industry. These topics are: (1) Image Realism with Surfaces and Transparency; (2) Computer Graphics Motion; (3) Graphics Resolution Issues and Examples; (4) Iconic Interaction; (5) Graphic Workstations; and (6) Data Fusion - illustrating data coming from numerous sources, for display through high dimensional, greater than 3-D, graphics. All topics will be discussed using extensive examples with slides, video tapes, and movies. Illustrations have been omitted from the paper due to the complexity of color reproduction. 11 refs., 2 figs., 3 tabs
Ruiz-Muelle, Ana Belén; Oña-Burgos, Pascual; Ortuño, Manuel A; Oltra, J Enrique; Rodríguez-García, Ignacio; Fernández, Ignacio
2016-02-12
The synthesis and structural characterization of allenyl titanocene(IV) [TiClCp2(CH=C=CH2)] 3 and propargyl titanocene(IV) [TiClCp2(CH2-C≡C-(CH2)4CH3)] 9 have been described for the first time. Advanced NMR methods, including diffusion NMR methods (diffusion pulsed field gradient stimulated spin echo (PFG-STE) and DOSY), have been applied and established that these organometallics are monomers in THF solution, with hydrodynamic radii (from the Stokes-Einstein equation) of 3.5 and 4.1 Å for 3 and 9, respectively. Full 1H, 13C, Δ1H, and Δ13C NMR data are given, and through the analysis of the Ramsey equation, the first electronic insights into these derivatives are provided. In solution, they are involved in their respective metallotropic allenyl-propargyl equilibria which, after quenching experiments with aromatic and aliphatic aldehydes, ketones, and protonating agents, always give the propargyl products P (when carbonyls are employed) or allenyl products A (when a proton source is added) as the major isomers. In all the cases assayed, the ratio of products suggests that the metallotropic equilibrium should be faster than the reactions of 3 and 9 with electrophiles. Indeed, DFT calculations predict lower Gibbs energy barriers for the metallotropic equilibrium, thus confirming dynamic kinetic resolution.
Application of computer technique in the reconstruction of Chinese ancient buildings
Li, Deren; Yang, Jie; Zhu, Yixuan
2003-01-01
This paper introduces the computer-based assembly and simulation of ancient buildings. Pioneering research was carried out by surveying and mapping investigators, who described ancient Chinese timber buildings by 3D frame graphs with computers. However, users can only understand the structural layers and the assembly process of these buildings if the frame graphs are processed further by computer. This can be implemented by computer simulation techniques, which display the raw data on the screen of a computer and interactively manage them by combining technologies from computer graphics and image processing, multimedia technology, artificial intelligence, highly parallel real-time computation, and human behavior science. This paper presents the implementation procedure of simulation for large-sized wooden buildings, as well as the 3D dynamic assembly of these buildings under the 3DS MAX environment. The results of the computer simulation are also shown in the paper.
Jameson, A. Keith
Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on Le Chatelier's principle includes objectives, prerequisites, pretest, instructions for executing the computer program, and…
Spezia, Riccardo; Martínez-Nuñez, Emilio; Vazquez, Saulo; Hase, William L
2017-04-28
In this Introduction, we present the basic problems of non-statistical and non-equilibrium phenomena related to the papers collected in this themed issue. Over the past few years, significant advances in both computing power and the development of theories have allowed the study of larger systems, increasing the time length of simulations and improving the quality of potential energy surfaces. In particular, the possibility of using quantum chemistry to calculate energies and forces 'on the fly' has paved the way to directly studying chemical reactions. This has provided a valuable tool to explore molecular mechanisms at given temperatures and energies and to see whether these reactive trajectories follow statistical laws and/or minimum energy pathways. This themed issue collects different aspects of the problem and gives an overview of recent works and developments in different contexts, from the gas phase to the condensed phase to excited states. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'.
Energy Technology Data Exchange (ETDEWEB)
Joh, Seung Hun; Dellink, Rob; Nam, Yunmi; Kim, Yong Gun; Song, Yang Hoon [Korea Environment Institute, Seoul (Korea)
2000-12-01
In the beginning of the 21st century, climate change is one of the hottest issues in both the international and domestic environmental arenas. During the COP6 meeting held in The Hague, over 10,000 people from around the world gathered. This report is part of a series of policy studies on climate change in the context of Korea. This study addresses the interactions of economy and environment in a perfect-foresight dynamic computable general equilibrium model, with a focus on greenhouse gas mitigation strategy in Korea. The primary goal of this study is to evaluate greenhouse gas mitigation portfolios differing in timing and magnitude, with a particular focus on developing a methodology to integrate bottom-up information on technical measures to reduce pollution into a top-down multi-sectoral computable general equilibrium framework. As a non-Annex I country, Korea has been under strong pressure to declare a GHG reduction commitment. Of particular concern are the economic consequences GHG mitigation would impose on society. Various economic assessments have been carried out to address the issue so far, including analyses of cost, ancillary benefits, and emissions trading. In this vein, this study on GHG mitigation commitments is a timely contribution to the climate change policy field. The empirical results, available next year, should be in high demand in this situation. 62 refs., 13 figs., 9 tabs.
International Nuclear Information System (INIS)
Scrieciu, S. Serban
2007-01-01
The search for methods of assessment that best evaluate and integrate the trade-offs and interactions between the economic, environmental and social components of development has been receiving new impetus due to the requirement that sustainability concerns be incorporated into the policy formulation process. A paper forthcoming in Ecological Economics (Boehringer, C., Loeschel, A., in press. Computable general equilibrium models for sustainability impact assessment: status quo and prospects, Ecological Economics.) claims that Computable General Equilibrium (CGE) models may potentially represent the much needed 'back-bone' tool to carry out reliable integrated quantitative Sustainability Impact Assessments (SIAs). While acknowledging the usefulness of CGE models for some dimensions of SIA, this commentary questions the legitimacy of employing this particular economic modelling tool as a single integrating modelling framework for a comprehensive evaluation of the multi-dimensional, dynamic and complex interactions between policy and sustainability. It discusses several inherent dangers associated with the advocated prospects for the CGE modelling approach to contribute to comprehensive and reliable sustainability impact assessments. The paper warns that this reductionist viewpoint may seriously infringe upon the basic values underpinning the SIA process, namely a transparent, heterogeneous, balanced, inter-disciplinary, consultative and participatory approach to policy evaluation and building of the evidence base. (author)
Mansbach, Rachael; Ferguson, Andrew
Self-assembling π-conjugated peptides are attractive candidates for the fabrication of bioelectronic materials possessing optoelectronic properties due to electron delocalization over the conjugated peptide groups. We present a computational and theoretical study of an experimentally realized optoelectronic peptide that displays triggerable assembly at low pH, to resolve the microscopic effects of flow and pH on the non-equilibrium morphology and kinetics of assembly. Using a combination of molecular dynamics simulations and hydrodynamic modeling, we quantify the time and length scales at which the convective flows employed in directed assembly compete with microscopic diffusion to influence assembly. We also show that there is a critical pH below which aggregation proceeds irreversibly, and quantify the relationship between pH, charge density, and aggregate size. Our work provides new fundamental understanding of the effects of pH and flow on non-equilibrium π-conjugated peptide assembly, and lays the groundwork for the rational manipulation of environmental conditions and peptide chemistry to control assembly and the attendant emergent optoelectronic properties. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award # DE-SC0011847, and by the Computational Science and Engineering Fellowship from the University of Illinois at Urbana-Champaign.
New technique for determining unavailability of computer controlled safety systems
International Nuclear Information System (INIS)
Fryer, M.O.; Bruske, S.Z.
1984-04-01
The availability of a safety system for a fusion reactor is determined. A fusion reactor processes tritium and requires an Emergency Tritium Cleanup (ETC) system for accidental tritium releases. The ETC is computer controlled and because of its complexity, is an excellent candidate for this analysis. The ETC system unavailability, for preliminary untested software, is calculated based on different assumptions about operator response. These assumptions are: (a) the operator shuts down the system after the first indication of plant failure; (b) the operator shuts down the system after following optimized failure verification procedures; or (c) the operator is taken out of the decision process, and the computer uses the optimized failure verification procedures
Security Techniques for protecting data in Cloud Computing
Maddineni, Venkata Sravan Kumar; Ragi, Shivashanker
2012-01-01
Context: From the past few years, there has been a rapid progress in Cloud Computing. With the increasing number of companies resorting to use resources in the Cloud, there is a necessity for protecting the data of various users using centralized resources. Some major challenges that are being faced by Cloud Computing are to secure, protect and process the data which is the property of the user. Aims and Objectives: The main aim of this research is to understand the security threats and ident...
Applications of NLP Techniques to Computer-Assisted Authoring of Test Items for Elementary Chinese
Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun
2010-01-01
The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion about the applications of NLP techniques for computer assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…
Computational techniques in tribology and material science at the atomic level
Ferrante, J.; Bozzolo, G. H.
1992-01-01
Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.
Mulder, T. E.; Baatsen, M. L.J.; Wubs, F.W.; Dijkstra, H. A.
2017-01-01
In the field of paleoceanographic modeling, the different positioning of Earth's continental configurations is often a major challenge for obtaining equilibrium ocean flow solutions. In this paper, we introduce numerical parameter continuation techniques to compute equilibrium solutions of ocean
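The record is truncated, but the continuation idea it names is standard; the toy sketch below follows one equilibrium branch of a scalar problem f(x; p) = x^3 - x - p = 0 by stepping the parameter and re-converging with Newton's method at each step (the test function is invented, and the pseudo-arclength machinery needed near fold points is omitted):

```python
import numpy as np

def newton(f, jac, x, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0 at a fixed parameter value."""
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton failed to converge")

# Natural-parameter continuation: march p in small steps, restarting
# Newton from the previous equilibrium each time. Ocean models do the
# same with large sparse Jacobians (and pseudo-arclength steps near
# folds, which this bare-bones sketch omits).
p_values = np.linspace(-1.0, 0.3, 27)
x = np.array([-1.3])  # starting guess on one equilibrium branch
branch = []
for p in p_values:
    f = lambda x: x**3 - x - p
    jac = lambda x: np.diag(3 * x**2 - 1)
    x = newton(f, jac, x)
    branch.append((p, x[0]))
```

Reusing the previous solution as the initial guess is what makes the method robust: each Newton solve starts close to the new equilibrium, so only a few iterations are needed per parameter step.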
Can markets compute equilibria?
Monroe , Hunter K
2009-01-01
Recent turmoil in financial and commodities markets has renewed questions regarding how well markets discover equilibrium prices, particularly when those markets are highly complex. A relatively new critique questions whether markets can realistically find equilibrium prices if computers cannot. For instance, in a simple exchange economy with Leontief preferences, the time required to compute equilibrium prices using the fastest known techniques is an exponential function of the number of goods. Furthermore, no efficient technique for this problem exists if a famous mathematical conjecture is
Software Engineering Techniques for Computer-Aided Learning.
Ibrahim, Bertrand
1989-01-01
Describes the process for developing tutorials for computer-aided learning (CAL) using a programing language rather than an authoring system. The workstation used is described, the use of graphics is discussed, the role of a local area network (LAN) is explained, and future plans are discussed. (five references) (LRW)
International Nuclear Information System (INIS)
Chen, R.L.W.
1981-01-01
The use of an equation obtained earlier for computing electrical conductivity may be extended to partially ionized gases which depart from the Saha equation, provided the electron velocity distributions do not deviate from the Maxwellian distribution. (author)
International Nuclear Information System (INIS)
Pratt, L.R.; Haan, S.W.
1981-01-01
The theory of the previous paper is used to predict anomalous size effects observed for computer simulated liquid Ar. The theoretical results for the boundary condition induced anisotropy of two-particle correlations are found to be large, and in excellent agreement with the computer experimental data of Mandell for densities near the Ar triple point density. The agreement is less good at higher densities
The development of a computer technique for the investigation of reactor lattice parameters
International Nuclear Information System (INIS)
Joubert, W.R.
1982-01-01
An integrated computer technique was developed whereby all the computer programmes needed to calculate reactor lattice parameters from basic neutron data could be combined in one system. The theory of the computer programmes is explained in detail. Results are given and compared with experimental values as well as with those calculated with a standard system.
International Nuclear Information System (INIS)
Dai Guiliang
1988-01-01
The increasing need for computers in the area of nuclear science and technology is described. The current status of commercially available computer products of different scales on the world market is briefly reviewed. A survey of some noteworthy techniques is given from the viewpoint of computer applications in nuclear science research laboratories.
The practical use of computer graphics techniques for site characterization
International Nuclear Information System (INIS)
Tencer, B.; Newell, J.C.
1982-01-01
In this paper the authors describe the approach utilized by Roy F. Weston, Inc. (WESTON) to analyze and characterize data relative to a specific site, and the computerized graphical techniques developed to display site characterization data. These techniques reduce massive amounts of tabular data to a limited number of graphics easily understood by both the public and policy-level decision makers. First, the authors describe the general design of the system; then the application of this system to a low-level radioactive waste site, followed by a description of an application to an uncontrolled hazardous waste site.
Application of computational intelligence techniques for load shedding in power systems: A review
International Nuclear Information System (INIS)
Laghari, J.A.; Mokhlis, H.; Bakar, A.H.A.; Mohamad, Hasmaini
2013-01-01
Highlights: • The power system blackout history of the last two decades is presented. • Conventional load shedding techniques, their types and limitations are presented. • Applications of intelligent techniques in load shedding are presented. • Intelligent techniques include ANN, fuzzy logic, ANFIS, genetic algorithms and PSO. • A discussion and comparison of these techniques is provided. - Abstract: Recent blackouts around the world question the reliability of conventional and adaptive load shedding techniques in avoiding such power outages. To address this issue, reliable techniques are required to provide fast and accurate load shedding to prevent collapse in the power system. Computational intelligence techniques, due to their robustness and flexibility in dealing with complex non-linear systems, could be an option in addressing this problem. Computational intelligence includes techniques like artificial neural networks, genetic algorithms, fuzzy logic control, adaptive neuro-fuzzy inference systems, and particle swarm optimization. Research in these techniques is being undertaken in order to discover means for more efficient and reliable load shedding. This paper provides an overview of these techniques as applied to load shedding in a power system. This paper also compares the advantages of computational intelligence techniques over conventional load shedding techniques. Finally, this paper discusses the limitations of computational intelligence techniques, which restrict their usage in real-time load shedding.
A technique for computing bowing reactivity feedback in LMFBR's
International Nuclear Information System (INIS)
Finck, P.J.
1987-01-01
During normal or accidental transients occurring in an LMFBR core, the assemblies and their support structure are subjected to significant thermal gradients, which induce differential thermal expansion of the hexcan walls and differential displacement of the assembly support structure. These displacements, combined with the creep and swelling of structural materials, remain quite small, but the resulting reactivity changes constitute a significant component of the reactivity feedback coefficients used in safety analyses. It would be prohibitive to compute the reactivity changes due to all transients. Thus, the usual practice is to generate reactivity gradient tables. The purpose of the work presented here is twofold: to develop and validate an efficient and accurate scheme for computing these reactivity tables, and to qualify this scheme.
Low Power system Design techniques for mobile computers
Havinga, Paul J.M.; Smit, Gerardus Johannes Maria
1997-01-01
Portable products are being used increasingly. Because these systems are battery powered, reducing power consumption is vital. In this report we give the properties of low power design and techniques to exploit them on the architecture of the system. We focus on: minimizing capacitance, avoiding
Optimizing Nuclear Reactor Operation Using Soft Computing Techniques
Entzinger, J.O.; Ruan, D.; Kahraman, Cengiz
2006-01-01
The strict safety regulations for nuclear reactor control make it difficult to implement new control techniques such as fuzzy logic control (FLC). FLC however, can provide very desirable advantages over classical control, like robustness, adaptation and the capability to include human experience into
An appraisal of computational techniques for transient heat conduction equation
International Nuclear Information System (INIS)
Kant, T.
1983-01-01
A semi-discretization procedure in which the ''space'' dimension is discretized by the finite element method is emphasized for transient problems. This standard methodology transforms the space-time partial differential equation (PDE) system into a set of ordinary differential equations (ODEs) in time. Existing methods for transient heat conduction calculations are then reviewed. The existence of two general classes of time integration schemes, implicit and explicit, is noted. The numerical stability characteristics of these two methods are elucidated. Implicit methods are noted to be numerically stable, permitting large time steps, but the cost per step is high. On the other hand, explicit schemes are inexpensive per step, but a small step size is required. The low computational cost of explicit schemes makes them very attractive for nonlinear problems. However, numerical stability considerations requiring the use of very small time steps stand in the way of their general adoption. The effectiveness of the fourth-order Runge-Kutta-Gill explicit integrator is then numerically evaluated. Finally, some very recent work is discussed on the development of computational algorithms which not only achieve unconditional stability, high accuracy and convergence but involve computations on element-level matrix equations only. This development is considered to be very significant in the light of our experience gained from simple heat conduction calculations. We conclude that such algorithms have the potential for further developments leading to economical methods for the general transient analysis of complex physical systems. (orig.)
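To make the stability contrast concrete, here is a minimal explicit (forward-time, centred-space) step for the 1-D heat equation; the conditional-stability bound r = alpha*dt/dx^2 <= 1/2 is exactly the small-time-step restriction the review attributes to explicit schemes (the grid and material values are arbitrary):

```python
import numpy as np

# Explicit (FTCS) stepping of the 1-D heat equation u_t = alpha * u_xx.
# Cheap per step, but conditionally stable: r = alpha*dt/dx**2 <= 0.5.
alpha, dx = 1.0e-5, 0.01
dt = 0.4 * dx**2 / alpha          # choose dt so that r = 0.4 < 0.5
r = alpha * dt / dx**2

u = np.zeros(101)
u[40:60] = 100.0                  # initial hot region
for _ in range(500):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    # endpoints held at u = 0 (Dirichlet boundary conditions)
```

An implicit (backward Euler) step would instead solve a tridiagonal system each time step, paying more per step in exchange for unconditional stability.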
Computational techniques in gamma-ray skyshine analysis
International Nuclear Information System (INIS)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs
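The abstract notes that SKY was modified to use Gauss quadrature; the snippet below shows the generic Gauss-Legendre mechanics on a stand-in attenuated angular integrand (the kernel is illustrative, not the actual SILOGP single-scatter kernel):

```python
import numpy as np

# Gauss-Legendre quadrature evaluates smooth single-scatter integrals
# (over scattering angle or scatter-point position) with few points.
nodes, weights = np.polynomial.legendre.leggauss(16)

def integrate(f, a, b):
    """Gauss-Legendre quadrature of f on [a, b]."""
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: an exponentially attenuated angular kernel (illustrative).
estimate = integrate(lambda t: np.exp(-2.0 / np.sin(t)) * np.sin(t),
                     a=0.1, b=np.pi / 2)
print(estimate)
```

For a fixed accuracy, replacing a uniform-step rule with Gauss quadrature typically cuts the number of integrand evaluations substantially, which is presumably why the modification was worthwhile.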
Dewdney, A. K.
1988-01-01
Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)
Energy Technology Data Exchange (ETDEWEB)
Anon.
1984-12-15
From 3-6 September the First International Workshop on Local Equilibrium in Strong Interaction Physics took place in Bad-Honnef at the Physics Centre of the German Physical Society. A number of talks covered the experimental and theoretical investigation of the 'hotspots' effect, both in high energy particle physics and in intermediate energy nuclear physics.
van Damme, E.E.C.
2000-01-01
An outcome in a noncooperative game is said to be self-enforcing, or a strategic equilibrium, if, whenever it is recommended to the players, no player has an incentive to deviate from it.This paper gives an overview of the concepts that have been proposed as formalizations of this requirement and of
Ismail, M.S.
2014-01-01
We introduce a new concept which extends von Neumann and Morgenstern's maximin strategy solution by incorporating `individual rationality' of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that
Virtual reality in medicine-computer graphics and interaction techniques.
Haubner, M; Krapichler, C; Lösch, A; Englmeier, K H; van Eimeren, W
1997-03-01
This paper describes several new visualization and interaction techniques that enable the use of virtual environments for routine medical purposes. A new volume-rendering method supports shaded and transparent visualization of medical image sequences in real-time with an interactive threshold definition. Based on these rendering algorithms two complementary segmentation approaches offer an intuitive assistance for a wide range of requirements in diagnosis and therapy planning. In addition, a hierarchical data representation for geometric surface descriptions guarantees an optimal use of available hardware resources and prevents inaccurate visualization. The combination of the presented techniques empowers the improved human-machine interface of virtual reality to support every interactive task in medical three-dimensional (3-D) image processing, from visualization of unsegmented data volumes up to the simulation of surgical procedures.
Securing the Cloud Cloud Computer Security Techniques and Tactics
Winkler, Vic (JR)
2011-01-01
As companies turn to cloud computing technology to streamline and save money, security is a fundamental concern. Loss of certain control and lack of trust make this transition difficult unless you know how to handle it. Securing the Cloud discusses making the move to the cloud while securing your piece of it! The cloud offers flexibility, adaptability, scalability, and, in the case of security, resilience. This book details the strengths and weaknesses of securing your company's information with different cloud approaches. Attacks can focus on your infrastructure, communications network, data, o
Techniques for animation of CFD results. [computational fluid dynamics
Horowitz, Jay; Hanson, Jeffery C.
1992-01-01
Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.
A textbook of computer based numerical and statistical techniques
Jaiswal, AK
2009-01-01
About the Book: Application of Numerical Analysis has become an integral part of the life of all the modern engineers and scientists. The contents of this book covers both the introductory topics and the more advanced topics such as partial differential equations. This book is different from many other books in a number of ways. Salient Features: Mathematical derivation of each method is given to build the students understanding of numerical analysis. A variety of solved examples are given. Computer programs for almost all numerical methods discussed have been presented in `C` langu
International Nuclear Information System (INIS)
Goulo, V.G.
1988-01-01
This document describes the content of the diskettes with the nuclear data production codes SCAT2 and STAPRE and the example data set for implementing and testing these codes on IBM/AT personal computers. They are available on two diskettes, free of charge, upon request from the NEA Data Bank, Saclay, France. (author). 4 refs, 1 fig
Horseshoes in a Chaotic System with Only One Stable Equilibrium
Huan, Songmei; Li, Qingdu; Yang, Xiao-Song
To confirm the numerically demonstrated chaotic behavior in a chaotic system with only one stable equilibrium reported by Wang and Chen, we resort to the Poincaré map technique and present a rigorous computer-assisted verification of horseshoe chaos by virtue of topological horseshoe theory.
Equilibrium shoreface profiles
DEFF Research Database (Denmark)
Aagaard, Troels; Hughes, Michael G
2017-01-01
Large-scale coastal behaviour models use the shoreface profile of equilibrium as a fundamental morphological unit that is translated in space to simulate coastal response to, for example, sea level oscillations and variability in sediment supply. Despite a longstanding focus on the shoreface profile and its relevance to predicting coastal response to changing environmental conditions, the processes and dynamics involved in shoreface equilibrium are still not fully understood. Here, we apply a process-based empirical sediment transport model, combined with morphodynamic principles, to provide …; there is no tuning or calibration and computation times are short. It is therefore easily implemented with repeated iterations to manage uncertainty.
An Efficient Computational Technique for Fractal Vehicular Traffic Flow
Directory of Open Access Journals (Sweden)
Devendra Kumar
2018-04-01
Full Text Available In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing a fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method. Some illustrative examples are taken to describe the success of the suggested techniques. The results derived with the aid of the suggested schemes reveal that the present schemes are very efficient for obtaining the non-differentiable solution to the fractal vehicular traffic flow problem.
Techniques for grid manipulation and adaptation. [computational fluid dynamics
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
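As a hedged illustration of the spacing-control idea (a toy analogue, not the CPF algorithm of the paper), the sketch below builds an algebraic grid between two boundary curves with a Roberts-type one-sided stretching that clusters grid lines near one wall; moving the boundary shape plays the role of moving control points.

```python
# Toy algebraic grid generation: linear transfinite interpolation between a
# bumpy bottom wall and a flat top wall, with one-sided Roberts stretching
# clustering lines near the bottom. All shapes and parameters are illustrative.
import numpy as np

ni, nj, beta = 41, 21, 1.2
xi = np.linspace(0.0, 1.0, ni)
eta = np.linspace(0.0, 1.0, nj)

r = ((beta + 1.0) / (beta - 1.0)) ** (1.0 - eta)
eta_s = ((beta + 1.0) - (beta - 1.0) * r) / (r + 1.0)   # 0 at wall, 1 at top

y_bot = 0.1 * np.sin(np.pi * xi)                 # explicit bottom boundary shape
X = np.tile(xi[:, None], (1, nj))                # physical x follows xi directly
Y = (1.0 - eta_s)[None, :] * y_bot[:, None] + eta_s[None, :]  # blend the walls

print(X.shape, Y.shape)   # a (41, 21) grid clustered toward the bottom wall
```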
Computer tomography as a diagnostic technique in psychiatry
Energy Technology Data Exchange (ETDEWEB)
Strobl, G.; Reisner, T.; Zeiler, K. (Vienna Univ. (Austria). Psychiatrische Klinik; Vienna Univ. (Austria). Neurologische Klinik)
1980-01-01
CT findings in 516 hospitalized psychiatric patients are presented. The patients were classified in 9 groups according to a modified ICD classification, and the type and incidence of pathological findings - almost exclusively degenerative processes of the brain - were registered. Diffuse cerebral atrophies are most frequent in the groups alcoholism and alcohol psychoses (44.0%) and psychoses and mental disturbances accompanying physical diseases. In schizophrenics (almost exclusively residual and defect states) and in patients with affective psychoses, diffuse cerebral atrophies are much less frequent (11.3% and 9.2%) than stated in earlier publications. Neuroses, changes in personality, or abnormal behaviour are hardly ever accompanied by cerebral atrophy. Problems encountered in the attempt to establish objective criteria for a diagnosis of cerebral atrophy on the basis of CT pictures are discussed. Computed tomography does not permit conclusions on the etiology of diffuse atrophic processes.
Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model
Directory of Open Access Journals (Sweden)
Rory A. Roberts
2014-01-01
Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to the development of transient capabilities throughout the model, increasing the fidelity of the physics model, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. The lessening of computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
International Nuclear Information System (INIS)
Tsuzuki, T.; Toi, K.; Matsuura, K.
1991-04-01
A feedback control system aided by a personal computer has been developed to maintain the plasma position at the required position in the JIPP T-IIU tokamak. The personal computer makes it easy to adjust the various control parameters. In this control system, the control demand driving the power supply of the feedback-controlled vertical field coils is made proportional to the total plasma current. The system has been employed successfully throughout discharges in which the plasma current changes substantially, from zero to hundreds of kiloamperes, because the feedback control operates independently of the plasma current. An analysis of this feedback control system that takes digital sampling into account agrees well with the experimental results. (author)
Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries
Narendra Kumar; Shikha Jain
2012-01-01
The presented paper deals with how remotely managed computing and IT resources can be beneficial in developing countries like India and the Asian subcontinent. This paper not only defines the architectures and functionalities of cloud computing but also strongly highlights the current demand for cloud computing to achieve organizational and personal levels of IT support at very minimal cost and with high flexibility. The power of the cloud can be used to reduce the cost of IT - r...
Chau, Nancy H.
2009-01-01
This paper presents a capability-augmented model of on-the-job search, in which sweatshop conditions stifle the capability of the working poor to search for a job while on the job. The augmented setting unveils a sweatshop equilibrium in an otherwise archetypal Burdett-Mortensen economy, and reconciles a number of oft-noted yet perplexing features of sweatshop economies. We demonstrate existence of multiple rational expectation equilibria, graduation pathways out of sweatshops in complete abs...
International Nuclear Information System (INIS)
Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini
2014-01-01
Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied to islanding detection of distributed generation. Moreover, the paper compares the accuracy of computational intelligence based techniques with that of existing techniques, to provide useful information for industry and utility researchers in determining the best method for their respective systems
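To make the flavour of these classifiers concrete, the sketch below trains a decision tree, one of the reviewed techniques, on synthetic measurement features; the feature choices (frequency deviation, rate of change of frequency, voltage deviation) and all numbers are illustrative, not drawn from the reviewed studies.

```python
# Hedged sketch of a computational-intelligence islanding detector: a decision
# tree trained on synthetic (frequency deviation, ROCOF, voltage deviation)
# samples for grid-connected versus islanded operation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
grid = np.column_stack([rng.normal(0.0, 0.02, n),    # df [Hz]
                        rng.normal(0.0, 0.05, n),    # ROCOF [Hz/s]
                        rng.normal(0.0, 0.01, n)])   # dV [pu]
island = np.column_stack([rng.normal(0.5, 0.2, n),
                          rng.normal(1.5, 0.5, n),
                          rng.normal(0.08, 0.03, n)])
X = np.vstack([grid, island])
y = np.r_[np.zeros(n), np.ones(n)]   # 0 = grid-connected, 1 = islanded

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", clf.score(X, y))
```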
Industrial radiography with Ir-192 using computed radiographic technique
International Nuclear Information System (INIS)
Ngernvijit, Narippawaj; Punnachaiya, Suvit; Chankow, Nares; Sukbumperng, Ampai; Thong-Aram, Decho
2003-01-01
The aim of this research is to study the utilization of a low-activity Ir-192 gamma source for industrial radiographic testing using the Computed Radiography (CR) system. Because the photostimulable Imaging Plate (IP) used in CR is much more radiation sensitive than a type II film with lead-foil intensifying screen, the exposure time with CR can be significantly reduced. For a short-lived gamma-ray source like Ir-192, the exposure time must otherwise be proportionally increased until it is no longer practical, particularly for thick specimens. Generally, when the source decays to an activity of about 5 Ci or less, it is returned to the manufacturer as radioactive waste. In this research, the optimum conditions for radiography of a 20 mm thick welded steel sample with 2.4 Ci of Ir-192 were investigated using the CR system with a high-resolution imaging plate, type BAS-SR of Fuji Film Co. Ltd. The IP was sandwiched between a pair of 0.25 mm thick Pb intensifying screens. Low-energy scattered radiation was filtered by placing another Pb sheet, 3 mm thick, under the cassette. It was found that the CR image could give a contrast sensitivity of 2.5% using only a 3-minute exposure time, comparable to the image taken with type II film and Pb intensifying screens using an exposure time of 45 minutes
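The practicality argument rests on simple scaling: required exposure time is inversely proportional to source activity, and Ir-192 decays with a half-life of about 73.8 days. A back-of-envelope check (the values below reuse the 3-minute CR exposure at 2.4 Ci reported above):

```python
# Exposure time scales as 1/activity; Ir-192 half-life is ~73.8 days.
t_ref, A_ref = 3.0, 2.4          # minutes, Ci (the CR result quoted above)
half_life, days = 73.8, 60.0

A = A_ref * 0.5 ** (days / half_life)   # decayed activity after 60 days
t = t_ref * A_ref / A                   # exposure needed at that activity
print(f"activity {A:.2f} Ci -> exposure ~{t:.1f} min")
```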
Vogt, Natalja; Marochkin, Ilya I; Rykov, Anatolii N
2018-04-18
The accurate molecular structure of picolinic acid has been determined from experimental data and computed at the coupled cluster level of theory. Only one conformer with the O[double bond, length as m-dash]C-C-N and H-O-C[double bond, length as m-dash]O fragments in antiperiplanar (ap) positions, ap-ap, has been detected under conditions of the gas-phase electron diffraction (GED) experiment (Tnozzle = 375(3) K). The semiexperimental equilibrium structure, rsee, of this conformer has been derived from the GED data taking into account the anharmonic vibrational effects estimated from the ab initio force field. The equilibrium structures of the two lowest-energy conformers, ap-ap and ap-sp (with the synperiplanar H-O-C[double bond, length as m-dash]O fragment), have been fully optimized at the CCSD(T)_ae level of theory in conjunction with the triple-ζ basis set (cc-pwCVTZ). The quality of the optimized structures has been improved due to extrapolation to the quadruple-ζ basis set. The high accuracy of both GED determination and CCSD(T) computations has been disclosed by a correct comparison of structures having the same physical meaning. The ap-ap conformer has been found to be stabilized by the relatively strong NH-O hydrogen bond of 1.973(27) Å (GED) and predicted to be lower in energy by 16 kJ mol-1 with respect to the ap-sp conformer without a hydrogen bond. The influence of this bond on the structure of picolinic acid has been analyzed within the Natural Bond Orbital model. The possibility of the decarboxylation of picolinic acid has been considered in the GED analysis, but no significant amounts of pyridine and carbon dioxide could be detected. To reveal the structural changes reflecting the mesomeric and inductive effects due to the carboxylic substituent, the accurate structure of pyridine has been also computed at the CCSD(T)_ae level with basis sets from triple- to 5-ζ quality. The comprehensive structure computations for pyridine as well as for
Computational modelling of the HyperVapotron cooling technique
Energy Technology Data Exchange (ETDEWEB)
Milnes, Joseph, E-mail: Joe.Milnes@ccfe.ac.uk [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Burns, Alan [School of Process Material and Environmental Engineering, CFD Centre, University of Leeds, Leeds, LS2 9JT (United Kingdom); ANSYS UK, Milton Park, Oxfordshire (United Kingdom); Drikakis, Dimitris [Department of Engineering Physics, Cranfield University, Cranfield, MK43 0AL (United Kingdom)
2012-09-15
Highlights: • The heat transfer mechanisms within a HyperVapotron are examined. • A multiphase CFD model is developed. • Modelling choices for turbulence and wall boiling are evaluated. • Considerable improvements in accuracy are found compared to standard boiling models. • The model should enable significant virtual prototyping to be performed. - Abstract: Efficient heat transfer technologies are essential for magnetically confined fusion reactors; this applies to both the current generation of experimental reactors and future power plants. A number of High Heat Flux devices have therefore been developed specifically for this application. One of the most promising candidates is the HyperVapotron, a water cooled device which relies on internal fins and boiling heat transfer to maximise the heat transfer capability. Over the past 30 years, numerous variations of the HyperVapotron have been built and tested at fusion research centres around the globe, resulting in devices that can now sustain heat fluxes in the region of 20-30 MW/m² in steady state. Until recently, there had been few attempts to model or understand the internal heat transfer mechanisms responsible for this exceptional performance, with the result that design improvements have traditionally been sought experimentally, which is both inefficient and costly. This paper presents the successful attempt to develop an engineering model of the HyperVapotron device using customisation of commercial Computational Fluid Dynamics software. To establish the most appropriate modelling choices, in-depth studies were performed examining the turbulence models (within the Reynolds Averaged Navier Stokes framework), near-wall methods, grid resolution and boiling submodels. Comparing the CFD solutions with HyperVapotron experimental data suggests that a RANS-based, multiphase
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant difference was found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted with the conventional technique.
Wang, Guizhi; Gu, SaiJu; Chen, Jibo; Wu, Xianhua; Yu, Jun
2016-12-01
Assessment of the health and economic impacts of PM2.5 pollution is of great importance for urban air pollution prevention and control. In this study, we evaluate the damage of PM2.5 pollution using Beijing as an example. First, we use exposure-response functions to estimate the adverse health effects due to PM2.5 pollution. Then, the corresponding labour loss and excess medical expenditure are computed as two conducting variables. Finally, departing from conventional valuation methods, this paper introduces the two conducting variables into a computable general equilibrium (CGE) model to assess the impacts on individual sectors and the whole economic system caused by PM2.5 pollution. The results show that PM2.5 pollution caused substantial health effects among Beijing residents in 2013, including 20,043 premature deaths and about one million other related medical cases. Correspondingly, using the 2010 social accounting data, the Beijing gross domestic product loss due to the health impact of PM2.5 pollution is estimated at 1286.97 (95% CI: 488.58-1936.33) million RMB. This demonstrates that PM2.5 pollution not only has adverse health effects, but also brings huge economic losses.
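The exposure-response step typically uses a log-linear relation between the concentration increment and excess cases. A minimal sketch of that step follows; the coefficient, baseline incidence and population below are placeholders, not the values used in the study.

```python
# Hedged sketch of a log-linear exposure-response calculation:
# excess cases = population * baseline_incidence * (1 - exp(-beta * dC)).
import math

pop = 21.0e6      # exposed population (illustrative)
I0 = 5.0e-4       # baseline annual incidence of the endpoint (illustrative)
beta = 0.0004     # exposure-response coefficient per ug/m3 (illustrative)
dC = 60.0         # PM2.5 increment over the reference level [ug/m3]

excess = pop * I0 * (1.0 - math.exp(-beta * dC))
print(f"attributable cases: {excess:.0f}")
```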
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
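For context, each individual line shape referred to above is a Voigt profile, which can be evaluated through the Faddeeva function w(z); the sampling technique of the paper replaces dense evaluation of this profile for billions of lines with strength-weighted sampling. A standard evaluation (this identity is well known; it is not the paper's sampling algorithm) is:

```python
# Voigt profile via the Faddeeva function: V(x; sigma, gamma) =
# Re[w((x + i*gamma) / (sigma*sqrt(2)))] / (sigma*sqrt(2*pi)).
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Gaussian (std sigma) convolved with Lorentzian (HWHM gamma)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=1.0, gamma=0.5))   # profile integrates to ~1 over the line
```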
Luca Anderlini; Daniele Terlizzese
2009-01-01
We build a simple model of trust as an equilibrium phenomenon, departing from standard "selfish" preferences in a minimal way. Agents who are on the receiving end of an offer to transact can choose whether to cheat and take away the entire surplus, taking into account a "cost of cheating." The latter has an idiosyncratic component (an agent's type), and a socially determined one. The smaller the mass of agents who cheat, the larger the cost of cheating suffered by those who cheat. Depending o...
Alexander, Melody W.; Arp, Larry W.
1997-01-01
A survey of 260 secondary and 251 postsecondary business educators found the former more likely to think computer ergonomic techniques should be taught in elementary school and to address the hazards of improper use. Both groups stated that over half of the students they observe do not use good techniques and agreed that students need continual…
International Nuclear Information System (INIS)
Baldock, Clive
2004-01-01
Since Gore et al. published their paper on Fricke gel dosimetry, the predominant method of evaluating both Fricke and polymer gel dosimeters has been magnetic resonance imaging (MRI). More recently, optical computed tomography (CT) has also become a favoured evaluation method. Other techniques have been explored and developed as potential evaluation techniques in gel dosimetry. This paper reviews these other developments
International Nuclear Information System (INIS)
Sprugmann, K.W.; Ritchie, I.G.
1980-04-01
A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and the various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a pendulum constant-amplitude drive system is described. (auth)
3D equilibrium codes for mirror machines
International Nuclear Information System (INIS)
Kaiser, T.B.
1983-01-01
The codes developed for computing three-dimensional guiding center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U. and LLNL. They differ primarily in the type of grid, the location of the lateral boundary and the damping techniques employed, and in the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)
Numerical Verification Of Equilibrium Chemistry
International Nuclear Information System (INIS)
Piro, Markus; Lewis, Brent; Thompson, William T.; Simunovic, Srdjan; Besmann, Theodore M.
2010-01-01
A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is conceived for direct integration into large multi-physics fuel performance codes, particularly for providing boundary conditions in heat and mass transport modules. However, any numerical errors produced in equilibrium chemistry computations will be propagated into subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes.
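One plausible form of such a check (hedged: an illustration of the general idea, not necessarily the method of the paper) is to test, for a candidate equilibrium, that element mass balance holds and that the chemical potentials of all stable species are consistent with a single set of element potentials:

```python
# Verify a candidate equilibrium of the H-O system (toy numbers): (a) element
# mass balance A^T n = b, and (b) mu = A Gamma for species present, where
# Gamma are element potentials fitted by least squares. The chemical
# potentials below are constructed to satisfy the condition exactly.
import numpy as np

A = np.array([[2.0, 0.0],    # H2 : 2 H, 0 O
              [0.0, 2.0],    # O2 : 0 H, 2 O
              [2.0, 1.0]])   # H2O: 2 H, 1 O
b = np.array([2.0, 1.0])                  # total element moles (H, O)
n = np.array([1e-6, 5e-7, 0.999999])      # candidate equilibrium moles
mu = np.array([-20.0, -40.0, -40.0])      # dimensionless chemical potentials

mass_residual = A.T @ n - b
Gamma, *_ = np.linalg.lstsq(A, mu, rcond=None)   # element potentials
gibbs_residual = mu - A @ Gamma

print(mass_residual)    # ~0 when mass balance holds
print(gibbs_residual)   # ~0 when species potentials share one set of Gammas
```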
[Clinical analysis of 12 cases of orthognathic surgery with digital computer-assisted technique].
Tan, Xin-ying; Hu, Min; Liu, Chang-kui; Liu, Hua-wei; Liu, San-xia; Tao, Ye
2014-06-01
This study investigated the effect of the digital computer-assisted technique in orthognathic surgery. Twelve patients with jaw malformation were treated in our department between January 2008 and December 2011. With the help of CT and the three-dimensional reconstruction technique, the 12 patients underwent surgical treatment and the results were evaluated after surgery. The digital computer-assisted technique could clearly show the status of the jaw deformity and assist virtual surgery. After surgery, all patients were satisfied with the results. Digital orthognathic surgery can improve the predictability of the surgical procedure, facilitate communication with patients, shorten operative time, and reduce patients' pain.
International Nuclear Information System (INIS)
El-Osery, I.A.
1981-01-01
The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory. These numerical techniques include nodal, modal, and finite-difference ones. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented
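As a hedged illustration of the finite-difference branch mentioned above (one energy group rather than multigroup, for brevity), the sketch below solves a 1D slab diffusion eigenvalue problem for k-eff by power iteration; the cross sections are illustrative, not from any evaluated library.

```python
# One-group 1D diffusion criticality: (-D d2/dx2 + Sig_a) phi = (1/k) nuSig_f phi,
# zero-flux boundaries, solved by power iteration on a finite-difference mesh.
import numpy as np

N, L = 50, 100.0                          # cells, slab width [cm]
h = L / N
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025     # illustrative constants

M = np.zeros((N, N))                      # loss operator (tridiagonal)
for i in range(N):
    M[i, i] = 2.0 * D / h**2 + sig_a
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < N - 1:
        M[i, i + 1] = -D / h**2

phi, k = np.ones(N), 1.0
for _ in range(200):                      # power iteration
    phi_new = np.linalg.solve(M, nu_sig_f * phi / k)
    k *= phi_new.sum() / phi.sum()
    phi = phi_new

print(f"k-eff = {k:.5f}")
```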
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs), even without complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces computation time owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
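For reference, the point-cloud baseline against which the timing is quoted superposes one spherical wave per object point on the hologram plane. A minimal sketch of that baseline (geometry and sampling are illustrative; this is not the paper's ray-tracing method):

```python
# Traditional point-cloud CGH: each object point adds a spherical wave
# exp(i k r)/r on the hologram plane; interference with a unit plane
# reference wave gives the recorded intensity pattern.
import numpy as np

wl = 532e-9                       # wavelength [m]
k = 2.0 * np.pi / wl
pitch, nx, ny = 8e-6, 256, 256    # hologram pixel pitch and resolution

xs = (np.arange(nx) - nx / 2) * pitch
ys = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(xs, ys)

points = [(0.0, 0.0, 0.05), (2e-4, 1e-4, 0.06)]   # (x, y, z) object points [m]
U = np.zeros((ny, nx), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    U += np.exp(1j * k * r) / r

hologram = np.abs(U + 1.0) ** 2   # add unit plane reference, record intensity
print(hologram.shape)
```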
de Oliveira, Mário J
2017-01-01
This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions. These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This new edit...
Computer applications in thermochemistry
International Nuclear Information System (INIS)
Vana Varamban, S.
1996-01-01
Knowledge of equilibrium is needed in many practical situations. Simple stoichiometric calculations can be performed with hand calculators, but multi-component, multi-phase gas-solid chemical equilibrium calculations are far beyond conventional devices and methods; iterative techniques have to be resorted to. Such problems are most elegantly handled by the use of modern computers. This report demonstrates the possible use of computers for chemical equilibrium calculations in the field of thermochemistry and chemical metallurgy. Four modules are explained: fitting experimental Cp data and generating the thermal functions; performing equilibrium calculations for the defined conditions; preparing the elaborate input to the equilibrium module; and analysing the calculated results graphically. The principles of thermochemical calculations are briefly described. An extensive input guide is given. Several illustrations are included to aid understanding and usage. (author)
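The first module's task can be sketched as a least-squares fit of heat capacity data to a common four-term form, after which thermal functions follow by integration. A minimal sketch with synthetic data (the functional form is a common choice, not necessarily the one used in the report):

```python
# Fit Cp(T) = a + b*T + c*T^2 + d/T^2 by linear least squares, then derive a
# thermal function (the enthalpy increment) by integrating the fit.
import numpy as np

T = np.array([300.0, 400.0, 500.0, 700.0, 900.0, 1100.0])   # K
Cp = np.array([28.9, 29.4, 29.9, 31.0, 32.0, 32.9])         # J/(mol K), synthetic

A = np.column_stack([np.ones_like(T), T, T**2, 1.0 / T**2])
(a, b, c, d), *_ = np.linalg.lstsq(A, Cp, rcond=None)

def enthalpy_increment(T1, T2):
    """H(T2) - H(T1) from the fitted Cp, in J/mol."""
    F = lambda t: a * t + b * t**2 / 2 + c * t**3 / 3 - d / t
    return F(T2) - F(T1)

print(enthalpy_increment(298.15, 1000.0))
```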
Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro
2012-06-01
ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the
Neyton, Lionel; Barth, Johannes; Nourissat, Geoffroy; Métais, Pierre; Boileau, Pascal; Walch, Gilles; Lafosse, Laurent
2018-05-19
To analyze graft and fixation (screw and EndoButton) positioning after the arthroscopic Latarjet technique with 2-dimensional computed tomography (CT) and to compare it with the open technique. We performed a retrospective multicenter study (March 2013 to June 2014). The inclusion criteria included patients with recurrent anterior instability treated with the Latarjet procedure. The exclusion criterion was the absence of a postoperative CT scan. The positions of the hardware, the positions of the grafts in the axial and sagittal planes, and the dispersion of values (variability) were compared. The study included 208 patients (79 treated with open technique, 87 treated with arthroscopic Latarjet technique with screw fixation [arthro-screw], and 42 treated with arthroscopic Latarjet technique with EndoButton fixation [arthro-EndoButton]). The angulation of the screws was different in the open group versus the arthro-screw group (superior, 10.3° ± 0.7° vs 16.9° ± 1.0° [P open inferior screws (P = .003). In the axial plane (level of equator), the arthroscopic techniques resulted in lateral positions (arthro-screw, 1.5 ± 0.3 mm lateral [P open technique (0.9 ± 0.2 mm medial). At the level of 25% of the glenoid height, the arthroscopic techniques resulted in lateral positions (arthro-screw, 0.3 ± 0.3 mm lateral [P open technique (1.0 ± 0.2 mm medial). Higher variability was observed in the arthro-screw group. In the sagittal plane, the arthro-screw technique resulted in higher positions (55% ± 3% of graft below equator) and the arthro-EndoButton technique resulted in lower positions (82% ± 3%, P open technique (71% ± 2%). Variability was not different. This study shows that the position of the fixation devices and position of the bone graft with the arthroscopic techniques are statistically significantly different from those with the open technique with 2-dimensional CT assessment. In the sagittal plane, the arthro-screw technique provides the highest
Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.
Hu, Yujing; Gao, Yang; An, Bo
2015-07-01
An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
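The transfer condition is easy to state in code for a two-player one-shot game: reuse the previously computed joint strategy if neither player can gain more than eps by a unilateral deviation in the new game. A hedged sketch (bimatrix form, illustrative payoffs; the paper's framework is more general):

```python
# Epsilon-equilibrium test behind equilibrium transfer: keep the old joint
# strategy (x, y) for the new game if the best unilateral deviation gain of
# either player is at most eps; otherwise recompute.
import numpy as np

def max_deviation_gain(R, C, x, y):
    g_row = np.max(R @ y) - x @ R @ y    # row player's best pure deviation
    g_col = np.max(x @ C) - x @ C @ y    # column player's best pure deviation
    return max(g_row, g_col)

R_old = np.array([[3.0, 0.0], [0.0, 2.0]])   # old game (battle-of-the-sexes-like)
C_old = np.array([[2.0, 0.0], [0.0, 3.0]])
x, y = np.array([0.6, 0.4]), np.array([0.4, 0.6])   # its mixed equilibrium

R_new, C_new = R_old + 0.05, C_old - 0.03    # slightly perturbed successor game
eps = 0.1
if max_deviation_gain(R_new, C_new, x, y) <= eps:
    print("transfer: reuse the previous equilibrium")
else:
    print("recompute an equilibrium for the new game")
```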
Smith, Richard D; Keogh-Brown, Marcus R
2013-11-01
Previous research has demonstrated the value of macroeconomic analysis of the impact of influenza pandemics. However, previous modelling applications focus on high-income countries and there is a lack of evidence concerning the potential impact of an influenza pandemic on lower- and middle-income countries. To estimate the macroeconomic impact of pandemic influenza in Thailand, South Africa and Uganda with particular reference to pandemic (H1N1) 2009. A single-country whole-economy computable general equilibrium (CGE) model was set up for each of the three countries in question and used to estimate the economic impact of declines in labour attributable to morbidity, mortality and school closure. Overall GDP impacts were less than 1% of GDP for all countries and scenarios. Uganda's losses were proportionally larger than those of Thailand and South Africa. Labour-intensive sectors suffer the largest losses. The economic cost of unavoidable absence in the event of an influenza pandemic could be proportionally larger for low-income countries. The cost of mild pandemics, such as pandemic (H1N1) 2009, appears to be small, but could increase for more severe pandemics and/or pandemics with greater behavioural change and avoidable absence. © 2013 John Wiley & Sons Ltd.
Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.
Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V
2016-05-26
The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here, discussed, and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.
Plant process computer replacements - techniques to limit installation schedules and costs
International Nuclear Information System (INIS)
Baker, M.D.; Olson, J.L.
1992-01-01
Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, and development and factory tested
Non-equilibrium phase transitions
Henkel, Malte; Lübeck, Sven
2009-01-01
This book describes two main classes of non-equilibrium phase transitions: (a) the statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions, and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes, and many important results on model parameters are provided for easy reference.
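The directed-percolation paradigm is simple enough to simulate directly. The sketch below runs (1+1)-dimensional directed bond percolation from a single seed (periodic boundaries; the critical probability p_c ≈ 0.6447 is the accepted value for this lattice), showing extinction below p_c and survival above it:

```python
# (1+1)D directed bond percolation: a site is active at time t+1 if an open
# bond connects it to an active neighbour at time t. Below p_c activity dies
# out (absorbing state); above p_c it survives with finite probability.
import numpy as np

def survival_fraction(p, L=200, T=200, runs=100, seed=1):
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(runs):
        active = np.zeros(L, dtype=bool)
        active[L // 2] = True                 # single active seed
        for _ in range(T):
            bl = rng.random(L) < p            # bond from the left neighbour
            br = rng.random(L) < p            # bond from the right neighbour
            active = (np.roll(active, 1) & bl) | (np.roll(active, -1) & br)
            if not active.any():
                break                          # reached the absorbing state
        survived += bool(active.any())
    return survived / runs

for p in (0.55, 0.6447, 0.75):
    print(p, survival_fraction(p))
```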
Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur
2014-12-01
This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so the decision whether or not to use it remains a case-to-case decision with respect to cost versus benefit. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Computer Aided Measurement Laser (CAML): technique to quantify post-mastectomy lymphoedema
International Nuclear Information System (INIS)
Trombetta, Chiara; Abundo, Paolo; Felici, Antonella; Ljoka, Concetta; Foti, Calogero; Cori, Sandro Di; Rosato, Nicola
2012-01-01
Lymphoedema can be a side effect of cancer treatment. Even though several methods for assessing lymphoedema are used in clinical practice, objective quantification of lymphoedema has been problematic. The aim of the study was to determine the objectivity, reliability and repeatability of the computer aided measurement laser (CAML) technique. The CAML technique is based on computer aided design (CAD) methods and requires an infrared laser scanner. Measurements are scanned and the information describing the size and shape of the limb allows the model to be designed using the CAD software. The objectivity and repeatability were established initially using a phantom. Subsequently, a group of subjects presenting post-breast cancer lymphoedema was evaluated, using the contralateral limb as a control. Results confirmed that in clinical settings the CAML technique is easy to perform, rapid, and provides meaningful data for assessing lymphoedema. Future research will include a comparison of the upper limb CAML technique between healthy subjects and patients with known lymphoedema.
Wang, Jianxiong
2014-06-01
This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source software, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF
16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)
Lokajicek, M; Tumova, N
2015-01-01
16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers related with computing in physics research together, from both physics and computer science sides, and bring them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating the advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...
Fiala, L.; Lokajicek, M.; Tumova, N.
2015-05-01
This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis - Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program
CSIR Research Space (South Africa)
Phasha, MJ
2008-11-01
Full Text Available of view. Therefore, the only possible route so far to achieve alloying of Ti and Mg is by employing a non-equilibrium process. As a result, many attempts to extend the solid solubility have been made in the past decade using non-equilibrium processes....
Gated equilibrium bloodpool scintigraphy
International Nuclear Information System (INIS)
Reinders Folmer, S.C.C.
1981-01-01
This thesis deals with the clinical applications of gated equilibrium bloodpool scintigraphy, performed with either a gamma camera or a portable detector system, the nuclear stethoscope. The main goal has been to define the value and limitations of noninvasive measurements of left ventricular ejection fraction as a parameter of cardiac performance in various disease states, both for diagnostic purposes and during follow-up after medical or surgical intervention. Secondly, an attempt was made to extend the use of equilibrium bloodpool techniques beyond the calculation of ejection fraction alone, by considering the feasibility of determining ventricular volumes and by including the possibility of quantifying valvular regurgitation. In both cases, an effort was made to broaden the perspective of the observations by comparing them with the results of other, invasive and non-invasive, procedures, in particular cardiac catheterization, M-mode echocardiography and myocardial perfusion scintigraphy. (Auth.)
Problems in equilibrium theory
Aliprantis, Charalambos D
1996-01-01
In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.
A new technique for on-line and off-line high speed computation
International Nuclear Information System (INIS)
Hartouni, E.P.; Jensen, D.A.; Klima, B.; Kreisler, M.N.; Rabin, M.S.Z.; Uribe, J.; Gottschalk, E.; Gara, A.; Knapp, B.C.
1989-01-01
A new technique for both on-line and off-line computation has been developed. With this technique, a reconstruction analysis in elementary particle physics, otherwise prohibitively long, has been accomplished. It will be used on-line in an upcoming Fermilab experiment to reconstruct more than 100,000 events per second and to trigger on the basis of that information. The technique delivers 40 giga-operations per second, has a bandwidth of the order of gigabytes per second, and has a modest cost. An overview of the program, details of the system, and performance measurements are presented in this paper.
Analysis of equilibrium and topology of tokamak plasmas
International Nuclear Information System (INIS)
Milligen, B.P. van.
1991-01-01
In a tokamak, the plasma is confined by means of a magnetic field. There exists an equilibrium between outward forces due to the pressure gradient in the plasma and inward forces due to the interaction between currents flowing inside the plasma and the magnetic field. The equilibrium magnetic field is characterized by helical field lines that lie on nested toroidal surfaces of constant flux. The equilibrium yields values for global and local plasma parameters (e.g. plasma position, total current, local pressure). Thus, precise knowledge of the equilibrium is essential for plasma control, for the understanding of many phenomena occurring in the plasma (in particular departures from the ideal equilibrium involving current filamentation on the flux surfaces that leads to the formation of islands, i.e. nested helical flux surfaces), and for the interpretation of many different types of measurements (e.g. the translation of line-integrated electron density measurements made by laser beams probing the plasma into a local electron density on a flux surface). The problem of determining the equilibrium magnetic field from external magnetic field measurements has been studied extensively in the literature. The problem is 'ill-posed', which means that the solution is unstable to small changes in the measurement data, and the solution has to be constrained in order to stabilize it. Various techniques for handling this problem have been suggested in the literature. Usually ad-hoc restrictions are imposed on the equilibrium solution in order to stabilize it. Most equilibrium solvers are not able to handle very dissimilar measurement data, which means information on the equilibrium is lost. They generally do not allow a straightforward error estimate of the obtained results to be made, and they require large amounts of computing time. These problems are addressed in this thesis. (author). 104 refs.; 42 figs.; 6 tabs.
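The generic stabilization idea, adding a penalty that suppresses wild solutions at the cost of some bias, can be shown on a toy linear inversion. A hedged sketch with a smoothing kernel standing in for the actual plasma response (Tikhonov regularization is one of the standard techniques alluded to, not necessarily the one developed in the thesis):

```python
# Ill-posed inversion d = G m + noise, stabilized by a Tikhonov penalty:
# minimize ||G m - d||^2 + lam ||m||^2. With lam ~ 0 the solution blows up;
# a modest lam recovers a stable estimate.
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = np.linspace(0.0, 1.0, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)   # toy smoothing kernel
m_true = np.sin(2.0 * np.pi * x)
d = G @ m_true + rng.normal(0.0, 0.05, n)              # noisy measurements

def tikhonov(G, d, lam):
    return np.linalg.solve(G.T @ G + lam * np.eye(len(d)), G.T @ d)

for lam in (1e-12, 1e-3):
    m = tikhonov(G, d, lam)
    print(f"lam={lam:g}  error={np.linalg.norm(m - m_true):.3g}")
```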
Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes
Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry
2007-04-01
This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids, such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes found to be located inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing computation of a fill factor usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a "level set" and Hamilton-Jacobi method.
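The immersion step itself reduces to a distance computation. A hedged sketch (an analytic circle stands in for a rotor cross-section; in the actual technique the distance is computed to a surface CAD mesh):

```python
# Mark grid nodes inside an immersed boundary via a signed distance function,
# impose the rigid rotation velocity there, and compute a VOF-style fill
# factor, mirroring the MIT description above on a toy geometry.
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

cx, cy, R, omega = 0.2, 0.0, 0.35, 10.0             # rotor centre, radius, spin
dist = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) - R   # signed distance to boundary
inside = dist < 0.0                                 # nodes of the immersed domain

U, V = np.zeros_like(X), np.zeros_like(Y)
U[inside] = -omega * (Y[inside] - cy)               # rigid-body rotation field
V[inside] = omega * (X[inside] - cx)

print(f"fill factor: {inside.mean():.3f}")          # fraction of immersed nodes
```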
Prediction of scour caused by 2D horizontal jets using soft computing techniques
Directory of Open Access Journals (Sweden)
Masoud Karbasi
2017-12-01
Full Text Available This paper presents the application of five soft-computing techniques, artificial neural networks, support vector regression, gene expression programming, the group method of data handling (GMDH) neural network and the adaptive-network-based fuzzy inference system, to predict the maximum scour hole depth downstream of a sluice gate. The input parameters affecting the scour depth are the sediment size and its gradation, apron length, sluice gate opening, jet Froude number and the tailwater depth. Six non-dimensional parameters were derived to define a functional relationship between the input and output variables. Published data from experimental studies were used. The results of the soft-computing techniques were compared with empirical and regression-based equations. The results obtained from the soft-computing techniques are superior to those of the empirical and regression-based equations. Comparison of the soft-computing techniques showed that the accuracy of the ANN model is higher than that of the other models (RMSE = 0.869). A new GEP-based equation was proposed.
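As a hedged sketch of one of the five techniques, the snippet below fits a small ANN regressor to synthetic non-dimensional inputs (the feature names and the data-generating law are illustrative, not the published data sets):

```python
# Toy ANN scour-depth regressor: map three non-dimensional inputs to a
# non-dimensional scour depth generated by a synthetic power law plus noise.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 300
X = rng.uniform([1.0, 0.1, 2.0], [10.0, 1.0, 8.0], size=(n, 3))
y = 0.5 * X[:, 0] ** 0.8 * X[:, 1] ** -0.2 + rng.normal(0.0, 0.05, n)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(Xs, y)
print("R^2 on training data:", model.score(Xs, y))
```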
International Nuclear Information System (INIS)
Gerischer, R.
1987-01-01
The described technique for three-dimensional image reconstruction from ECT sections is based on a simple procedure that can be carried out with the aid of any standard computer used in nuclear medicine and requires no sophisticated arithmetic approach. (TRV) [de
International Nuclear Information System (INIS)
Krakowski, R. A.
2006-06-01
Participation of the Paul Scherrer Institute (PSI) in the advancement and extension of the multi-region Computable General Equilibrium (CGE) model GEM-E3 (CES/KUL, 2002) focused primarily on two top-level facets: a) extension of the model database and model calibration, particularly as related to the second component of this study, which is b) advancement of the dynamics of innovation and investment, primarily through the incorporation of Exogenous Technical Learning (ETL) into the Bottom-Up (BU, technology-based) part of the dynamic upgrade; this latter activity also included the completion of the dynamic coupling of the BU description of the electricity sector with the 'Top-Down' (TD, econometric) description of the economy inherent to the GEM-E3 CGE model. The results of this two-component study are described in two parts that have been combined in this single summary report: Part I describes the methodology and gives illustrative results from the BU-TD integration, as well as describing the approach to and giving preliminary results from incorporating an ETL description into the BU component of the overall model; Part II reports on the calibration component of the task in terms of: a) formulating a BU technology database for Switzerland based on previous work; b) incorporating that database into the GEM-E3 model; and c) calibrating the BU database with the TD database embodied in the (Swiss) Social Accounting Matrix (SAM). The BU-TD coupling along with the ETL incorporation described in Part I represent the major effort embodied in this investigation, but this effort could not be completed without the calibration preamble reported herein as Part II. A brief summary of the scope of each of these key study components is given. (author)
Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in the makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
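The objective such metaheuristics optimize is easy to state; a toy Python sketch of makespan evaluation with a random-search baseline (task sizes, VM speeds and the search itself are all illustrative, not the GBLCA algorithm):

```python
import random

# Toy instance: 40 tasks of random length assigned to 3 VMs of
# different speeds; the makespan is the load of the busiest VM.
random.seed(7)
task_lengths = [random.uniform(1, 10) for _ in range(40)]
vm_speeds = [1.0, 1.5, 2.0]

def makespan(assignment):
    loads = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        loads[vm] += length / vm_speeds[vm]
    return max(loads)

# Naive random-search baseline; GBLCA, GA or ACO explore this same
# search space with guided moves instead of blind sampling.
best = min(makespan([random.randrange(len(vm_speeds))
                     for _ in task_lengths])
           for _ in range(1000))
print(f"best makespan found: {best:.2f}")
```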
International Nuclear Information System (INIS)
2016-01-01
Preface. The 2016 edition of the International Workshop on Advanced Computing and Analysis Techniques in Physics Research took place on January 18-22, 2016, at the Universidad Técnica Federico Santa María (UTFSM) in Valparaíso, Chile. The present volume of IOP Conference Series is devoted to the selected scientific contributions presented at the workshop. In order to guarantee the scientific quality of the Proceedings, all papers were thoroughly peer-reviewed by an ad-hoc Editorial Committee with the help of many careful reviewers. The ACAT workshop series has a long tradition starting in 1990 (Lyon, France), and takes place at intervals of a year and a half. Formerly these workshops were known under the name AIHENP (Artificial Intelligence for High Energy and Nuclear Physics). Each edition brings together experimental and theoretical physicists and computer scientists/experts from particle and nuclear physics, astronomy and astrophysics in order to exchange knowledge and experience in computing and data analysis in physics. Three tracks cover the main topics: computing technology (languages and system architectures), data analysis (algorithms and tools), and theoretical physics (techniques and methods). Although most contributions and discussions are related to particle physics and computing, other fields like condensed matter physics, earth physics and biophysics are often addressed in the hope of sharing approaches and visions. The workshop created a forum for exchanging ideas among fields, exploring and promoting cutting-edge computing technologies and debating hot topics. (paper)
Andreiuolo, Rafael Ferrone; Sabrosa, Carlos Eduardo; Dias, Katia Regina H Cervantes
2013-09-01
The use of bi-layered all-ceramic crowns has continuously grown since the introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia cores. Unfortunately, despite the outstanding mechanical properties of zirconia, problems related to porcelain cracking or chipping remain. One of the reasons for this is that ceramic copings are usually milled to uniform thicknesses of 0.3-0.6 mm around the whole tooth preparation. This may not provide uniform thickness or appropriate support for the veneering porcelain. To prevent these problems, the dual-scan technique demonstrates an alternative that allows the restorative team to customize zirconia CAD/CAM frameworks with adequate porcelain thickness and support in a simple manner.
DEFF Research Database (Denmark)
NJOMO WANDJI, Wilfried
2017-01-01
Three levels are targeted: existence, location, and severity. The proposed algorithm is analytically developed from dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out the computations, including discretization, integration, derivation, and suitable...
Ebersole, M. M.; Lecoq, P. E.
1968-01-01
This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process, together with a description of the computer algorithms used to develop the program mechanization, is presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and program user operating procedures are included as appendixes.
Iba, Hitoshi
2012-01-01
“Practical Applications of Evolutionary Computation to Financial Engineering” presents the state of the art techniques in Financial Engineering using recent results in Machine Learning and Evolutionary Computation. This book bridges the gap between academics in computer science and traders and explains the basic ideas of the proposed systems and the financial problems in ways that can be understood by readers without previous knowledge on either of the fields. To cement the ideas discussed in the book, software packages are offered that implement the systems described within. The book is structured so that each chapter can be read independently from the others. Chapters 1 and 2 describe evolutionary computation. The third chapter is an introduction to financial engineering problems for readers who are unfamiliar with this area. The following chapters each deal, in turn, with a different problem in the financial engineering field describing each problem in detail and focusing on solutions based on evolutio...
Directory of Open Access Journals (Sweden)
seyyed mohammad zargar
2018-03-01
Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite the many benefits this method offers, it has not been universally adopted because of obstacles including security issues, which have become a concern for IT managers in organizations. In this paper, a general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables that affect technology acceptance and, especially, cloud computing technology. Then, using the DEMATEL technique, the influence of these variables and their susceptibility to influence were determined. The researchers also designed a model to show the dynamics at work in cloud computing technology using a system dynamics approach. The validity of the model was confirmed through evaluation methods for dynamics models using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed, and the implementation of these scenarios was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
Uchida, Masafumi
2014-04-01
A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils
Directory of Open Access Journals (Sweden)
Fatimah Khaleel Ibrahim
2017-08-01
Soft computing techniques such as the Artificial Neural Network (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression (MLR) models for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction and CBR tests were carried out. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model, and the ANN model with all input parameters gives better outcomes than the other ANN models.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
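The soft-threshold filtering at the core of such methods is a one-line operator; a minimal sketch (illustrative, applied here to an image gradient rather than a full TDM-STF reconstruction loop):

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink values toward zero by t and zero out anything below t;
    # this is the proximal operator of the L1 norm used to promote
    # sparsity in compressive-sensing reconstruction.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Small differences (noise) are suppressed while large jumps (edges)
# survive, which is why the operator pairs well with total-variation-
# style difference minimization.
img = np.random.rand(8, 8)
gx = np.diff(img, axis=1)
gx_filtered = soft_threshold(gx, 0.1)
```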
Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang
2011-01-01
To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.
Unified commutation-pruning technique for efficient computation of composite DFTs
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality; in this sense the DFTCOMM method outperforms the existing competing pruning techniques reported in the literature. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with ...
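One classic pruning modality, single-bin evaluation by second-order recursive filtering, can be illustrated with the Goertzel recursion; a minimal sketch (not the DFTCOMM algorithm itself):

```python
import numpy as np

def goertzel(x, k):
    # Second-order recursive evaluation of the single DFT bin X[k],
    # useful when only a few output bins are needed (pruned output).
    n = len(x)
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return np.exp(1j * w) * s_prev - s_prev2

x = np.random.rand(64)
assert np.allclose(goertzel(x, 5), np.fft.fft(x)[5])
```

Computing m bins this way costs O(mN) operations, which beats a full FFT only for small m; choosing among such modalities as a function of the input length and sparsity is precisely the commutation problem DFTCOMM addresses.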
International Nuclear Information System (INIS)
Siti Nur Syatirah Ismail
2012-01-01
The study was conducted to compare the digital image quality of DEF-10 obtained by simulation and by computed radiography (CR) techniques. The sample used is steel DEF-10 with a thickness of 15.28 mm. In this study, the sample is exposed to radiation from an X-ray machine (ISOVOLT Titan E) with specified parameters: the current and distance are fixed at 3 mA and 700 mm respectively, while the applied voltage is varied at 140, 160, 180 and 200 kV. The exposure time is reduced by 0, 20, 40, 60 and 80% for each sample exposure. The digital images of the simulation were produced with the aRTist software, whereas the digital images of computed radiography were produced from an imaging plate. Both sets of images were then compared qualitatively (sensitivity) and quantitatively (Signal-to-Noise Ratio, SNR; Basic Spatial Resolution, SRb; and LOP size) using the Isee software. Radiographic sensitivity is indicated by the Image Quality Indicator (IQI), i.e. the ability of the CR system and the aRTist software to identify a wire-type IQI when the exposure time is reduced by up to 80% according to the exposure chart (D7; ISOVOLT Titan E). The thinnest wire resolved in the radiographs from both simulation and CR is wire number 7, rather than wire number 8 as required by the standard. In the quantitative comparison, this study shows that the SNR values decrease with reduced exposure time; the SRb values increase for simulation and decrease for CR as the exposure time decreases, and good image quality can be achieved at 80% reduced exposure time. High SNR and SRb values produced good image quality in the CR and simulation techniques respectively. (author)
Longo, F; Nicetto, T; Banzato, T; Savio, G; Drigo, M; Meneghello, R; Concheri, G; Isola, M
2018-02-01
The aim of this ex vivo study was to test a novel three-dimensional (3D) automated computer-aided design (CAD) method (aCAD) for the computation of femoral angles in dogs from 3D reconstructions of computed tomography (CT) images. The repeatability and reproducibility of three techniques, manual radiography, manual CT reconstruction and the aCAD method, were evaluated for the measurement of three femoral angles: (1) anatomical lateral distal femoral angle (aLDFA); (2) femoral neck angle (FNA); and (3) femoral torsion angle (FTA). Femoral angles of 22 femurs obtained from 16 cadavers were measured by three blinded observers. Measurements were repeated three times by each observer for each diagnostic technique. Femoral angle measurements were analysed using a mixed effects linear model for repeated measures to determine the levels of intra-observer agreement (repeatability) and inter-observer agreement (reproducibility). Repeatability and reproducibility of measurements using the aCAD method were excellent (intra-class correlation coefficients, ICCs≥0.98) for all three angles assessed. Manual radiography and CT exhibited excellent agreement for the aLDFA measurement (ICCs≥0.90); however, FNA repeatability and reproducibility were poor (ICCs ...). The 3D aCAD method provided the highest repeatability and reproducibility among the tested methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Head and neck computed tomography virtual endoscopy: evaluation of a new imaging technique.
Gallivan, R P; Nguyen, T H; Armstrong, W B
1999-10-01
To evaluate a new radiographic imaging technique: computed tomography virtual endoscopy (CTVE) for head and neck tumors. Twenty-one patients presenting with head and neck masses who underwent axial computed tomography (CT) scan with contrast were evaluated by CTVE. Comparisons were made with video-recorded images and operative records to evaluate the potential utility of this new imaging technique. Twenty-one patients with aerodigestive head and neck tumors were evaluated by CTVE. One patient had a nasal cylindrical cell papilloma; the remainder, squamous cell carcinomas distributed throughout the upper aerodigestive tract. Patients underwent complete head and neck examination, flexible laryngoscopy, axial CT with contrast, CTVE, and in most cases, operative endoscopy. Available clinical and radiographic evaluations were compared and correlated to CTVE findings. CTVE accurately demonstrated abnormalities caused by intraluminal tumor, but where there was apposition of normal tissue against tumor, inaccurate depictions of surface contour occurred. Contour resolution was limited, and mucosal irregularity could not be defined. There was very good overall correlation between virtual images, flexible laryngoscopic findings, rigid endoscopy, and operative evaluation in cases where oncological resections were performed. CTVE appears to be most accurate in evaluation of subglottic and nasopharyngeal anatomy in our series of patients. CTVE is a new radiographic technique that provides surface-contour details. The technique is undergoing rapid technical evolution, and although the image quality is limited in situations where there is apposition of tissue folds, there are a number of potential applications for this new imaging technique.
Techniques and environments for big data analysis parallel, cloud, and grid computing
Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name
2016-01-01
This volume aims at a wide range of readers and researchers in the area of Big Data by presenting the recent advances in the field of Big Data Analysis, as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.
Artifact Elimination Technique in Tomogram of X-ray Computed Tomography
International Nuclear Information System (INIS)
Rasif Mohd Zain
2015-01-01
Artifacts are among the most common problems in X-ray computed tomography; they appear in the tomogram due to noise, beam hardening, and scattered radiation. The study was carried out using a CdTe Timepix detector. A new technique was developed to eliminate the artifacts in both hardware and software. The hardware setup involved the careful alignment of all components of the system and the introduction of a beam collimator, while the software development dealt with flat-field correction, noise filtering and the data projection algorithm. The results show that the developed technique produces good quality images and eliminates the artifacts. (author)
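The flat-field correction mentioned above follows the standard normalization of each projection by dark and open-beam images; a minimal sketch with illustrative arrays:

```python
import numpy as np

# Illustrative stand-ins: a projection with the sample, a dark image
# (detector offset, no beam) and a flat image (open beam, no sample).
raw = np.random.rand(256, 256) * 1000
dark = np.full((256, 256), 20.0)
flat = np.random.rand(256, 256) * 900 + 100

# Standard flat-field correction: removes fixed-pattern detector
# artifacts before the projections enter the reconstruction algorithm.
corrected = (raw - dark) / np.clip(flat - dark, 1e-6, None)
```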
Srisamran, Supree
This dissertation examines the potential impacts of three electricity policies on the economy of Thailand in terms of macroeconomic performance, income distribution, and unemployment rate. The three considered policies feature responses to potential disruption of imported natural gas used in electricity generation, alternative combinations (portfolios) of fuel feedstock for electricity generation, and increases in investment and local electricity consumption. The evaluation employs a Computable General Equilibrium (CGE) approach with the extension of an electricity generation and transmission module to simulate the counterfactual scenario for each policy. The dissertation consists of five chapters. Chapter one begins with a discussion of Thailand's economic condition, followed by a discussion of the current state of electricity generation and consumption and current issues in power generation. The security of imported natural gas in power generation is then briefly discussed: disruptions of imported natural gas have repeatedly caused difficulties for the country, yet the economic consequences of such disruptions have not been evaluated. The current portfolio of power generation and the concerns it raises are then presented; the portfolio is heavily reliant upon natural gas and so needs to be diversified. Lastly, the anticipated increase in investment and electricity consumption as a consequence of regional integration is discussed. Chapter two introduces the CGE model, its background and limitations. Chapter three reviews relevant literature on the CGE method and its application to electricity policies. In addition, the submodule characterizing the network of electricity generation and distribution and the method of its integration with the CGE model are explained. Chapter four presents the findings of the policy simulations. The first simulation illustrates the consequences of responses to disruptions in natural gas imports.
Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models
Parke, F. I.
1981-01-01
Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and to communicate information about the flow to others effectively. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.
Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids
DEFF Research Database (Denmark)
Creixell Mediante, Ester
In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed, accounting for irregularities in the contact surfaces inherent to the manufacturing process of the parts. Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly...
Marwala, Tshilidzi
2010-01-01
Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...
Advanced technique for computing fuel combustion properties in pulverized-fuel fired boilers
Energy Technology Data Exchange (ETDEWEB)
Kotler, V.R. (Vsesoyuznyi Teplotekhnicheskii Institut (Russian Federation))
1992-03-01
Reviews foreign technical reports on advanced techniques for computing fuel combustion properties in pulverized-fuel fired boilers and analyzes a technique developed by Combustion Engineering, Inc. (USA). Characteristics of 25 fuel types, including 19 grades of coal, are listed along with a diagram of an installation with a drop tube furnace. Characteristics include burn-out intensity curves obtained using thermogravimetric analysis for high-volatile bituminous, semi-bituminous and coking coal. The patented LFP-SKM mathematical model is used to model combustion of a particular fuel under given conditions. The model allows for fuel particle size, air surplus, load, flame height, and portion of air supplied as tertiary blast. Good agreement between computational and experimental data was observed. The method is employed in designing new boilers as well as converting operating boilers to alternative types of fuel. 3 refs.
Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques
Directory of Open Access Journals (Sweden)
O. P. Bharti
2017-06-01
This manuscript illustrates the controller design for a doubly fed induction generator (DFIG) based variable speed wind turbine using a bio-inspired scheme. The methodology is based on exploiting two proficient swarm-intelligence-based evolutionary soft computational procedures: the particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller designs for the DFIG-based WECS using PSO and BFO are described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active and reactive power, and DC-link voltage are slightly improved with the evolutionary soft computational procedures. Lastly, the obtained output is compared with a standard technique for performance improvement of DFIG-based wind energy conversion systems.
Integration of computational modeling and experimental techniques to design fuel surrogates
DEFF Research Database (Denmark)
Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree
2017-01-01
performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer-aided, model-based technique, "Mixed Integer Non-Linear Programming" (MINLP)... The tools of the Virtual Process-Product Design Laboratory (VPPD-Lab) are applied to the defined compositions of the surrogate gasoline. The aim is primarily to verify the defined composition of the gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with greater accuracy, and constraints on the blend design such as the distillation curve and flash point are also considered. A post-design, experiment-based verification step is proposed to further improve and fine-tune the "best" selected gasoline blends following the computational work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...
Computer vision techniques applied to the quality control of ceramic plates
Silveira, Joaquim; Ferreira, Manuel João Oliveira; Santos, Cristina; Martins, Teresa
2009-01-01
This paper presents a system, based on computer vision techniques, that detects and quantifies different types of defects in ceramic plates. It was developed in collaboration with the industrial ceramic sector and consequently it was focused on the defects that are considered more quality depreciating by the Portuguese industry. They are of three main types: cracks; granules and relief surface. For each type the development was specific as far as image processing techn...
Sedlár, Drahomír; Potomková, Jarmila; Rehorová, Jarmila; Seckár, Pavel; Sukopová, Vera
2003-11-01
Information explosion and globalization make great demands on keeping pace with the new trends in the healthcare sector. The contemporary level of computer and information literacy among most health care professionals in the Teaching Hospital Olomouc (Czech Republic) is not satisfactory for efficient exploitation of modern information technology in diagnostics, therapy and nursing. The present contribution describes the application of two basic problem solving techniques (brainstorming, SWOT analysis) to develop a project aimed at information literacy enhancement.
Auditors’ Usage of Computer Assisted Audit Tools and Techniques: Empirical Evidence from Nigeria
Appah Ebimobowei; G.N. Ogbonna; Zuokemefa P. Enebraye
2013-01-01
This study examines the use of computer assisted audit tools and techniques in audit practice in the Niger Delta of Nigeria. To achieve this objective, data was collected from primary and secondary sources. The secondary sources were scholarly books and journals, while the primary source involved a well-structured questionnaire of three sections with thirty-seven items and an average reliability of 0.838. The data collected from the questionnaire were analyzed using relevant descriptive statist...
Assessment of traffic noise levels in urban areas using different soft computing techniques.
Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D
2016-10-01
Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques in traffic noise prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to predictions of commonly used traffic noise models. The results show that application of evolutionary algorithms and neural networks may improve process of development, as well as accuracy of traffic noise prediction.
A Simple Technique for Securing Data at Rest Stored in a Computing Cloud
Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai
"Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.
Technique and results of the spinal computed tomography in the diagnosis of cervical disc disease
International Nuclear Information System (INIS)
Artmann, H.; Salbeck, R.; Grau, H.
1985-01-01
We describe a technique for positioning the patient with traction on the arms during cervical spinal computed tomography, which allows the shoulders to be drawn downwards by about one to three cervical segments. With this method the image quality can be improved in 96% of cases at the cervical segment 6/7 and in 81% at the cervicothoracic segment 7/1, to such a degree that a reliable assessment of the soft tissues in the spinal canal becomes possible. The diagnostic reliability of computed tomography for cervical disc herniation is thus improved, so that the necessity of myelography is decreasing. The results of 396 cervical spinal computed tomographies are presented. (orig.) [de
Energy Technology Data Exchange (ETDEWEB)
Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu
2015-12-31
Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
Research on integrated simulation of fluid-structure system by computation science techniques
International Nuclear Information System (INIS)
Yamaguchi, Akira
1996-01-01
At the Power Reactor and Nuclear Fuel Development Corporation, research on the integrated simulation of fluid-structure systems by computational science techniques has been carried out. Through its achievements, the verification of plant systems, which has depended on large scale experiments, is to be substituted by computational science techniques, with the aim of reducing development costs and attaining the optimization of FBR systems. For this purpose, it is necessary to establish the technology for integrally and accurately analyzing complicated phenomena (simulation technology), the technology for applying it to large scale problems (speed-increasing technology), and the technology for assuring the reliability of the results of analysis when simulation technology is utilized for the permission and approval of FBRs (verification technology). The simulation of fluid-structure interaction, heat flow simulation in spaces with complicated forms, and the related technologies are explained. As for the utilization of computational science techniques, the elucidation of phenomena by numerical experiment and numerical simulation as a substitute for tests are discussed. (K.I.)
International Nuclear Information System (INIS)
Armato, Samuel G. III; Oxnard, Geoffrey R.; MacMahon, Heber; Vogelzang, Nicholas J.; Kindler, Hedy L.; Kocherginsky, Masha; Starkey, Adam
2004-01-01
Our purpose in this study was to evaluate the variability of manual mesothelioma tumor thickness measurements in computed tomography (CT) scans and to assess the relative performance of six computerized measurement algorithms. The CT scans of 22 patients with malignant pleural mesothelioma were collected. In each scan, an initial observer identified up to three sites in each of three CT sections at which tumor thickness measurements were to be made. At each site, five observers manually measured tumor thickness through a computer interface. Three observers repeated these measurements during three separate sessions. Inter- and intra-observer variability in the manual measurement of tumor thickness was assessed. Six automated measurement algorithms were developed based on the geometric relationship between a specified measurement site and the automatically extracted lung regions. Computer-generated measurements were compared with manual measurements. The tumor thickness measurements of different observers were highly correlated (r≥0.99); however, the 95% limits of agreement for relative inter-observer difference spanned a range of 30%. Tumor thickness measurements generated by the computer algorithms also correlated highly with the average of observer measurements (r≥0.93). We have developed computerized techniques for the measurement of mesothelioma tumor thickness in CT scans. These techniques achieved varying levels of agreement with measurements made by human observers
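The 95% limits of agreement quoted above are a Bland-Altman-style statistic; a short sketch with synthetic observer data (the real study used five observers and repeated sessions):

```python
import numpy as np

# Synthetic stand-in for two observers measuring the same tumor sites.
rng = np.random.default_rng(2)
obs_a = rng.uniform(5, 30, size=60)                 # thickness in mm
obs_b = obs_a * (1 + 0.05 * rng.normal(size=60))    # second observer

# Relative difference in percent; mean +/- 1.96 SD gives the 95%
# limits of agreement for inter-observer variability.
rel_diff = (obs_a - obs_b) / ((obs_a + obs_b) / 2) * 100
loa = rel_diff.mean() + 1.96 * rel_diff.std(ddof=1) * np.array([-1, 1])
print(f"95% limits of agreement: {loa[0]:.1f}% to {loa[1]:.1f}%")
```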
International Nuclear Information System (INIS)
Zhang, Hao; Tan, Qiaofeng; Jin, Guofan
2013-01-01
Holographic display is capable of reconstructing the whole optical wave field of a three-dimensional (3D) scene. It is the only one among all the 3D display techniques that can produce all the depth cues. With the development of computing technology and spatial light modulators, computer generated holograms (CGHs) can now be used to produce dynamic 3D images of synthetic objects. Computation holography becomes highly complicated and demanding when it is employed to produce real 3D images. Here we present a novel algorithm for generating a full parallax 3D CGH with occlusion effect, which is an important property of 3D perception, but has often been neglected in fully computed hologram synthesis. The ray casting technique, which is widely used in computer graphics, is introduced to handle the occlusion issue of CGH computation. Horizontally and vertically distributed rays are projected from each hologram sample to the 3D objects to obtain the complex amplitude distribution. The occlusion issue is handled by performing ray casting calculations to all the hologram samples. The proposed algorithm has no restriction on or approximation to the 3D objects, and hence it can produce reconstructed images with correct shading effect and no visible artifacts. Programmable graphics processing unit (GPU) is used to perform parallel calculation. This is made possible because each hologram sample belongs to an independent operation. To demonstrate the performance of our proposed algorithm, an optical experiment is performed to reconstruct the 3D scene by using a phase-only spatial light modulator. We can easily perceive the accommodation cue by focusing our eyes on different depths of the scene and the motion parallax cue with occlusion effect by moving our eyes around. The experiment result confirms that the CGHs produced by our algorithm can successfully reconstruct 3D images with all the depth cues.
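For a single hologram sample the computation reduces to a visibility-weighted sum of spherical waves from the object points; a toy sketch (two points, one occluded; a real implementation casts rays per sample to build the visibility mask and runs on the GPU; all values are illustrative):

```python
import numpy as np

wavelength = 532e-9                 # assumed green laser
k = 2 * np.pi / wavelength

# Object points (x, y, z in metres), their amplitudes, and the result of
# the ray-casting visibility test for this particular hologram sample.
points = np.array([[0.0, 0.0, 0.10], [1e-3, 0.0, 0.12]])
amplitudes = np.array([1.0, 0.8])
visible = np.array([True, False])   # second point occluded

# Complex amplitude at one hologram-plane sample: sum of spherical
# wavefronts from the visible points only.
sample_xy = np.array([0.0, 0.0])
r = np.sqrt(np.sum((points[:, :2] - sample_xy) ** 2, axis=1)
            + points[:, 2] ** 2)
field = np.sum(visible * amplitudes / r * np.exp(1j * k * r))
```

Because every sample's sum is independent, the whole hologram maps naturally onto one GPU thread per sample, which is the parallelism the authors exploit.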
Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique
Directory of Open Access Journals (Sweden)
Nur Azzammudin Rahmat
2016-06-01
Full Text Available Economic dispatch is a crucial process conducted by the utilities to correctly determine the satisfying amount of power to be generated and distributed to the consumers. During the process, the utilities also consider pollutant emission as the consequences of fossil-fuel consumption. Fossil-fuel includes petroleum, coal, and natural gas; each has its unique chemical composition of pollutants i.e. sulphur oxides (SOX, nitrogen oxides (NOX and carbon oxides (COX. This paper presents multi-pollutant emission dispatch problem using computational intelligence technique. In this study, a novel emission dispatch technique is formulated to determine the amount of the pollutant level. It utilizes a pre-developed optimization technique termed as differential evolution immunized ant colony optimization (DEIANT for the emission dispatch problem. The optimization results indicated high level of COX level, regardless of any type of fossil fuel consumed.
Now And Next Generation Sequencing Techniques: Future of Sequence Analysis using Cloud Computing
Directory of Open Access Journals (Sweden)
Radhe Shyam Thakur
2012-12-01
Full Text Available Advancements in the field of sequencing techniques resulted in the huge sequenced data to be produced at a very faster rate. It is going cumbersome for the datacenter to maintain the databases. Data mining and sequence analysis approaches needs to analyze the databases several times to reach any efficient conclusion. To cope with such overburden on computer resources and to reach efficient and effective conclusions quickly, the virtualization of the resources and computation on pay as you go concept was introduced and termed as cloud computing. The datacenter’s hardware and software is collectively known as cloud which when available publicly is termed as public cloud. The datacenter’s resources are provided in a virtual mode to the clients via a service provider like Amazon, Google and Joyent which charges on pay as you go manner. The workload is shifted to the provider which is maintained by the required hardware and software upgradation. The service provider manages it by upgrading the requirements in the virtual mode. Basically a virtual environment is created according to the need of the user by taking permission from datacenter via internet, the task is performed and the environment is deleted after the task is over. In this discussion, we are focusing on the basics of cloud computing, the prerequisites and overall working of clouds. Furthermore, briefly the applications of cloud computing in biological systems, especially in comparative genomics, genome informatics and SNP detection with reference to traditional workflow are discussed.
Acceleration of FDTD mode solver by high-performance computing techniques.
Han, Lin; Xi, Yanping; Huang, Wei-Ping
2010-06-21
A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on a wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigen mode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with a more than 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigen mode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when the conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
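The per-cell independence that makes FDTD so GPU-friendly is visible even in a one-dimensional toy update loop (normalized units, Courant number 0.5; a sketch, not the 2D compact solver of the paper):

```python
import numpy as np

# 1D Yee grid: E on integer nodes, H on half-integer nodes.
n_cells, n_steps = 200, 500
ez = np.zeros(n_cells)
hy = np.zeros(n_cells - 1)

for t in range(n_steps):
    hy += 0.5 * (ez[1:] - ez[:-1])           # H update from curl of E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])     # E update from curl of H
    ez[n_cells // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source
```

Each array update touches only nearest neighbours, so every cell can be advanced by its own GPU thread; on CUDA hardware this locality is what yields the reported order-of-magnitude speedups.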
Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques
Energy Technology Data Exchange (ETDEWEB)
Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)
2001-05-01
Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method and the millimetre resolution patient anatomy it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97, 96% and absolute averaged difference 6, 14% respectively). (author)
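The down-scaling step itself is simple; a sketch of 'volumetric averaging' on an illustrative permittivity grid (a 'winner-takes-all' variant would instead take each block's most frequent tissue label):

```python
import numpy as np

# Stand-in 2 mm resolution permittivity map, 50^3 voxels.
eps_fine = np.random.rand(50, 50, 50)

# Volumetric averaging to 1 cm resolution: each coarse voxel is the mean
# of a 5x5x5 block of fine voxels.
f = 5
eps_coarse = (eps_fine
              .reshape(10, f, 10, f, 10, f)
              .mean(axis=(1, 3, 5)))
```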
Directory of Open Access Journals (Sweden)
Jesús Botero Garcia
2011-10-01
Using a computable general equilibrium model calibrated for Colombia, the impact of various economic policies that affect the relative price of production factors is analyzed. It is concluded that incentives for investment, which can be interpreted as actions that lower the cost of capital, nevertheless encourage the accumulation of capital and thereby increase the productivity of labour, generating net positive effects on employment. The elimination of payroll taxes, for its part, generates a reduction in the cost of labour, but its overall effect on employment is partially offset by the fiscal measures designed to generate alternative revenues that maintain the benefits associated with these contributions. It is suggested that the ideal scheme would be one that provides investment incentives focused on employment-intensive sectors, while creating adequate social protection networks to address the problems associated with poverty.
Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K
2011-04-01
Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or >10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual
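A sketch of the "instability" rule and the adjusted percent agreement statistic as described above; the data layout is an assumption:

```python
def classify_unstable(rotation_deg, ap_translation_mm):
    """Instability as defined in the study: >4 mm AP translation or >10 deg rotation."""
    return ap_translation_mm > 4.0 or rotation_deg > 10.0

def adjusted_percent_agreement(labels_a, labels_b):
    """Percentage of levels on which two readings assign the same label (APA)."""
    agree = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * agree / len(labels_a)
```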
Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio
2010-01-01
In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.
A neuro-fuzzy computing technique for modeling hydrological time series
Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.
2004-05-01
Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both approaches, and as a result neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to model the river flow of the Baitarani River in Orissa State, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation, etc. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model building process.
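For readers unfamiliar with ANFIS, the forward pass it trains is a first-order Sugeno fuzzy system. A minimal sketch with Gaussian memberships (array shapes and parameter values are placeholders, not those of the river flow model):

```python
import numpy as np

def sugeno_forward(x, centers, sigmas, consequents):
    """One forward pass of a first-order Sugeno fuzzy system.

    x           : input vector of lagged flows, shape (d,)
    centers     : rule centres, shape (r, d)
    sigmas      : membership widths, shape (r, d)
    consequents : linear rule parameters, shape (r, d + 1)
    """
    # Gaussian membership of each input in each rule, combined by product
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)      # (r, d)
    w = mu.prod(axis=1)                                    # rule firing strengths
    w_norm = w / w.sum()                                   # normalisation layer
    # First-order consequent: a linear function of the inputs per rule
    y_rule = consequents[:, :-1] @ x + consequents[:, -1]  # (r,)
    return float(w_norm @ y_rule)
```

In an ANFIS, the centres, widths and consequent parameters of this pass are fitted to the lagged flow data by hybrid least-squares/backpropagation learning.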
Directory of Open Access Journals (Sweden)
Murat Yenisey
2009-10-01
Full Text Available OBJECTIVE: The objective of this study was to compare the pain levels on opposite sides of the maxilla at needle insertion, during delivery of local anesthetic solution and at tooth preparation, for both the conventional and the anterior middle superior alveolar (AMSA) technique with the Wand computer-controlled local anesthesia application. MATERIAL AND METHODS: Pain scores of 16 patients were evaluated with a 5-point verbal rating scale (VRS) and the data were analyzed nonparametrically. Pain differences at needle insertion, during delivery of local anesthetic, and at tooth preparation, for the conventional versus the Wand technique, were analyzed using the Mann-Whitney U test (p=0.01). RESULTS: The Wand technique produced a lower pain level than conventional injection for needle insertion and during delivery of the local anesthetic (p<0.01), whereas no significant difference was found for tooth preparation (p>0.05). CONCLUSIONS: The AMSA technique using the Wand is recommended for prosthodontic treatment because it reduces pain during needle insertion and during delivery of local anaesthetic. However, these two techniques have the same pain levels for tooth preparation.
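The nonparametric comparison described above can be reproduced with standard tooling; a sketch with illustrative VRS scores (not the study's data):

```python
from scipy.stats import mannwhitneyu

# 5-point VRS pain scores at needle insertion; values are illustrative only
conventional = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 3, 4, 2, 3]
wand         = [1, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1, 0, 1, 1, 2, 1]

stat, p = mannwhitneyu(conventional, wand, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # compare p against the 0.01 level used in the study
```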
Micro-computed tomography and bond strength analysis of different root canal filling techniques
Directory of Open Access Journals (Sweden)
Juliane Nhata
2014-01-01
Full Text Available Introduction: The aim of this study was to evaluate the quality and bond strength of three root filling techniques (lateral compaction, continuous wave of condensation and Tagger's hybrid technique [THT]) using micro-computed tomography (micro-CT) images and push-out tests, respectively. Materials and Methods: Thirty mandibular incisors were prepared using the same protocol and randomly divided into three groups (n = 10): lateral condensation technique (LCT), continuous wave of condensation technique (CWCT), and THT. All specimens were filled with gutta-percha (GP) cones and AH Plus sealer. Five specimens of each group were randomly chosen for micro-CT analysis, and all of them were sectioned into 1 mm slices and subjected to push-out tests. Results: Micro-CT analysis revealed fewer empty spaces when GP was heated within the root canals in CWCT and THT than in LCT. Push-out tests showed that LCT and THT had a significantly higher displacement resistance (P < 0.05) when compared to CWCT. Bond strength was lower in the apical and middle thirds than in the coronal thirds. Conclusions: It can be concluded that LCT and THT were associated with higher bond strengths to intraradicular dentine than CWCT. However, LCT was associated with more empty voids than the other techniques.
Directory of Open Access Journals (Sweden)
Dirk Wagenaar
Full Text Available Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT) is freely available and able to reduce metal artifacts using reconstructed images. Although a comparison of the MDT to other MAR techniques exists, a comparison of commercially available MAR techniques is lacking. The aim of this study was therefore to quantify the difference in effectiveness of the currently available MAR techniques of different scanners and the MDT technique. Three vendors were asked to use their preferential CT scanner for applying their MAR techniques. The scans were performed on a Philips Brilliance ICT 256 (S1), a GE Discovery CT 750 HD (S2) and a Siemens Somatom Definition AS Open (S3). The scans were made using an anthropomorphic head and neck phantom (Kyoto Kagaku, Japan). Three amalgam dental implants were constructed and inserted between the phantom's teeth. The average absolute error (AAE) was calculated for all reconstructions in the proximity of the amalgam implants. The commercial techniques reduced the AAE by 22.0±1.6%, 16.2±2.6% and 3.3±0.7% for S1 to S3 respectively. After applying the MDT to uncorrected scans of each scanner the AAE was reduced by 26.1±2.3%, 27.9±1.0% and 28.8±0.5% respectively. The difference in efficiency between the commercial techniques and the MDT was statistically significant for S2 (p=0.004) and S3 (p<0.001), but not for S1 (p=0.63). The effectiveness of MAR differs between vendors. S1 performed slightly better than S2 and both performed better than S3. Furthermore, for our phantom and outcome measure the MDT was more effective than the commercial MAR technique on all scanners.
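A sketch of the outcome measure, assuming a co-registered artifact-free reference image (e.g. the phantom scanned without the amalgam inserts) and a region-of-interest mask near the implants; all names are illustrative:

```python
import numpy as np

def average_absolute_error(corrected, reference, roi_mask):
    """Mean absolute HU difference inside a region of interest near the implants."""
    return np.mean(np.abs(corrected[roi_mask] - reference[roi_mask]))

def aae_reduction_percent(uncorrected, corrected, reference, roi_mask):
    """Percentage by which a MAR technique reduces the AAE of the uncorrected scan."""
    aae_u = average_absolute_error(uncorrected, reference, roi_mask)
    aae_c = average_absolute_error(corrected, reference, roi_mask)
    return 100.0 * (aae_u - aae_c) / aae_u
```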
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
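As background to range-based localization, RSSI is classically mapped to distance through the log-distance path-loss model; the ANFIS/ANN models above replace this analytic inversion with a learned mapping. A sketch with illustrative constants:

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_1m=-45.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model RSSI = RSSI(1 m) - 10 n log10(d).
    rssi_1m and path_loss_exp are illustrative; in practice they are calibrated
    per environment (outdoor vs. indoor velodrome)."""
    return 10 ** ((rssi_1m - rssi_dbm) / (10.0 * path_loss_exp))

# e.g. convert readings from the three anchor nodes (values illustrative)
readings = np.array([-60.0, -63.5, -58.2])
distances = rssi_to_distance(readings)
```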
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
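A minimal sketch of the adaptive bisection ε-constraint idea on a toy bi-objective problem; the pseudospectral transcription of the aircraft dynamics is omitted, and this is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def epsilon_constraint_front(f1, f2, x0, eps_lo, eps_hi, tol=0.5):
    """Approximate a bi-objective Pareto front by minimising f1 subject to
    f2(x) <= eps, bisecting the eps range adaptively."""
    front, stack = [], [(eps_lo, eps_hi)]
    while stack:
        lo, hi = stack.pop()
        eps = 0.5 * (lo + hi)
        res = minimize(f1, x0, constraints=[NonlinearConstraint(f2, -np.inf, eps)])
        if res.success:
            front.append((res.fun, f2(res.x)))
        if hi - lo > tol:               # refine both halves of the eps interval
            stack.extend([(lo, eps), (eps, hi)])
    return sorted(front)

# toy objectives standing in for, e.g., time-like vs fuel-like costs
f1 = lambda x: (x[0] - 1.0) ** 2
f2 = lambda x: (x[0] + 1.0) ** 2
print(epsilon_constraint_front(f1, f2, x0=np.array([0.0]), eps_lo=0.0, eps_hi=4.0))
```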
Data mining technique for a secure electronic payment transaction using MJk-RSA in mobile computing
G. V., Ramesh Babu; Narayana, G.; Sulaiman, A.; Padmavathamma, M.
2012-04-01
Due to the evolution of Electronic Learning (E-Learning), one can easily get desired information on a computer or mobile system connected through the Internet. Currently, E-Learning materials are easily accessible on desktop computer systems, but in the future most of this information will also be available on small digital devices such as mobile phones and PDAs. Most E-Learning materials are paid content, and the customer has to pay the entire amount through a credit/debit card system. Therefore, it is very important to study the security of credit/debit card numbers. The present paper is an attempt in this direction, and a security technique is presented to secure the credit/debit card numbers supplied over the Internet to access E-Learning materials or to make any kind of purchase through the Internet. A well-known method, the Data Cube Technique, is used to design the security model of the credit/debit card system. The major objective of this paper is to design a practical electronic payment protocol that is the safest and most secure mode of transaction. This technique may reduce fake transactions, which are above 20% at the global level.
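For orientation, the RSA primitive underlying the scheme works as below; a textbook toy with tiny primes, illustrative only (the paper's MJk-RSA variant and Data Cube protocol add further layers on top of this):

```python
# Toy RSA demonstration with tiny primes; real deployments use >=2048-bit keys
# and proper padding. The card-number block below is a placeholder value.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                                   # public exponent, coprime with phi
d = pow(e, -1, phi)                      # private exponent (Python 3.8+)

card_digit_block = 1234                  # one block of the card number, < n
cipher = pow(card_digit_block, e, n)     # encrypt: c = m^e mod n
plain  = pow(cipher, d, n)               # decrypt: m = c^d mod n
assert plain == card_digit_block
```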
Application of parallel computing techniques to a large-scale reservoir simulation
International Nuclear Information System (INIS)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten
2001-01-01
Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance.
Weikl, Thomas R.; Hu, Jinglei; Xu, Guang-Kui; Lipowsky, Reinhard
2016-01-01
The adhesion of cell membranes is mediated by the binding of membrane-anchored receptor and ligand proteins. In this article, we review recent results from simulations and theory that lead to novel insights on how the binding equilibrium and kinetics of these proteins are affected by the membranes and by the membrane anchoring and molecular properties of the proteins. Simulations and theory both indicate that the binding equilibrium constant K2D and the on- and off-rate constants of anchored receptors and ligands in their 2-dimensional (2D) membrane environment strongly depend on the membrane roughness from thermally excited shape fluctuations on nanoscales. Recent theory corroborated by simulations provides a general relation between K2D and the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in 3 dimensions (3D). PMID:27294442
Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.
Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B
2012-09-15
Early drug discovery processes rely on hit finding procedures followed by extensive experimental confirmation in order to select high priority hit series which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration.
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Directory of Open Access Journals (Sweden)
Jose M. Moya
2012-08-01
Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
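A minimal sketch of an energy-minimising workload assignment as a greedy heuristic; the paper's policy is heterogeneity- and application-aware, and the task and node fields here are assumptions for illustration:

```python
def assign_tasks(tasks, nodes):
    """Greedy energy-minimising assignment: give each task to the node with the
    lowest energy cost that still has capacity left (assumes capacity suffices).

    tasks : list of (task_id, demand) pairs
    nodes : list of dicts {'id', 'capacity', 'joules_per_unit'}
    """
    assignment = {}
    for task_id, demand in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        candidates = [n for n in nodes if n['capacity'] >= demand]
        best = min(candidates, key=lambda n: n['joules_per_unit'] * demand)
        best['capacity'] -= demand
        assignment[task_id] = best['id']
    return assignment
```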
Ubiquitous green computing techniques for high demand applications in Smart environments.
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
Computer aided production of CAMAC-wired boards by the multiwire technique
Energy Technology Data Exchange (ETDEWEB)
Martini, M; Brehmer, W
1975-10-01
The multiwire technique is a computer-controlled wiring method for the manufacture of circuit boards with insulated conductors. The technical data for production are dimensional drawings of the board and a list of all points which are to be connected; the listing must be in absolute co-ordinates and include a list of all soldering points for component parts and a reproducible print pattern for inscription. For this wiring method, a CAMAC standard board, a layout plan with alpha-numeric symbols, and a computer program which produces the essential technical data were developed. A description of the alpha-numeric symbols, the quality of the program, the recognition and checking of these symbols, and the produced technical data is presented. (auth)
Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.
Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M
2018-05-15
Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled; only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of equilibrium means that the first variation of the excess free energy should vanish and the second variation should be positive. These two conditions alone, however, are not proof that the obtained profiles correspond to the minimum of the excess free energy, and they could not be: it is necessary to check whether the sufficient condition of equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any already published equilibrium profiles of both the droplet and the deformable substrate. A simple model of the equilibrium droplet on a deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, really provide the minimum of the excess free energy. To simplify calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, the validity of Jacobi's condition is verified, which proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate.
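For a one-dimensional excess free energy functional of the standard form, the three conditions discussed above read as follows (a textbook calculus-of-variations summary, not the paper's specific functional):

```latex
% Excess free energy functional: J[h] = \int_a^b F(x, h, h')\,dx
% Necessary condition (first variation vanishes): the Euler-Lagrange equation
\frac{\partial F}{\partial h}-\frac{d}{dx}\frac{\partial F}{\partial h'}=0,
% together with the Legendre condition on the second variation
\frac{\partial^{2} F}{\partial h'^{2}}\ge 0 .
% Jacobi's sufficient condition: the solution u(x) of the accessory equation
\frac{d}{dx}\!\left(F_{h'h'}\,u'\right)-\left(F_{hh}-\frac{d}{dx}F_{hh'}\right)u=0,
% with u(a)=0, u'(a)=1, must have no further zero (conjugate point) in (a,b].
```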
Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.
Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi
2017-02-01
To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Flood is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent and mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation over complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The cost of this additional step is kept small by checking only cells at the wet and dry interface, and the overall computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood inundation model. The proposed ADU method is implemented in the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a two to ten times shorter time while producing the same results as the simulation without the ADU method.
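A minimal sketch of the registration step on a 2-D boolean grid; array names are illustrative and the shallow-water solver itself is omitted:

```python
import numpy as np

def update_active_domain(active, depth):
    """One Automatic Domain Updating (ADU) step: any cell adjacent to a wet
    *registered* cell is registered too, so the solver only ever visits cells
    near the wet/dry interface and skips dry, unregistered cells entirely.

    active : boolean array of registered cells
    depth  : water depth array of the same shape
    """
    wet = active & (depth > 0.0)
    grow = np.zeros_like(active)
    grow[1:, :]  |= wet[:-1, :]   # neighbour below a wet cell
    grow[:-1, :] |= wet[1:, :]    # neighbour above
    grow[:, 1:]  |= wet[:, :-1]   # right neighbour
    grow[:, :-1] |= wet[:, 1:]    # left neighbour
    return active | grow

# usage inside a time loop (illustrative):
#   active = update_active_domain(active, depth)
#   depth = solver_step(depth, active)   # solver skips inactive cells
```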
International Nuclear Information System (INIS)
Kaewlek, T.; Koolpiruck, D.; Thongvigitmanee, S.; Mongkolsuk, M.; Chiewvit, P.; Thammakittiphan, S.
2012-01-01
Metal artifacts are one of the significant problems in computed tomography (CT). Streak lines and air gaps arise from the metal implants of orthopedic patients, such as prostheses, dental brackets, and pedicle screws, and cause incorrect diagnosis and local treatment planning. A common technique to suppress artifacts is window adjustment, but the artifacts still remain in the images. To improve the detail of spine CT images, a variable thresholding technique is proposed in this paper. Three medical cases of spine CT images, categorized by the severity of artifacts (screw heads, one full screw, and two full screws), were investigated. Metal regions were segmented by k-means clustering and then transformed into the sinogram domain. The metal sinogram was identified by the variable thresholding method, and the affected data were replaced with new values estimated by linear interpolation. The modified sinogram was reconstructed by the filtered back-projection algorithm, and the metal region was added back to the modified reconstructed image to produce the final image. The image quality of the proposed technique, the automatic thresholding (Kalender) technique, and the window adjustment technique was compared in terms of noise and signal-to-noise ratio (SNR). The proposed method can reduce metal artifacts between pedicle screws. After processing by our technique, noise in the modified images is reduced (screw heads 121.15 to 73.83, one full screw 160.88 to 94.04, and two full screws 199.73 to 110.05 from the initial image) and the SNR is increased (screw heads 0.87 to 1.88, one full screw 1.54 to 2.82, and two full screws 0.32 to 0.41 from the initial image). The variable thresholding technique can identify a suitable boundary for restoring the missing data. The efficiency of the metal artifact reduction is demonstrated for the cases of partial and full pedicle screws. Our technique can improve the detail of spine CT images better than the automatic thresholding (Kalender) technique, and
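A simplified sketch of sinogram-interpolation MAR in the spirit of the pipeline above, using a fixed metal threshold in place of the paper's k-means segmentation and variable thresholding, and assuming a square input image:

```python
import numpy as np
from skimage.transform import radon, iradon

def interpolation_mar(image, metal_threshold, theta=None):
    """Segment metal, locate its sinogram trace, bridge the trace by linear
    interpolation, reconstruct with filtered back-projection, and paste the
    metal region back into the corrected image."""
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    metal = image > metal_threshold
    sino = radon(image, theta=theta)
    metal_trace = radon(metal.astype(float), theta=theta) > 0
    for j in range(sino.shape[1]):                 # for each projection angle
        col, bad = sino[:, j], metal_trace[:, j]
        good = ~bad
        col[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), col[good])
    corrected = iradon(sino, theta=theta, filter_name="ramp")
    corrected[metal] = image[metal]                # restore the metal region
    return corrected
```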
McBride, Bonnie J.; Gordon, Sanford
1996-01-01
This users manual is the second part of a two-part report describing the NASA Lewis CEA (Chemical Equilibrium with Applications) program. The program obtains chemical equilibrium compositions of complex mixtures with applications to several types of problems. The topics presented in this manual are: (1) details for preparing input data sets; (2) a description of output tables for various types of problems; (3) the overall modular organization of the program with information on how to make modifications; (4) a description of the function of each subroutine; (5) error messages and their significance; and (6) a number of examples that illustrate various types of problems handled by CEA and that cover many of the options available in both input and output. Seven appendixes give information on the thermodynamic and thermal transport data used in CEA; some information on common variables used in or generated by the equilibrium module; and output tables for 14 example problems. The CEA program was written in ANSI standard FORTRAN 77. CEA should work on any system with sufficient storage. There are about 6300 lines in the source code, which uses about 225 kilobytes of memory. The compiled program takes about 975 kilobytes.
DEFF Research Database (Denmark)
Yagüe-Fabra, J.A.; Ontiveros, S.; Jiménez, R.
2013-01-01
Many factors influence the measurement uncertainty when using computed tomography for dimensional metrology applications. One of the most critical steps is the surface extraction phase. An incorrect determination of the surface may significantly increase the measurement uncertainty. This paper presents an edge detection method for the surface extraction based on a 3D Canny algorithm with sub-voxel resolution. The advantages of this method are shown in comparison with the most commonly used technique nowadays, i.e. the local threshold definition. Both methods are applied to reference standards...
DEFF Research Database (Denmark)
Cappellin, Cecilia; Breinbjerg, Olav; Frandsen, Aksel
2008-01-01
An effective technique for extracting the singularity of plane wave spectra in the computation of antenna aperture fields is proposed. The singular spectrum is first factorized into a product of a finite function and a singular function. The finite function is inverse Fourier transformed numerically using the Inverse Fast Fourier Transform, while the singular function is inverse Fourier transformed analytically, using the Weyl identity, and the two resulting spatial functions are then convolved to produce the antenna aperture field. This article formulates the theory of the singularity...
A technique for integrating remote minicomputers into a general computer's file system
Russell, R D
1976-01-01
This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).
Energy Technology Data Exchange (ETDEWEB)
Gonzalez Portilla, M. I.; Marquez, J.
2011-07-01
Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protection shields. Although analytical formulas exist to characterize these shields for certain configurations, the design setup may be very intensive in numerical calculations; therefore, the most efficient way to design the shields is by means of computer programs that calculate dose and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.
Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations
DEFF Research Database (Denmark)
Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong
2015-01-01
Fluid-structure interaction simulations are routinely used in the wind energy industry to evaluate the aerodynamic and structural dynamic performance of wind turbines. Most aero-elastic codes in modern times implement a blade element momentum technique to model the rotor aerodynamics and a modal, multi-body, or finite-element approach to model the turbine structural dynamics. The present paper describes a novel fluid-structure coupling technique which combines a three-dimensional viscous-inviscid solver for horizontal-axis wind-turbine aerodynamics, called MIRAS, and the structural dynamics model used in the aero-elastic code FLEX5. The new code, MIRAS-FLEX, in general shows good agreement with the standard aero-elastic codes FLEX5 and FAST for various test cases. The structural model in MIRAS-FLEX acts to reduce the aerodynamic load computed by MIRAS, particularly near the tip and at high wind...
Directory of Open Access Journals (Sweden)
Mosbeh R. Kaloop
2017-01-01
Full Text Available This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. Input-output prediction models based on the nonlinear Hammerstein-Wiener (NHW) technique and on delay inputs for the adaptive neuro-fuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least square support vector machine techniques for the same case study. In situ data collection and statistical performance measures are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model performance is proven to be better than that of the other models used to detect the pullout capacity of ground anchors.
CASAD -- Computer-Aided Sonography of Abdominal Diseases - the concept of joint technique impact
Directory of Open Access Journals (Sweden)
T. Deserno
2010-03-01
Full Text Available The ultrasound image is the primary input information for every ultrasonic examination. Since both knowledge-based decision support and content-based image retrieval techniques have their own restrictions when used in ultrasound image analysis, the combination of these techniques looks promising for covering the restrictions of one with the advances of the other. In this work we have focused on implementing the proposed combination in the frame of the CASAD (Computer-Aided Sonography of Abdominal Diseases) system, supplying the ultrasound examiner with a diagnostic-assistant tool based on a data warehouse of standard referenced images. This warehouse serves to manifest the diagnosis when the echographist specifies the pathology and then looks through corresponding images to verify his opinion, and to suggest a second opinion by automatic analysis of the annotations of relevant images retrieved from the repository using content-based image retrieval.
Multislice Spiral Computed Tomography of the Heart: Technique, Current Applications, and Perspective
International Nuclear Information System (INIS)
Mahnken, Andreas H.; Wildberger, Joachim E.; Koos, Ralf; Guenther, Rolf W.
2005-01-01
Multislice spiral computed tomography (MSCT) is a rapidly evolving, noninvasive technique for cardiac imaging. Knowledge of the principle of electrocardiogram-gated MSCT and its limitations in clinical routine is needed to optimize image quality. Therefore, the basic technical principle, including the essentials of image postprocessing, is described. Cardiac MSCT imaging was initially focused on coronary calcium scoring, MSCT coronary angiography, and analysis of left ventricular function. Recent studies have also evaluated the ability of cardiac MSCT to visualize myocardial infarction and assess valvular morphology. In combination with experimental approaches toward the assessment of aortic valve function and myocardial viability, cardiac MSCT holds the potential for a comprehensive examination of the heart using one single examination technique.
A technique for transferring a patient's smile line to a cone beam computed tomography (CBCT) image.
Bidra, Avinash S
2014-08-01
Fixed implant-supported prosthodontic treatment for patients requiring a gingival prosthesis often demands that bone and implant levels be apical to the patient's maximum smile line. This is to avoid the display of the prosthesis-tissue junction (the junction between the gingival prosthesis and natural soft tissues) and prevent esthetic failures. Recording a patient's lip position during maximum smile is invaluable for the treatment planning process. This article presents a simple technique for clinically recording and transferring the patient's maximum smile line to cone beam computed tomography (CBCT) images for analysis. The technique can help clinicians accurately determine the need for and amount of bone reduction required with respect to the maximum smile line and place implants in optimal positions.
Equilibrium Arrival Times to Queues
DEFF Research Database (Denmark)
Breinbjerg, Jesper; Østerdal, Lars Peter
We consider a non-cooperative queueing environment where a finite number of customers independently choose when to arrive at a queueing system that opens at a given point in time and serves customers on a last-come first-serve preemptive-resume (LCFS-PR) basis. Each customer has a service time requirement which is identically and independently distributed according to some general probability distribution, and they want to complete service as early as possible while minimizing the time spent in the queue. In this setting, we establish the existence of an arrival time strategy that constitutes a symmetric (mixed) Nash equilibrium, and show that there is at most one symmetric equilibrium. We provide a numerical method to compute this equilibrium and demonstrate by a numerical example that the social efficiency can be lower than the efficiency induced by a similar queueing system that serves customers...
International Nuclear Information System (INIS)
Dall'agnol, Cristina; Barletta, Fernando Branco; Hartmann, Mateus Silveira Martins
2008-01-01
This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both time points, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material in each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and the chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the mean percentage of removed filling material. The analysis of the association between the percentage of filling material removed (high or low) and the proposed techniques by the chi-square test showed a statistically significant difference (p=0.015), as most cases in Group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)
Energy Technology Data Exchange (ETDEWEB)
Dall' agnol, Cristina; Barletta, Fernando Branco [Lutheran University of Brazil, Canoas, RS (Brazil). Dental School. Dept. of Dentistry and Endodontics]. E-mail: fbarletta@terra.com.br; Hartmann, Mateus Silveira Martins [Uninga Dental School, Passo Fundo, RS (Brazil). Postgraduate Program in Dentistry
2008-07-01
This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both time points, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material in each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and the chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the mean percentage of removed filling material. The analysis of the association between the percentage of filling material removed (high or low) and the proposed techniques by the chi-square test showed a statistically significant difference (p=0.015), as most cases in Group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)
Directory of Open Access Journals (Sweden)
Sharifah Bee Abdul Hamid
2014-04-01
Full Text Available This study examines the feasibility of catalytically pretreated biochar derived from the dried exocarp, or fruit peel, of mangosteen with a Group I alkali metal hydroxide (KOH). The pretreated char was activated in a flow of carbon dioxide gas at high temperature to upgrade its physicochemical properties for the removal of copper, Cu(II), cations in a single-solute system. The effects of three independent variables, including temperature, agitation time and concentration, on sorption performance were investigated. Reaction kinetics parameters were determined by linear regression analysis of the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. The regression coefficient (R2) values were best for the pseudo-second-order kinetic model over all the concentration ranges under investigation. This implied that Cu(II) cations were adsorbed mainly by chemical interactions with the surface active sites of the activated biochar. Langmuir, Freundlich and Temkin isotherm models were used to interpret the equilibrium data at different temperatures. Thermodynamic studies revealed that the sorption process was spontaneous and endothermic. The surface area of the activated sample was 367.10 m2/g, whereas before base activation it was only 1.22 m2/g. The results elucidated that the base pretreatment was efficient enough to yield porous carbon with an enlarged surface area, which can successfully eliminate Cu(II) cations from waste water.
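The kinetic and isotherm fits named above are typically performed on linearised forms; for reference, the standard textbook expressions (with q_t the uptake at time t, q_e the uptake at equilibrium, and C_e the equilibrium concentration):

```latex
% Pseudo-second-order kinetics, linearised for regression of t/q_t against t:
\frac{t}{q_t}=\frac{1}{k_2 q_e^{2}}+\frac{t}{q_e}
% Langmuir and Freundlich isotherms in their linearised forms:
\frac{C_e}{q_e}=\frac{1}{q_{\max}K_L}+\frac{C_e}{q_{\max}},\qquad
\ln q_e=\ln K_F+\frac{1}{n}\ln C_e
```

The best-fitting model is then selected by comparing the R2 values of these regressions, as done in the study.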
Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E
2013-04-01
Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs, however, a particular challenge is that marrow displays the characteristics of a fluid, while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational fluid dynamics only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. Results report that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.
NNLO computational techniques: The cases H→γγ and H→gg
Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro
2009-04-01
A large set of techniques needed to compute decay rates at the two-loop level are derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant to show cancellations, e.g. cancellation of collinear divergencies. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ and H→gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV ≤ M_H ≤ 500 GeV, without incongruent large effects around the physical electroweak thresholds, thereby showing that only a complete implementation of the computational scheme keeps two-loop corrections under control.
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, constitute a secret key shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
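A toy numerical sketch of the computational GI step only (correlation reconstruction; the QR encoding and the compressive-sensing solver of the scheme are omitted, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# N random patterns act as the shared key; the bucket values are the only
# data that travel over the channel.
n, n_patterns = 32, 4000
obj = np.zeros((n, n)); obj[8:24, 8:24] = 1.0           # stand-in for a QR-coded image
patterns = rng.random((n_patterns, n, n))               # the shared key
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # one value per pattern

# Correlation (differential GI) reconstruction from bucket values plus key
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / n_patterns
```

A CS solver would replace the final correlation step to recover the image from far fewer bucket measurements, which is where the reduction in transmitted bits comes from.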
An innovative privacy preserving technique for incremental datasets on cloud computing.
Aldeen, Yousra Abdul Alsahib S; Salleh, Mazleena; Aljeroudi, Yazan
2016-08-01
Cloud computing (CC) is a magnificent service-based delivery model with gigantic computer processing power and data storage across connected communication channels. It has imparted an overwhelming technological impetus to the internet-mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-friendly CC services make it possible to deploy sundry applications economically. Meanwhile, simple data sharing has impelled various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, like health services on the cloud, that are built with several economic and operational benefits necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing attacks became mandatory to protect overall data privacy. Typically, diverse application datasets are anonymized with better privacy for owners without providing all secrecy requirements for newly added records. Some proposed techniques have addressed this issue by re-anonymizing the datasets from scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved. Certainly, the distribution of huge dataset volumes across multiple storage nodes limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation.
Applications of soft computing in time series forecasting simulation and modeling techniques
Singh, Pritpal
2016-01-01
This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...
Ion exchange equilibrium constants
Marcus, Y
2013-01-01
Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and
Quantity Constrained General Equilibrium
Babenko, R.; Talman, A.J.J.
2006-01-01
In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In
Evaluation of computer-based NDE techniques and regional support of inspection activities
International Nuclear Information System (INIS)
Taylor, T.T.; Kurtz, R.J.; Heasler, P.G.; Doctor, S.R.
1991-01-01
This paper describes the technical progress during fiscal year 1990 for the program entitled 'Evaluation of Computer-Based nondestructive evaluation (NDE) Techniques and Regional Support of Inspection Activities.' Highlights of the technical progress include: development of a seminar to provide basic knowledge required to review and evaluate computer-based systems; review of a typical computer-based field procedure to determine compliance with applicable codes, ambiguities in procedure guidance, and overall effectiveness and utility; design and fabrication of a series of three test blocks for NRC staff use for training or audit of UT systems; technical assistance in reviewing (1) San Onofre ten year reactor pressure vessel inservice inspection activities and (2) the capability of a proposed phased array inspection of the feedwater nozzle at Oyster Creek; completion of design calculations to determine the feasibility and significance of various sizes of mockup assemblies that could be used to evaluate the effectiveness of eddy current examinations performed on steam generators; and discussion of initial mockup design features and methods for fabricating flaws in steam generator tubes
Review on the applications of the very high speed computing technique to atomic energy field
International Nuclear Information System (INIS)
Hoshino, Tsutomu
1981-01-01
The demand for calculation in the atomic energy field is enormous; the physical and technological knowledge obtained by experiments is summarized into mathematical models and accumulated as computer programs for design, safety analysis and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management and nuclear fusion. In this paper, the demand for calculation speed in the diffusion and transport of neutrons, shielding, technological safety, core control and particle simulation is explained through typical calculations. These calculations are divided into two models: one is the fluid model, which regards physical systems as a continuum, and the other is the particle model, which regards physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of present general-purpose machines is desirable. The calculation techniques of pipeline systems and parallel processor systems are described. As an example of a practical system, the computer network OCTOPUS at the Lawrence Livermore Laboratory is shown. Also, the CHI system at UCLA is introduced. (Kako, I.)
Moro, A. C.; Nadesh, R. K.
2017-11-01
The cloud computing paradigm has transformed the way we do business in today's world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest growing factors in this is mobile cloud computing. With the option of offloading now available, mobile users can offload entire applications onto cloudlets. Given the problems regarding availability and the limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we look at a fast algorithm that decides whether the mobile user should go to a cloudlet or rely on local memory, based on an offloading probability. We have partially implemented the algorithm, which decides whether a task can be carried out locally or given to a cloudlet. But since performing the complete computation is a burden on the mobile devices, in this paper we look to offload it to a cloud. Further, we use a file compression technique before sending the file to the cloud, to reduce the load.
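A sketch of an offloading rule of the kind described, combining the offloading probability with a simple latency comparison; all parameter names are placeholders, not the paper's notation:

```python
def should_offload(p_offload, task_cycles, local_mips, cloudlet_mips,
                   upload_bytes, bandwidth_bps, threshold=0.5):
    """Offload when the precomputed offloading probability exceeds a threshold
    AND the remote time (transfer plus remote compute) beats local execution."""
    t_local = task_cycles / local_mips
    t_remote = upload_bytes * 8.0 / bandwidth_bps + task_cycles / cloudlet_mips
    return p_offload > threshold and t_remote < t_local
```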
Learning-based computing techniques in geoid modeling for precise height transformation
Erol, B.; Erol, S.
2013-03-01
Precise determination of a local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study evaluates learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach to geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new to the precise modeling of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed using the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies than the ANN and MPRE methods. Beyond prediction capability, these methods were also compared and discussed from a practical point of view in the conclusions.
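To give a flavor of the ANN variant described above, here is a minimal sketch (not the authors' code) that fits a geoid undulation surface N(lat, lon) with scikit-learn on synthetic benchmark data; the bounding box loosely mimics the Istanbul area and all numbers are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latlon = rng.uniform([40.8, 28.5], [41.3, 29.5], size=(200, 2))  # benchmarks
# Synthetic undulations in metres (a smooth trend plus a small wavy term).
N = 36.0 + 0.5 * latlon[:, 0] - 0.3 * latlon[:, 1] + 0.05 * np.sin(8 * latlon[:, 1])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0),
)
model.fit(latlon[:150], N[:150])                       # training benchmarks
resid = model.predict(latlon[150:]) - N[150:]          # validation points
print(f"validation RMSE: {np.sqrt(np.mean(resid**2)):.3f} m")
```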
International Nuclear Information System (INIS)
Phelps, M.E.; Huang, S.C.; Hoffman, E.J.; Plummer, D.; Carson, R.
1981-01-01
Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise, with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise, and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not the noise. This approach is therefore referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose, because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible design for a positron CT system with SAT is described.
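For intuition about the trade-off described above, a small synthetic sketch (background on reconstruction filter cutoffs, not the SAT method itself) shows that raising the cutoff frequency of a ramp filter amplifies noise in a projection:

```python
import numpy as np

def ramp_filter(n_bins, cutoff_frac):
    """Ramp |f| filter, zeroed above cutoff_frac of the Nyquist frequency."""
    f = np.fft.fftfreq(n_bins)                  # cycles/sample, Nyquist = 0.5
    H = np.abs(f)
    H[np.abs(f) > cutoff_frac * 0.5] = 0.0
    return H

# A flat projection plus measurement noise: any structure remaining after
# ramp filtering is amplified noise, and it grows with the cutoff.
proj = 1.0 + np.random.default_rng(1).normal(0.0, 0.05, 256)
for cutoff in (0.4, 1.0):
    filtered = np.fft.ifft(np.fft.fft(proj) * ramp_filter(256, cutoff)).real
    print(f"cutoff {cutoff:.1f} x Nyquist -> filtered std {filtered.std():.4f}")
```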
Energy Technology Data Exchange (ETDEWEB)
Kampschulte, M.; Sender, J.; Litzlbauer, H.D.; Althoehn, U.; Schwab, J.D.; Alejandre-Lafont, E.; Martels, G.; Krombach, G.A. [University Hospital Giessen (Germany). Dept. of Diagnostic and Interventional Radiology; Langheinirch, A.C. [BG Trauma Hospital Frankfurt/Main (Germany). Dept. of Diagnostic and Interventional Radiology
2016-02-15
Nano-computed tomography (nano-CT) is an emerging, high-resolution cross-sectional imaging technique and represents a technical advancement of the established micro-CT technology. Based on the application of a transmission target X-ray tube, the focal spot size can be decreased down to diameters less than 400 nanometers (nm). Together with specific detectors and examination protocols, a superior spatial resolution up to 400 nm (10 % MTF) can be achieved, thereby exceeding the resolution capacity of typical micro-CT systems. The technical concept of nano-CT imaging as well as the basics of specimen preparation are demonstrated exemplarily. Characteristics of atherosclerotic plaques (intraplaque hemorrhage and calcifications) in a murine model of atherosclerosis (ApoE{sub (-/-)}/LDLR{sub (-/-)} double knockout mouse) are demonstrated in the context of superior spatial resolution in comparison to micro-CT. Furthermore, this article presents the application of nano-CT for imaging cerebral microcirculation (murine), lung structures (porcine), and trabecular microstructure (ovine) in contrast to micro-CT imaging. This review shows the potential of nano-CT as a radiological method in biomedical basic research and discusses the application of experimental, high resolution CT techniques in consideration of other high resolution cross-sectional imaging techniques.
Gunjan, Vinit
2015-01-01
This Brief highlights informatics and related techniques for computer science professionals, engineers, medical doctors, bioinformatics researchers and other interdisciplinary researchers. Chapters cover the bioinformatics of diabetes and several computational algorithms and statistical analysis approaches to effectively study the disorder and its possible causes, along with medical applications.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, can itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
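For reference, a compact sketch of the Morris elementary-effects screening cited above, applied to a toy model (this illustrates the SA method being tested, not the MVA convergence check itself):

```python
import numpy as np

def morris_mu_star(model, n_params, n_traj=20, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) per parameter."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)   # random base point
        y = model(x)
        for i in rng.permutation(n_params):           # one-at-a-time steps
            x[i] += delta
            y_new = model(x)
            ee[t, i] = (y_new - y) / delta
            y = y_new
    return np.abs(ee).mean(axis=0)

g = lambda x: x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]     # toy model
print(morris_mu_star(g, n_params=3))                  # roughly [1.0, ~2.1, 0.1]
```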
International Nuclear Information System (INIS)
Huda, Walter; Lieberman, Kristin A.; Chang, Jack; Roskopf, Marsha L.
2004-01-01
We investigated how patient age, size and composition, together with the choice of x-ray technique factors, affect radiation doses in head computed tomography (CT) examinations. Head size dimensions, cross-sectional areas, and mean Hounsfield unit (HU) values were obtained from head CT images of 127 patients. For radiation dosimetry purposes, patients were modeled as uniform cylinders of water. Dose computations were performed for 18 x 7 mm sections, scanned at a constant 340 mAs, for x-ray tube voltages ranging from 80 to 140 kV. Values of mean section dose, energy imparted, and effective dose were computed for patients ranging from newborns to adults. There was a rapid growth of head size over the first two years, followed by a more modest increase until the age of 18 or so. Newborns have a mean HU value of about 50 that monotonically increases with age over the first two decades of life. Average adult A-P and lateral dimensions were 186±8 mm and 147±8 mm, respectively, with an average HU value of 209±40. An infant head was found to be equivalent to a water cylinder with a radius of ∼60 mm, whereas an adult head had an equivalent radius 50% greater. Adult male head dimensions are about 5% larger than female ones, and their average x-ray attenuation is ∼20 HU greater. For adult examinations performed at 120 kV, typical values were 32 mGy for the mean section dose, 105 mJ for the total energy imparted, and 0.64 mSv for the effective dose. Increasing the x-ray tube voltage from 80 to 140 kV increases patient doses by about a factor of 5. For the same technique factors, mean section doses in infants are 35% higher than in adults. Energy imparted for adults is 50% higher than for infants, but infant effective doses are four times higher than for adults. CT doses therefore need to take into account patient age, head size, and composition, as well as the selected x-ray technique factors.
Anderson, B. H.; Putt, C. W.; Giamati, C. C.
1981-01-01
Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer-generated color graphics were found useful both in reconstructing the measured flow field from low-resolution experimental data, giving more physical meaning to this information, and in scanning and interpreting the large volume of computer-generated data from the three-dimensional viscous computer code used in the analysis.
Duz, Marco; Marshall, John F; Parkin, Tim
2017-06-29
The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary, but they require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, containing terms that identify a condition of interest. Words in the inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously removed by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from the cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded, using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free
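The dictionary cascade described above is straightforward to mimic in code; the sketch below uses invented terms rather than the study's actual dictionaries or the SS/WS software:

```python
# Inclusion -> exclusion -> re-inclusion cascade (illustrative terms only).
include = {"colic", "colicky"}
exclude_phrases = {"no signs of colic", "ruled out colic"}
reinclude_phrases = {"recurrent colic despite"}

def classify(rows):
    """Return indices of rows classified as cases."""
    low = [r.lower() for r in rows]
    cases = {i for i, r in enumerate(low) if any(t in r for t in include)}
    removed = {i for i in cases if any(p in low[i] for p in exclude_phrases)}
    readded = {i for i in removed if any(p in low[i] for p in reinclude_phrases)}
    return (cases - removed) | readded

rows = ["Horse presented with colic signs",
        "Examination: no signs of colic",
        "Routine vaccination only"]
print(sorted(classify(rows)))   # -> [0]
```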
A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.
Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit
2017-07-01
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
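As a baseline for comparison, the conventional pipeline mentioned above (Otsu thresholding plus marker-based watershed) can be sketched with scikit-image; a real H&E image would first need color deconvolution, so a synthetic grayscale image stands in here:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, filters, segmentation

def segment_nuclei(gray):
    """Otsu + watershed baseline; assumes nuclei are darker than background."""
    mask = gray < filters.threshold_otsu(gray)
    distance = ndi.distance_transform_edt(mask)
    coords = feature.peak_local_max(distance, min_distance=5, labels=mask)
    peaks = np.zeros(distance.shape, dtype=bool)
    peaks[tuple(coords.T)] = True
    markers, _ = ndi.label(peaks)
    return segmentation.watershed(-distance, markers, mask=mask)

rng = np.random.default_rng(0)
img = rng.normal(0.8, 0.05, (64, 64))   # bright background
img[20:30, 20:30] = 0.3                 # two dark synthetic "nuclei"
img[36:46, 40:50] = 0.3
print(segment_nuclei(img).max(), "nuclei found")
```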
Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study
Directory of Open Access Journals (Sweden)
Carlos Figuera
2016-10-01
Full Text Available The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of the regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of atria and torso anatomy with three different electrical activity patterns (i.e. sinus rhythm, simple AF and complex AF). Body surface potentials (BSPs) were simulated using the Boundary Element Method and corrupted with white Gaussian noise of different powers. The noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using fourteen different regularization techniques. DF, phase maps and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix, were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain of between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable, since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
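Of the fourteen techniques compared, the Tikhonov family is the most compact to illustrate. A minimal zero-order Tikhonov sketch with a random stand-in transfer matrix (not a real torso model) looks like this:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 (zero-order Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32))            # stand-in transfer matrix
x_true = rng.normal(size=32)             # "epicardial" potentials
b = A @ x_true + 0.05 * rng.normal(size=64)   # noisy body-surface potentials
x_hat = tikhonov(A, b, lam=0.1)
print(np.corrcoef(x_true, x_hat)[0, 1])  # correlation coefficient metric
```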
A practical technique for benefit-cost analysis of computer-aided design and drafting systems
International Nuclear Information System (INIS)
Shah, R.R.; Yan, G.
1979-03-01
Analyses of the benefits and costs associated with the operation of Computer-Aided Design and Drafting Systems (CADDS) are needed to derive economic justification for acquiring new systems, as well as to evaluate the performance of existing installations. In practice, however, such analyses are difficult to perform, since most technical and economic advantages of CADDS are "irreducibles", i.e. cannot be readily translated into monetary terms. In this paper, a practical technique for economic analysis of CADDS in a drawing office environment is presented. A "worst case" approach is taken, since the increase in productivity of existing manpower is the only benefit considered, while all foreseen costs are taken into account. Methods of estimating benefits and costs are described. The procedure for performing the analysis is illustrated by a case study based on the drawing office activities at Atomic Energy of Canada Limited. (auth)
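In the same "worst case" spirit (productivity gain as the only benefit counted), a toy benefit-cost calculation might look like the following; every figure is an invented placeholder, not AECL data:

```python
# All numbers are illustrative assumptions, not values from the report.
drafters = 12
annual_cost_per_drafter = 60_000.0   # fully loaded salary, assumed
productivity_gain = 0.25             # fraction of drafting effort saved

capital_cost = 400_000.0             # CADDS purchase, assumed
annual_operating = 50_000.0          # maintenance, training, supplies
years = 5

annual_benefit = drafters * annual_cost_per_drafter * productivity_gain
total_cost = capital_cost + years * annual_operating
print(f"benefit-cost ratio over {years} years: "
      f"{years * annual_benefit / total_cost:.2f}")
```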
Directory of Open Access Journals (Sweden)
Kai-Chun Chang
2012-01-01
Full Text Available Programmed ribosomal frameshifting (PRF) serves as an intrinsic translational regulation mechanism employed by some viruses to control the ratio between structural and enzymatic proteins. Most viral mRNAs which use PRF adopt an H-type pseudoknot to stimulate −1 PRF. The relationship between the thermodynamic stability and the frameshifting efficiency of pseudoknots has not been fully understood. Recently, single-molecule force spectroscopy has revealed that the frequency of −1 PRF correlates with the unwinding forces required for disrupting pseudoknots, and that some of the unwinding work dissipates irreversibly due to the torsional restraint of pseudoknots. Complementary to single-molecule techniques, computational modeling provides insights into global motions of the ribosome, whose structural transitions during frameshifting have not yet been elucidated in atomic detail. Taken together, recent advances in biophysical tools may help to develop antiviral therapies that target the ubiquitous −1 PRF mechanism among viruses.
[Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].
Uesawa, Yoshihiro
2018-01-01
Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
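A volcano plot reduces each drug-event pair to an effect size and a significance value. A minimal sketch with invented counts, where (a, b, c, d) are the cells of the 2x2 contingency table for one pair in a spontaneous-report database:

```python
import numpy as np
from scipy.stats import fisher_exact

# (a, b, c, d): reports with drug+event, drug only, event only, neither.
pairs = {"drugA/nausea": (40, 960, 200, 98800),
         "drugB/rash":   (5, 995, 300, 98700)}

for name, (a, b, c, d) in pairs.items():
    ror = (a / b) / (c / d)                       # reporting odds ratio
    _, p = fisher_exact([[a, b], [c, d]])
    # The volcano plot places log2(ROR) on x and -log10(p) on y.
    print(f"{name}: log2(ROR)={np.log2(ror):+.2f}, -log10(p)={-np.log10(p):.1f}")
```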
Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.
2013-05-01
Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities which have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique 'edge enhanced' contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. A discussion then follows regarding what needs to be considered in the interpretation of the refraction image. Finally, we discuss the limitations of planar refraction imaging and the potential of DEI Computed Tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.
Quantification of ventilated facade efficiency by using computational fluid mechanics techniques
International Nuclear Information System (INIS)
Mora Perez, M.; Lopez Patino, G.; Bengochea Escribano, M. A.; Lopez Jimenez, P. A.
2011-01-01
In some countries, summer overheating is a big problem in a building's energy balance. Ventilated facades are a useful tool in building design, especially in bioclimatic building design. A ventilated facade is a complex, multi-layer structural solution that enables dry installation of the covering elements. The objective of this paper is to quantify the improvement in a building's thermal efficiency when this sort of facade is installed. These improvements are due to the convection produced in the air gap of the facade, which depends on the air movement inside the gap and the heat transmission in this motion. These quantities are mathematically modelled by Computational Fluid Dynamics (CFD) techniques using a commercial code, STAR-CCM+. The proposed method allows an assessment of the energy potential of the ventilated facade and its capacity for cooling. (Author) 23 refs.
Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz
2013-09-01
Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect the method to have a significant impact on improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
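The shape of such an algebraic scheme can be sketched with a Kaczmarz (ART-style) iteration on toy operators, where the differential nature of the data enters through a finite-difference operator D applied after the projector R; real DPC-CT uses proper system matrices, and the smoothing operator mentioned above is omitted here:

```python
import numpy as np

def art(A, b, n_iter=100, relax=1.0):
    """Kaczmarz sweeps for a consistent system A x = b."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in np.nonzero(row_norms)[0]:
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
R = rng.uniform(size=(80, 40))            # toy projection matrix
D = np.diff(np.eye(80), axis=0)           # finite difference along detector
x_true = rng.uniform(size=40)
x_rec = art(D @ R, D @ (R @ x_true))      # reconstruct from differential data
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # small error
```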
Techniques for computing reactivity changes caused by fuel axial expansion in LMR's
International Nuclear Information System (INIS)
Khalil, H.
1988-01-01
An evaluation is made of the accuracy of methods used to compute reactivity changes caused by axial fuel relocation in fast reactors. Results are presented to demonstrate the validity of assumptions commonly made, such as linearity of reactivity with fuel elongation, additivity of local reactivity contributions, and the adequacy of standard perturbation techniques. Accurate prediction of the reactivity loss caused by axial swelling of metallic fuel is shown to require proper representation of the burnup dependence of the expansion reactivity. Some accuracy limitations in the methods used in transient analyses, which are based on the use of fuel worth tables, are identified, and efficient ways to improve accuracy are described. Implementation of these corrections produced expansion reactivity estimates within 5% of a higher-order method for a metal-fueled FFTF core representation. 18 refs., 3 figs., 3 tabs
International Nuclear Information System (INIS)
Yuan, Minghu; Feng, Liqiang; Lü, Rui; Chu, Tianshu
2014-01-01
We show that by introducing the Wigner rotation technique into the solution of the time-dependent Schrödinger equation in the length gauge, computational efficiency can be greatly improved in describing atoms in intense few-cycle circularly polarized laser pulses. The methodology with the Wigner rotation technique underlying our OpenMP parallel computational code for circularly polarized laser pulses is described. Results of test calculations investigating the scaling of the computational code with the number of electronic angular basis functions l, as well as the strong-field phenomena, are presented and discussed for the hydrogen atom.
Directory of Open Access Journals (Sweden)
Santosh Bhattarai
2017-07-01
Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters and a construction parameter are the most influential factors involved in the process, and the relationships between these parameters are nonlinear, complicated and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters and a construction parameter, with the help of two soft computing techniques, namely genetic programming (GP, using the software "Eureqa") and artificial neural networks (ANN). Relationships were developed from data available from a recently constructed high double-curvature concrete arch dam. The values of R for the relationship between the predicted and real cooling times from the GP and ANN models are 0.8822 and 0.9146, respectively. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared with the measured data.
Wheeze sound analysis using computer-based techniques: a systematic review.
Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian
2017-10-31
Wheezes are high-pitched, continuous respiratory acoustic sounds produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that (1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, (2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, and (3) analysis using combinations of features and subgroups of the respiratory cycle has provided a pathway to classifying various diseases or pathologies that stem from airway obstruction.
International Nuclear Information System (INIS)
Ghasemian, Masoud; Ashrafi, Z. Najafian; Sedaghat, Ahmad
2017-01-01
Highlights: • A review of CFD simulation techniques for Darrieus wind turbines is provided. • Recommendations and guidelines toward reliable and accurate simulations are presented. • Different advances in CFD simulation of Darrieus wind turbines are addressed. - Abstract: The threat of global warming, the presence of policies in support of renewable energies, and the desire for clean smart cities are the major drivers of most recent research on developing small wind turbines for urban environments. VAWTs (vertical axis wind turbines) are the most appealing for energy harvesting in the urban environment. This is attributed to the structural simplicity, wind direction independence, absence of a yaw mechanism, tolerance of highly turbulent winds, cost effectiveness, easier maintenance, and lower noise emission of VAWTs. This paper reviews recently published work on CFD (computational fluid dynamics) simulations of Darrieus VAWTs. Recommendations and guidelines are presented for turbulence modeling, spatial and temporal discretization, numerical schemes and algorithms, and computational domain size. Operating and geometrical parameters such as tip speed ratio, wind speed, solidity, blade number and blade shape are fully investigated. The purpose is to address different advances in simulation areas such as blade profile modification and optimization, wind turbine performance augmentation using guide vanes, wind turbine wake interaction in wind farms, wind turbine aerodynamic noise reduction, dynamic stall control, self-starting characteristics, and the effects of unsteady and skewed wind conditions.
The use of automatic programming techniques for fault tolerant computing systems
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems, such as those required by the Space Station, aircraft, nuclear power plants and the like, will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault-tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions, to be used for error detection, as well as the automatic generation of assertions and test cases from abstract data type specifications, are outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors, by exploring alternate paths in the program synthesis tree, is discussed. Some initial thoughts are given on the use of knowledge-based systems for the global detection of abnormal behavior using expectations, and on the goal-directed reconfiguration of resources to meet critical mission objectives. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
Directory of Open Access Journals (Sweden)
B. Raja Singh
2015-01-01
Full Text Available The pulverised coal preparation system (coal mills) is the heart of coal-fired power plants. The complex nature of the milling process, together with the complex interactions between coal quality and mill conditions, leads to immense difficulty in obtaining an effective mathematical model of the milling process. In this paper, the vertical spindle coal mill (bowl mill) widely used in coal-fired power plants is considered for model development, and its pulverised fuel flow rate is computed using the model. For steady-state coal mill model development, plant measurements such as air flow rate, differential pressure across the mill, etc., are considered as inputs/outputs. The mathematical model is derived from analysis of energy, heat and mass balances. An evolutionary computation technique is adopted to identify the unknown model parameters using online plant data. Validation results indicate that this model is accurate enough to represent the whole process of steady-state coal mill dynamics. The coal mill model is being implemented online in a 210 MW thermal power plant, and the results obtained are compared with plant data. The model is found to be accurate and robust and should therefore work well for system monitoring in power plants. The model can thus be used for online monitoring, fault detection, and control to improve the efficiency of combustion.
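The evolutionary identification step can be sketched with SciPy's differential evolution; the one-line "mill model" below is a placeholder for the paper's energy, heat and mass balances, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 100)
true_params = (2.0, 0.5)
plant_output = true_params[0] * (1 - np.exp(-true_params[1] * t))  # "measured"

def model(params):
    k, tau = params                       # placeholder first-order response
    return k * (1 - np.exp(-tau * t))

def loss(params):
    """Mean squared mismatch between simulated and plant output."""
    return np.mean((model(params) - plant_output) ** 2)

res = differential_evolution(loss, bounds=[(0.1, 5.0), (0.01, 2.0)], seed=1)
print(res.x)   # recovered parameters, approximately (2.0, 0.5)
```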
Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography
International Nuclear Information System (INIS)
Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka
1989-01-01
Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as the X-ray detector and the final image is displayed on film. In angiograms performed with CR, the spatial frequency components can be enhanced for easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity- and latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms, where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiograms using CR and a mobile X-ray unit were taken after the manual injection of a small amount of contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times in 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method was found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was performed after the manual injection of contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times in six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)
Brain-computer interface: changes in performance using virtual reality techniques.
Ron-Angevin, Ricardo; Díaz-Estrella, Antonio
2009-01-09
The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of the use of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. A group of subjects was trained using a BCI system, which uses conventional feedback (bar extension), and another group was trained using a BCI system, which submits subjects to a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between both interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve the feedback control, specifically for untrained subjects.
International Nuclear Information System (INIS)
Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern
2010-01-01
This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue by means of the measured displacement field by considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted according to a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques
Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.
2016-06-01
This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach to CNN detector initialization, and in the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that need continuous state assessment during their operational life cycle.
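For the stereo-matching ingredient, here is a sketch of the standard census transform (the paper uses a modified variant): each pixel is encoded by which of its neighbors are darker, and the matching cost between two images is the Hamming distance between codes:

```python
import numpy as np

def census(img, w=3):
    """Census transform with a w x w window (w odd); returns bit codes.
    Borders wrap via np.roll, which is acceptable for a sketch."""
    r = w // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

img = np.random.default_rng(0).integers(0, 255, (32, 32), dtype=np.uint8)
codes = census(img)   # Hamming distance between codes drives stereo matching
```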
Aldeen Yousra, S.; Mazleena, Salleh
2018-05-01
Recent advancements in Information and Communication Technologies (ICT) have created much demand for cloud services to share users' private data. Data from various organizations are a vital information source for analysis and research. Generally, this sensitive or private information involves medical, census, voter registration, social network, and customer services data. A primary concern of cloud service providers in data publishing is to hide the sensitive information of individuals. One of the cloud services that fulfills these confidentiality concerns is Privacy Preserving Data Mining (PPDM). The PPDM service in Cloud Computing (CC) enables data publishing with minimized distortion and absolute privacy. In this method, datasets are anonymized via generalization to meet the privacy requirements. However, the well-known privacy-preserving data mining technique called K-anonymity suffers from several limitations. To surmount those shortcomings, I propose a new heuristic anonymization framework for preserving the privacy of sensitive datasets when publishing on the cloud. The advantages of the K-anonymity, L-diversity and (α, k)-anonymity methods for efficient information utilization and privacy protection are emphasized. Experimental results revealed the superiority of the developed technique over the K-anonymity, L-diversity, and (α, k)-anonymity measures.
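The K-anonymity property the framework builds on is easy to state in code: every combination of quasi-identifiers must occur at least k times in the published data. A minimal check over illustrative records:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in combos.values())

rows = [{"age": "30-40", "zip": "541**", "disease": "flu"},
        {"age": "30-40", "zip": "541**", "disease": "cold"},
        {"age": "20-30", "zip": "542**", "disease": "flu"}]
print(is_k_anonymous(rows, ("age", "zip"), k=2))   # False: one lone combo
```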
Directory of Open Access Journals (Sweden)
Priscila Silveira Salvadori
2013-04-01
Full Text Available OBJECTIVE: To evaluate the need for the equilibrium phase in contrast-enhanced abdominal computed tomography examinations. MATERIALS AND METHODS: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography examinations acquired over a three-month period for a range of clinical indications. For each examination, two reports were issued: one based on the analysis of the non-contrast-enhanced, arterial and portal phases only (first analysis), and a second reading of these phases together with the equilibrium phase (second analysis). At the end of both readings, changes in the primary and secondary diagnoses between the first and second analyses were recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each patient. An extension of Fisher's exact test was used to assess changes in the primary diagnoses (p > 0.999). RESULTS: Regarding the secondary diagnoses, five examinations (2.3%) were modified. CONCLUSION: For clinical indications such as tumor staging, acute abdomen and investigation of abdominal collections, the equilibrium phase adds no significant diagnostic contribution and may be suppressed from examination protocols.
International Nuclear Information System (INIS)
McLinden, Mark O.; Richter, Markus
2016-01-01
Highlights: • A new technique for detecting dew points in fluid mixtures is described. • The method makes use of a two-sinker densimeter. • The technique is based on a quantitative measurement of sample mass adsorbed onto the surface of the densimeter sinkers. • The dew-point density and dew-point pressure are determined with low uncertainty. • The method is applied to the (methane + propane) system and compared to traditional methods. - Abstract: We explore a novel method for determining the dew-point density and dew-point pressure of fluid mixtures and compare it to traditional methods. The (p, ρ, T, x) behavior of three (methane + propane) mixtures was investigated with a two-sinker magnetic suspension densimeter over the temperature range of (248.15–293.15) K; the measurements extended from low pressures into the two-phase region. The compositions of the gravimetrically prepared mixtures were (0.74977, 0.50688, and 0.26579) mole fraction methane. We analyzed isothermal data by: (1) a “traditional” analysis of the intersection of a virial fit of the (p vs. ρ) data in the single-phase region with a linear fit of the data in the two-phase region; and (2) an analysis of the adsorbed mass on the sinker surfaces. We compared these to a traditional isochoric experiment. We conclude that the “adsorbed mass” analysis of an isothermal experiment provides an accurate determination of the dew-point temperature, pressure, and density. However, a two-sinker densimeter is required.
Computed tomography automatic exposure control techniques in 18F-FDG oncology PET-CT scanning.
Iball, Gareth R; Tout, Deborah
2014-04-01
Computed tomography (CT) automatic exposure control (AEC) systems are now used in all modern PET-CT scanners. A collaborative study was undertaken to compare AEC techniques of the three major PET-CT manufacturers for fluorine-18 fluorodeoxyglucose half-body oncology imaging. An audit of 70 patients was performed for half-body CT scans taken on a GE Discovery 690, Philips Gemini TF and Siemens Biograph mCT (all 64-slice CT). Patient demographic and dose information was recorded and image noise was calculated as the SD of Hounsfield units in the liver. A direct comparison of the AEC systems was made by scanning a Rando phantom on all three systems for a range of AEC settings. The variation in dose and image quality with patient weight was significantly different for all three systems, with the GE system showing the largest variation in dose with weight and Philips the least. Image noise varied with patient weight in Philips and Siemens systems but was constant for all weights in GE. The z-axis mA profiles from the Rando phantom demonstrate that these differences are caused by the nature of the tube current modulation techniques applied. The mA profiles varied considerably according to the AEC settings used. CT AEC techniques from the three manufacturers yield significantly different tube current modulation patterns and hence deliver different doses and levels of image quality across a range of patient weights. Users should be aware of how their system works and of steps that could be taken to optimize imaging protocols.
Computing Nash equilibria through computational intelligence methods
Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.
2005-03-01
Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games, as global minima of a real-valued, nonnegative function. An issue of particular interest is detecting more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
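One of the three techniques, differential evolution, can be sketched on a 2x2 bimatrix game: minimize the sum of the players' positive gains from unilateral deviations, a nonnegative function whose global minima (value zero) are exactly the Nash equilibria. The game below is a toy example, not one from the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs (toy game)
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs

def regret(v):
    """Nonnegative; zero exactly at a Nash equilibrium."""
    p = np.array([v[0], 1 - v[0]])        # row player's mixed strategy
    q = np.array([v[1], 1 - v[1]])        # column player's mixed strategy
    r1 = np.maximum(A @ q - p @ A @ q, 0).sum()   # row deviation gains
    r2 = np.maximum(p @ B - p @ B @ q, 0).sum()   # column deviation gains
    return r1 + r2

res = differential_evolution(regret, bounds=[(0, 1), (0, 1)], seed=0)
# For this dominance-solvable game both weights go to ~0 (second strategies).
print(res.x, res.fun)
```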
International Nuclear Information System (INIS)
Balter, H.S.
1994-01-01
This work studies the behaviour of radionuclides in disintegration: activity, decay, and the creation of stable isotopes. It gives definitions of the equilibrium between the activity of the parent and that of the daughter, radioactive decay, stable isotopes, transient equilibrium, and the time of maximum activity. Some consideration is given to generators that permit the separation of two radioisotopes in equilibrium, and to their good performance. Tabs
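A worked example of the parent-daughter quantities defined above (Bateman solution for a two-member chain), using half-lives close to a 99Mo/99mTc generator; branching is ignored for simplicity:

```python
import numpy as np

# Half-lives in hours, close to a 99Mo (parent) / 99mTc (daughter) generator.
t_half_parent, t_half_daughter = 66.0, 6.0
l1 = np.log(2) / t_half_parent
l2 = np.log(2) / t_half_daughter
A1_0 = 1.0                                 # initial parent activity (arb. units)

t = np.linspace(0.0, 72.0, 145)
A1 = A1_0 * np.exp(-l1 * t)                                    # parent
A2 = A1_0 * l2 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))  # daughter

t_max = np.log(l2 / l1) / (l2 - l1)        # time of maximum daughter activity
print(f"t_max = {t_max:.1f} h")            # ~22.8 h for this pair
print(f"A2/A1 at 72 h = {A2[-1] / A1[-1]:.2f}")  # transient equilibrium ~1.1
```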
Experimental determination of thermodynamic equilibrium in biocatalytic transamination
DEFF Research Database (Denmark)
Tufvesson, Pär; Jensen, Jacob Skibsted; Kroutil, Wolfgang
2012-01-01
The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones. Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore
Energy Technology Data Exchange (ETDEWEB)
Parat, Corinne, E-mail: corinne.parat@univ-pau.fr [Université de Pau et des Pays de l’Adour, CNRS UMR 5254, LCABIE, 64000 Pau (France); Pinheiro, J.P. [Université de Lorraine/ENSG, CNRS UMR 7360, LIEC, 54500 Nancy (France)
2015-10-08
This work presents the development of a new probe (ISIDORE probe) based on the hyphenation of a Donnan Membrane Technique (DMT) device to a screen-printed electrode through a flow-cell, to determine the free zinc, cadmium and lead ion concentrations in natural samples such as a freshwater river. The probe displays many advantages, namely: (i) the detection can be performed on-site, which avoids all problems inherent to sampling, transport and storage; (ii) the low volume of the acceptor solution implies shorter equilibration times; (iii) the electrochemical detection system allows monitoring the free ion concentration in the acceptor solution without sampling. - Highlights: • A new probe has been developed for on-site analyses of free metal ions. • A screen-printed electrode has been hyphenated to a DMT device. • Analysis time has been reduced to 6 h, against 36 h when using a classical DMT device. • This new probe has been successfully applied to a synthetic freshwater sample.
Ch. 33 Modeling: Computational Thermodynamics
International Nuclear Information System (INIS)
Besmann, Theodore M.
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
Olijnyk, Nicholas V
2018-01-01
This study performed two phases of analysis to shed light on the performance and thematic evolution of China's quantum cryptography (QC) research. First, large-scale research publication metadata derived from QC research published from 2001-2017 was used to examine the research performance of China relative to that of global peers using established quantitative and qualitative measures. Second, this study identified the thematic evolution of China's QC research using co-word cluster network analysis, a computational science mapping technique. The results from the first phase indicate that over the past 17 years, China's performance has evolved dramatically, placing it in a leading position. Among the most significant findings is the exponential rate at which all of China's performance indicators (i.e., Publication Frequency, citation score, H-index) are growing. China's H-index (a normalized indicator) has surpassed all other countries' over the last several years. The second phase of analysis shows how China's main research focus has shifted among several QC themes, including quantum-key-distribution, photon-optical communication, network protocols, and quantum entanglement with an emphasis on applied research. Several themes were observed across time periods (e.g., photons, quantum-key-distribution, secret-messages, quantum-optics, quantum-signatures); some themes disappeared over time (e.g., computer-networks, attack-strategies, bell-state, polarization-state), while others emerged more recently (e.g., quantum-entanglement, decoy-state, unitary-operation). Findings from the first phase of analysis provide empirical evidence that China has emerged as the global driving force in QC. Considering China is the premier driving force in global QC research, findings from the second phase of analysis provide an understanding of China's QC research themes, which can provide clarity into how QC technologies might take shape. QC and science and technology policy researchers
Prediction of monthly regional groundwater levels through hybrid soft-computing techniques
Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng
2016-10-01
Groundwater systems are intrinsically heterogeneous, with dynamic temporal-spatial patterns that make it very difficult to quantify their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that can effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM effectively classifies the temporal-spatial patterns of regional groundwater levels, the NARX accurately predicts the mean of regional groundwater levels for adjusting the selected SOM, Kriging is used to interpolate the predictions of the adjusted SOM onto finer grids of locations, and consequently a prediction of the monthly regional groundwater level map can be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets, collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations from 2000 to 2013, were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
Experimental investigation of liquid chromatography columns by means of computed tomography
DEFF Research Database (Denmark)
Astrath, D.U.; Lottes, F.; Vu, Duc Thuong
2007-01-01
The efficiency of packed chromatographic columns was investigated experimentally by means of computed tomography (CT) techniques. The measurements were carried out by monitoring tracer fronts in situ inside the chromatographic columns. The experimental results were fitted using the equilibrium di...
Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles
Directory of Open Access Journals (Sweden)
M. Rahman
2005-01-01
This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by permittivity and conductivity profiles which vary only with depth; the scattering problem is thus one-dimensional. The modelling tool is divided into two schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed by a Green's function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within the medium can be calculated by constructing a Green's operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point; it is a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving an inverse scattering problem by reconstructing the permittivity profile of the medium. Though several algorithms could be used to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here because it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined as the time required by the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data by a deconvolution technique. The rest of the task can be performed by applying a numerical approach to determine the different profile parameters. Both solvers have been found to have the
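The deconvolution step that starts the inversion can be illustrated in a few lines. The pulse and kernel below are hypothetical, and a noise-free synthetic measurement is used so that scipy's polynomial deconvolution recovers the kernel exactly.

```python
import numpy as np
from scipy.signal import deconvolve

incident = np.array([1.0, 0.5, 0.25])            # assumed incident pulse samples
kernel_true = np.array([0.3, 0.0, -0.1, 0.05])   # assumed reflection kernel
reflected = np.convolve(incident, kernel_true)   # synthetic measured reflection

# reflected = incident * kernel  =>  kernel = deconvolve(reflected, incident)
kernel_est, remainder = deconvolve(reflected, incident)
print("recovered kernel:", kernel_est.round(3))
```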
DEFF Research Database (Denmark)
Videbaek, C; Friberg, L; Holm, S
1993-01-01
twice, once without receptor blockade and once with a constant degree of partial blockade of the benzodiazepine receptors by infusion of nonradioactive flumazenil (Lanexat) or midazolam (Dormicum). Single photon emission computer tomography and blood sampling were performed intermittently for 6 h after...
Evaluation of two iterative techniques for reducing metal artifacts in computed tomography.
Boas, F Edward; Fleischmann, Dominik
2011-06-01
To evaluate two methods for reducing metal artifacts in computed tomography (CT)--the metal deletion technique (MDT) and the selective algebraic reconstruction technique (SART)--and compare these methods with filtered back projection (FBP) and linear interpolation (LI). The institutional review board approved this retrospective HIPAA-compliant study; informed patient consent was waived. Simulated projection data were calculated for a phantom that contained water, soft tissue, bone, and iron. Clinical projection data were obtained retrospectively from 11 consecutively identified CT scans with metal streak artifacts, with a total of 178 sections containing metal. Each scan was reconstructed using FBP, LI, SART, and MDT. The simulated scans were evaluated quantitatively by calculating the average error in Hounsfield units for each pixel compared with the original phantom. Two radiologists who were blinded to the reconstruction algorithms used qualitatively evaluated the clinical scans, ranking the overall severity of artifacts for each algorithm. P values for comparisons of the image quality ranks were calculated from the binomial distribution. The simulations showed that MDT reduces artifacts due to photon starvation, beam hardening, and motion and does not introduce new streaks between metal and bone. MDT had the lowest average error (76% less than FBP, 42% less than LI, 17% less than SART). Blinded comparison of the clinical scans revealed that MDT had the best image quality 100% of the time (95% confidence interval: 72%, 100%). LI had the second best image quality, and SART and FBP had the worst image quality. On images from two CT scans, as compared with images generated by the scanner, MDT revealed information of potential clinical importance. For a wide range of scans, MDT yields reduced metal streak artifacts and better-quality images than does FBP, LI, or SART. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101782/-/DC1. RSNA, 2011
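Of the four algorithms compared, linear interpolation is the simplest to show in code: metal-affected bins of each projection are replaced by interpolating the neighbouring unaffected bins. The sketch below uses a random stand-in sinogram and an assumed metal trace; it illustrates the LI baseline only, not the MDT algorithm itself.

```python
import numpy as np

sino = np.random.default_rng(2).random((4, 16))  # hypothetical sinogram rows
metal = np.zeros(sino.shape, dtype=bool)
metal[:, 7:10] = True                            # assumed metal-affected bins

for row, mask in zip(sino, metal):
    good = ~mask                                 # interpolate across the metal trace
    row[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(good), row[good])
print(sino[0, 6:11].round(3))                    # bins 7-9 now linearly interpolated
```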
Directory of Open Access Journals (Sweden)
Ehsan Olyaie
2017-05-01
Most of the water quality models previously developed and used in dissolved oxygen (DO) prediction are complex, and reliable data available to develop/calibrate new DO models are scarce. There is therefore a need to develop models that can handle easily measurable parameters of a particular site, even with short record lengths. In recent decades, computational intelligence techniques have created a great change in the prediction of complicated and significant indicators of the state of aquatic ecosystems, such as DO. In this study, three different AI methods were used for DO prediction in the Delaware River at Trenton, USA: (1) two types of artificial neural networks (ANN), namely the multilayer perceptron (MLP) and the radial basis function (RBF) network; (2) an advancement of genetic programming, namely linear genetic programming (LGP); and (3) a support vector machine (SVM) technique. For evaluating the performance of the proposed models, the root mean square error (RMSE), Nash–Sutcliffe efficiency coefficient (NS), mean absolute relative error (MARE) and correlation coefficient (R) statistics were used to choose the best predictive model. The comparison of estimation accuracies of the various intelligence models illustrated that the SVM was able to develop the most accurate model for DO estimation. It was also found that the LGP model performs better than both ANN models. For example, the determination coefficient was 0.99 for the best SVM model, while it was 0.96, 0.91 and 0.81 for the best LGP, MLP and RBF models, respectively. In general, the results indicated that an SVM model could be employed satisfactorily in DO estimation.
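The four evaluation statistics named above have standard definitions, sketched below on hypothetical observed/predicted DO series; this is not code from the study.

```python
import numpy as np

obs = np.array([7.2, 6.8, 8.1, 7.5, 6.9])    # hypothetical observed DO [mg/L]
pred = np.array([7.0, 6.9, 8.3, 7.4, 7.1])   # hypothetical model predictions

rmse = np.sqrt(np.mean((obs - pred) ** 2))
ns = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
mare = np.mean(np.abs(obs - pred) / obs)
r = np.corrcoef(obs, pred)[0, 1]
print(f"RMSE={rmse:.3f}  NS={ns:.3f}  MARE={mare:.3f}  R={r:.3f}")
```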
Directory of Open Access Journals (Sweden)
Srikanth Prabhu
2012-02-01
The role of segmentation in image processing is to separate foreground from background; when appropriate filters are applied to an image, its features become clearly visible. In this paper, emphasis is laid on segmentation of biometric retinal images to filter out the vessels explicitly, in order to evaluate bifurcation points and features for diabetic retinopathy. Segmentation is performed by calculating ridges or by morphology. Ridges are those areas of an image where there is sharp contrast in features. Morphology targets features using structuring elements; structuring elements come in different shapes, such as disks and lines, each used for extracting features of that shape. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques were also deployed to find the contours of the retinal images. After segmentation, it was observed that artifacts in the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of health care management, image segmentation has an important role to play, as it helps determine whether a person is normal or has a disease, especially diabetes. During segmentation, features are classified as diseased or as artifacts; the problem arises when artifacts are classified as diseased, which results in the misclassification discussed in the analysis section. We achieved fast computing with better performance, in terms of speed, for non-repeating features when compared to repeating features.
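A minimal sketch of morphology-driven vessel extraction is shown below on a synthetic image: a line-shaped structuring element in a white top-hat keeps thin, bright, elongated structures. It illustrates the general technique, with assumed parameters, rather than the paper's pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import white_tophat

rng = np.random.default_rng(1)
img = 0.2 * rng.random((128, 128))
img[60:62, 10:120] += 1.0                     # synthetic horizontal "vessel"

line = np.ones((1, 15), dtype=bool)           # line structuring element (assumed length)
tophat = white_tophat(img, footprint=line)    # keeps structures thinner than the line
vessels = tophat > threshold_otsu(tophat)
print(int(vessels.sum()), "candidate vessel pixels")
```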
Patient size and x-ray technique factors in head computed tomography examinations. II. Image quality
International Nuclear Information System (INIS)
Huda, Walter; Lieberman, Kristin A.; Chang, Jack; Roskopf, Marsha L.
2004-01-01
We investigated how patient head characteristics, as well as the choice of x-ray technique factors, affect lesion contrast and noise values in computed tomography (CT) images. Head sizes and mean Hounsfield unit (HU) values were obtained from head CT images for five classes of patients ranging from the newborn to adults. X-ray spectra with tube voltages ranging from 80 to 140 kV were used to compute the average photon energy, and energy fluence, transmitted through the heads of patients of varying size. Image contrast, and the corresponding contrast to noise ratios (CNRs), were determined for lesions of fat, muscle, and iodine relative to a uniform water background. Maintaining a constant image CNR for each lesion, the patient energy imparted was also computed to identify the x-ray tube voltage that minimized the radiation dose. For adults, increasing the tube voltage from 80 to 140 kV changed the iodine HU from 2.62×10⁵ to 1.27×10⁵, the fat HU from -138 to -108, and the muscle HU from 37.1 to 33.0. Increasing the x-ray tube voltage from 80 to 140 kV increased the percentage energy fluence transmission by up to a factor of 2. For a fixed x-ray tube voltage, the percentage transmitted energy fluence in adults was more than a factor of 4 lower than for newborns. For adults, increasing the x-ray tube voltage from 80 to 140 kV improved the CNR for muscle lesions by 130%, for fat lesions by a factor of 2, and for iodine lesions by 25%. As the size of the patient increased from newborn to adults, lesion CNR was reduced by about a factor of 2. The mAs value can be reduced by 80% when scanning newborns while maintaining the same lesion CNR as for adults. Maintaining the CNR of an iodine lesion at a constant level, use of 140 kV increases the energy imparted to an adult patient by nearly a factor of 3.5 in comparison to 80 kV. For fat and muscle lesions, raising the x-ray tube voltage from 80 to 140 kV at a constant CNR increased the patient dose by 37% and 7
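The contrast-to-noise ratio used throughout is simply the lesion contrast in HU divided by the image noise; the numbers below are hypothetical.

```python
hu_lesion, hu_water = -120.0, 0.0   # e.g., a fat-like lesion against water (assumed)
noise_hu = 8.0                      # assumed image noise (HU standard deviation)

cnr = abs(hu_lesion - hu_water) / noise_hu
print(f"CNR = {cnr:.1f}")
```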
Equivalence of Equilibrium Propagation and Recurrent Backpropagation
Scellier, Benjamin; Bengio, Yoshua
2017-01-01
Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms....
Equilibrium fluctuation energy of gyrokinetic plasma
International Nuclear Information System (INIS)
Krommes, J.A.; Lee, W.W.; Oberman, C.
1985-11-01
The thermal equilibrium electric field fluctuation energy of the gyrokinetic model of magnetized plasma is computed, and found to be smaller than the well-known result ⟨E²(k)⟩/8π = (T/2)/[1 + (kλ_D)²] valid for arbitrarily magnetized plasmas. It is shown that, in a certain sense, the equilibrium electric field energy is minimum in the gyrokinetic regime. 13 refs., 2 figs
Kleppe, J.; Borm, P.E.M.; Hendrickx, R.L.P.
2008-01-01
Fall back equilibrium is a refinement of the Nash equilibrium concept. In the underlying thought experiment, each player faces the possibility that, after all players have decided on their actions, his chosen action turns out to be blocked. Therefore, each player has to decide beforehand on a back-up
A New Screening Methodology for Improved Oil Recovery Processes Using Soft-Computing Techniques
Parada, Claudia; Ertekin, Turgay
2010-05-01
The first stage of production of any oil reservoir involves oil displacement by natural drive mechanisms such as solution gas drive, gas cap drive and gravity drainage. Typically, improved oil recovery (IOR) methods are applied to oil reservoirs that have been depleted naturally. In more recent years, IOR techniques have also been applied to reservoirs before their natural drive energy is exhausted by primary depletion. Descriptive screening criteria for IOR methods are used to select the appropriate recovery technique according to the fluid and rock properties. This methodology helps in assessing the most suitable recovery process for field deployment in a candidate reservoir. However, the published screening guidelines neither provide information about the expected reservoir performance nor suggest a set of project design parameters that could be used for optimization of the process. In this study, artificial neural networks (ANN) are used to build a high-performance neuro-simulation tool for screening different improved oil recovery techniques: miscible injection (CO2 and N2), waterflooding and steam injection processes. The simulation tool consists of proxy models that implement a multilayer cascade feedforward back propagation network algorithm. The tool is intended to narrow the range of possible scenarios to be modeled using conventional simulation, reducing the extensive time and energy spent on dynamic reservoir modeling. A commercial reservoir simulator is used to generate the data to train and validate the artificial neural networks. The proxy models are built considering four different well patterns with different well operating conditions as the field design parameters, and different expert systems are developed for each well pattern. The screening networks predict oil production rate and cumulative oil production profiles for a given set of rock and fluid properties and design parameters. The results of this study show that the networks are
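The proxy-model idea can be sketched generically: train a small feedforward network on simulator-generated samples mapping rock/fluid and design parameters to a production response. The data below is a synthetic stand-in for reservoir-simulator output, and the network is a plain scikit-learn MLP rather than the cascade network used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((200, 5))                # porosity, permeability, ... (assumed features)
y = X[:, 0] * X[:, 1] + 0.05 * X[:, 4]  # synthetic response surface

proxy = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
proxy.fit(X[:150], y[:150])             # train on "simulated" cases
print("held-out R^2:", round(proxy.score(X[150:], y[150:]), 3))
```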
Directory of Open Access Journals (Sweden)
Nicolette Cassel
2013-05-01
Computed tomography thoracic angiography studies were performed on five adult beagles using the bolus tracking (BT) technique and the test bolus (TB) technique, which were performed at least two weeks apart. For the BT technique, 2 mL/kg of 300 mgI/mL iodinated contrast agent was injected intravenously, and scans were initiated when the contrast in the aorta reached 150 Hounsfield units (HU). For the TB technique, the dogs received a test dose of 15% of 2 mL/kg of 300 mgI/mL iodinated contrast agent, followed by a series of low-dose sequential scans; the full dose of the contrast agent was then administered and the scans were conducted at optimal times as identified from time attenuation curves. Mean attenuation in HU was measured in the aorta (Ao) and right caudal pulmonary artery (rCPA). Additional observations included the study duration, milliampere (mA), computed tomography dose index volume (CTDIvol) and dose length product (DLP). The attenuation in the Ao (BT = 660.52 HU ± 138.49 HU, TB = 469.82 HU ± 199.52 HU, p = 0.13) and in the rCPA (BT = 606.34 HU ± 143.37 HU, TB = 413.72 HU ± 174.99 HU, p = 0.28) did not differ significantly between the two techniques. The BT technique was conducted in a significantly shorter time period than the TB technique (p = 0.03). The mean mA for the BT technique was significantly lower than for the TB technique (p = 0.03), as was the mean CTDIvol (p = 0.001). The mean DLP did not differ significantly between the two techniques (p = 0.17). No preference was given to either technique when evaluating the Ao or rCPA, but the BT technique was shown to be shorter in duration and resulted in a lower DLP than the TB technique.
Soft Computing Technique and Conventional Controller for Conical Tank Level Control
Directory of Open Access Journals (Sweden)
Sudharsana Vijayan
2016-03-01
In many process industries the control of liquid level is mandatory, but the control of a nonlinear process is difficult. Many process industries use conical tanks because their nonlinear shape provides better drainage for solid mixtures, slurries and viscous liquids. Control of conical tank level is thus a challenging task due to the tank's nonlinearity and continually varying cross-section, and to the square-root relationship between the controlled variable (level) and the manipulated variable (flow rate). The main objective is to implement a suitable controller for a conical tank system to maintain the desired level. System identification of the nonlinear process is done using black-box modelling, and the process is found to be a first order plus dead time (FOPDT) model. In this paper it is proposed to obtain the mathematical model of a conical tank system, to study the system using its block diagram, and then to compare a soft-computing technique (a fuzzy controller) with a conventional controller.
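The square-root nonlinearity and varying cross-section mentioned above follow directly from a mass balance on the cone; a minimal sketch with assumed geometry and valve coefficient:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, H = 0.5, 1.0   # tank radius at the top and total height [m] (assumed)
c = 0.02          # outflow valve coefficient (assumed), q_out = c*sqrt(h)

def level_dynamics(t, h, q_in):
    r = R * h[0] / H                               # radius at the current level
    A = np.pi * r ** 2 + 1e-9                      # cross-section (guarded near h = 0)
    return [(q_in - c * np.sqrt(max(h[0], 0.0))) / A]

sol = solve_ivp(level_dynamics, (0.0, 600.0), [0.2], args=(0.005,), max_step=1.0)
print(f"level settles near {sol.y[0, -1]:.3f} m")  # equilibrium at (q_in/c)**2
```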
Navard, Sharon E.
1989-01-01
In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
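The four-to-one acceptance check described above is a one-line computation; the numbers below are hypothetical.

```python
instrument_tolerance = 0.20   # unit-under-test tolerance, e.g. +/-0.20 N (assumed)
standard_uncertainty = 0.04   # calibrating-standard uncertainty (assumed)

ratio = instrument_tolerance / standard_uncertainty
verdict = "acceptable" if ratio >= 4 else "compute exact calibration uncertainty"
print(f"TUR = {ratio:.1f}:1 -> {verdict}")
```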
Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah
2017-05-01
The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in computed tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated using polymethyl methacrylate (PMMA) material. Three different contrast agents (iodine, barium and gadolinium) were filled in small PMMA tubes and placed inside the water-based PMMA adult abdomen phantom. An orthopedic metal screw was placed in each small PMMA tube separately, and the two types of orthopedic metal screw (stainless steel and titanium alloy) were scanned separately with single-energy CT at 120 kV and dual-energy CT at fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current modulation care4Dose setting, and the scans were performed at different pitch and slice thickness settings. The contrast media technique on orthopedic metal screws was optimised by using pitch = 0.60 and slice thickness = 5.0 mm. The use of contrast media can reduce metal streaking artefacts on CT images, enhance the CT images surrounding the implants, and has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.
Alexandre Teixeira, César; Direito, Bruno; Bandarabadi, Mojtaba; Le Van Quyen, Michel; Valderrama, Mario; Schelter, Bjoern; Schulze-Bonhage, Andreas; Navarro, Vincent; Sales, Francisco; Dourado, António
2014-05-01
The ability of computational intelligence methods to predict epileptic seizures is evaluated in long-term EEG recordings of 278 patients suffering from pharmaco-resistant partial epilepsy, also known as refractory epilepsy. This extensive study in seizure prediction considers 278 patients from the European Epilepsy Database, collected in three epilepsy centres: Hôpital de la Pitié-Salpêtrière, Paris, France; Universitätsklinikum Freiburg, Germany; and Centro Hospitalar e Universitário de Coimbra, Portugal. For a considerable number of patients it was possible to find a patient-specific predictor with acceptable performance, for example predictors that anticipate at least half of the seizures with a false alarm rate of no more than 1 in 6 h (0.15 h⁻¹). We observed that the epileptic focus localization, data sampling frequency, testing duration, number of seizures in testing, type of machine learning, and preictal time significantly influence the prediction performance. The results support an optimistic view of the feasibility of a patient-specific prospective alarming system based on machine learning techniques that combine several univariate (single-channel) electroencephalogram features. We envisage that this work will serve as a benchmark of valuable importance for future studies based on the European Epilepsy Database.
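The acceptance criterion quoted above (at least half the seizures anticipated, at most 0.15 false alarms per hour) is easy to encode; the counts below are hypothetical.

```python
seizures_predicted, seizures_total = 5, 8     # hypothetical test outcome
false_alarms, test_hours = 4, 30.0

sensitivity = seizures_predicted / seizures_total
false_alarm_rate = false_alarms / test_hours  # alarms per hour
print("acceptable predictor:", sensitivity >= 0.5 and false_alarm_rate <= 0.15)
```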
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei
2018-02-01
Based on the discrete algebraic reconstruction technique (DART), this study proposes and tests an improved algorithm for incomplete projection data that generates high-quality reconstructed images by reducing artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher contrast between boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress the noise and to improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data: the SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust for limited-view reconstruction, making the image edges clear while also improving the gray-level distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
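The two figures of merit used above can be sketched as follows on a hypothetical reconstruction; the SNR is computed against a reference image, and the average gradient as the mean magnitude of local intensity change (one common definition, which may differ in detail from the paper's).

```python
import numpy as np

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                       # stand-in ground-truth image
rec = ref + 0.05 * rng.normal(size=ref.shape)    # stand-in reconstruction

snr_db = 10 * np.log10((ref ** 2).sum() / ((ref - rec) ** 2).sum())
gy, gx = np.gradient(rec)
avg_gradient = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))
print(f"SNR = {snr_db:.1f} dB, AG = {avg_gradient:.4f}")
```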
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves which does not require neural networks or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since a human brain approximates the type and degree of infection, a fuzzy decision maker classifies the leaves from images captured on-site, emulating the properties of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented class, using a typical human instant classification approximation as the benchmark, thus holding higher accuracy than a human-eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
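A toy version of such a fuzzy decision on a single colour feature is sketched below; the triangular memberships and hue bands are invented for illustration, not the paper's tuned parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

hue = 0.22                              # normalized hue of a segmented leaf region
healthy = tri(hue, 0.20, 0.33, 0.45)    # assumed green band
deficient = tri(hue, 0.05, 0.15, 0.25)  # assumed yellowish band

print("healthy" if healthy >= deficient else "possibly deficient")
```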
Jules, Kenol; Lin, Paul P.
2001-01-01
This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.
International Nuclear Information System (INIS)
Figedy, Stefan; Smiesko, Ivan
2012-01-01
This article provides brief information about the fundamental features of a newly-developed diagnostic system for early detection and identification of anomalies arising in the water chemistry regime of the primary and secondary circuits of the VVER-440 reactor. This system, called SACHER (System of Analysis of CHEmical Regime), was installed within the major modernization project at the NPP-V2 Bohunice in the Slovak Republic. The SACHER system has been fully developed in the MATLAB environment. It is based on computational intelligence techniques and inserts various intelligent data processing modules, for clustering, diagnosis, prediction, signal validation, etc., into the overall chemical information system. SACHER essentially assists chemists in identifying the current situation regarding anomalies arising in the primary and secondary circuit water chemistry. The system is to be used for diagnostics and data handling; however, it is not intended to replace the presence of experienced chemists, who decide upon corrective actions. (author)
Measurement of liver and spleen volume by computed tomography using point counting technique
International Nuclear Information System (INIS)
Matsuda, Yoshiro; Sato, Hiroyuki; Nei, Jinichi; Takada, Akira
1982-01-01
We devised a new method for measurement of liver and spleen volume by computed tomography using point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method were significantly correlated with the weights of the corresponding organs measured on autopsy or surgical operation, indicating clinical usefulness of this method. Hepatic and splenic volumes were estimated by this method in 43 patients with chronic liver disease and 9 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis was significantly smaller than those in non-hepatobiliary disease and other chronic liver diseases. The mean hepatic volume in alcoholic cirrhosis and alcoholic fibrosis tended to be slightly larger than that in non-hepatobiliary disease. The mean splenic volume in liver cirrhosis was significantly larger than those in non-hepatobiliary disease and other chronic liver diseases. However, there was no significant difference of the mean splenic volume between alcoholic and non-alcoholic cirrhosis. Significantly positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis, but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiological condition of chronic liver diseases. (author)
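The point-counting estimate itself is elementary: overlay a regular grid on each section, count the points falling inside the organ outline, and scale by the area each point represents and the slice thickness. All numbers below are hypothetical.

```python
grid_spacing_cm = 0.5                  # assumed distance between grid points
area_per_point = grid_spacing_cm ** 2  # cm^2 represented by each counted point
slice_thickness_cm = 1.0               # assumed CT slice spacing

points_per_slice = [140, 210, 260, 240, 180, 90]   # hypothetical counts per section
volume_cm3 = sum(points_per_slice) * area_per_point * slice_thickness_cm
print(f"estimated organ volume: {volume_cm3:.0f} cm^3")
```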
New evaluation methods for conceptual design selection using computational intelligence techniques
Energy Technology Data Exchange (ETDEWEB)
Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)
2013-03-15
The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.
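A generic fuzzy weighted average over triangular fuzzy numbers (a, b, c) can be sketched as below; this simplified three-point computation stands in for the NFWA, whose exact algorithm is not reproduced here, and all ratings and weights are hypothetical.

```python
import numpy as np

ratings = np.array([[5, 7, 9], [3, 5, 7], [7, 9, 10.]])  # (a, b, c) score per criterion
weights = np.array([[2, 3, 4], [1, 2, 3], [3, 4, 5.]])   # fuzzy criterion weights

num = (ratings * weights).sum(axis=0)   # [sum a*w_a, sum b*w_b, sum c*w_c]
den = weights.sum(axis=0)               # [sum w_a, sum w_b, sum w_c]
# Lower bound divides by the largest weight sum, upper bound by the smallest.
fwa = num / den[::-1]
print("fuzzy weighted average (a, b, c):", fwa.round(2))
```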
New evaluation methods for conceptual design selection using computational intelligence techniques
International Nuclear Information System (INIS)
Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua
2013-01-01
The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.
Three-dimensional demonstration of liver and spleen by computer graphics technique
International Nuclear Information System (INIS)
Kashiwagi, Toru; Azuma, Masayoshi; Katayama, Kazuhiro; Yoshioka, Hiroaki; Ishizu, Hiromi; Mitsutani, Natsuki; Koizumi, Takao; Takayama, Ichiro
1987-01-01
A three-dimensional demonstration system for the liver and spleen has been developed using computer graphics techniques. Three-dimensional models were constructed from CT images of the organ surface and displayed as wire-frame and/or solid models on a color CRT. The anatomical surface of the liver and spleen could be viewed realistically from any direction. In liver cirrhosis, atrophy of the right lobe, hypertrophy of the left lobe and splenomegaly were displayed vividly. The liver and a hepatoma were displayed as wire-frame and solid models, respectively, on the same image; this combined display clarified the intrahepatic location of the hepatoma together with the configuration of the liver and hepatoma. Furthermore, superimposed display of the three-dimensional models and a celiac angiogram enabled us to understand the location and configuration of lesions more easily than the original CT data or angiogram alone. It is therefore expected that this system will be clinically useful for noninvasive evaluation of patho-morphological changes of the liver and spleen. (author)
Barone, Sandro; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-01
Orthodontic treatments are usually performed using fixed brackets or removable oral appliances, which are traditionally made from alginate impressions and wax registrations. Among removable devices, eruption guidance appliances are used for early orthodontic treatment in order to intercept and prevent malocclusion problems. Commercially available eruption guidance appliances, however, are symmetric devices produced in a few standard sizes. For this reason, they are not able to meet all of a specific patient's needs, since actual dental anatomies present various geometries and asymmetric conditions. In this article, a computer-aided design-based methodology for the design and manufacturing of patient-specific eruption guidance appliances is presented. The proposed approach is based on the digitalization of several steps of the overall process: from the digital reconstruction of the patient's anatomy to the manufacturing of the customized appliance. A finite element model has been developed to evaluate the temporomandibular joint disk stress levels caused by using symmetric eruption guidance appliances under different teeth misalignment conditions. The developed model can then be used to guide the design of a patient-specific appliance with the aim of reducing patient discomfort. For this purpose, two different customization levels are proposed in order to address both arch-level and single-tooth misalignment issues. A low-cost manufacturing process, based on an additive manufacturing technique, is finally presented and discussed.
Yang, J; Feng, H L
2018-04-09
With the rapid development of chair-side computer-aided design and computer-aided manufacture (CAD/CAM) technology, its accuracy and operability have been greatly improved in recent years. Chair-side CAD/CAM systems can produce all kinds of indirect restorations, with the advantages of rapid, accurate and stable production, and they represent a future development direction of stomatology. This paper describes the clinical application of chair-side CAD/CAM technology for anterior aesthetic restorations from the aspects of shade and shape.
MHD equilibrium identification on ASDEX-Upgrade
International Nuclear Information System (INIS)
McCarthy, P.J.; Schneider, W.; Lakner, K.; Zehrfeld, H.P.; Buechl, K.; Gernhardt, J.; Gruber, O.; Kallenbach, A.; Lieder, G.; Wunderlich, R.
1992-01-01
A central activity accompanying the ASDEX-Upgrade experiment is the analysis of MHD equilibria. There are two different numerical methods available, both using magnetic measurements which reflect equilibrium states of the plasma. The first method proceeds via a function parameterization (FP) technique, which uses in-vessel magnetic measurements to calculate up to 66 equilibrium parameters. The second method applies an interpretative equilibrium code (DIVA) for a best fit to a different set of magnetic measurements. Cross-checks with the measured particle influxes from the inner heat shield and the divertor region and with visible camera images of the scrape-off layer are made. (author) 3 refs., 3 figs
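The function-parameterization idea lends itself to a compact sketch: fit a fast regression offline from simulated magnetic signals to equilibrium parameters, then identify new equilibria with a single matrix multiply. The linear map and data below are synthetic stand-ins, not ASDEX-Upgrade data.

```python
import numpy as np

rng = np.random.default_rng(6)
signals = rng.normal(size=(500, 30))   # 500 simulated equilibria, 30 magnetic signals
true_map = rng.normal(size=(30, 4))    # hidden signal-to-parameter relation
# Columns: major radius, minor radius, elongation, triangularity (assumed).
params = signals @ true_map + 0.01 * rng.normal(size=(500, 4))

coef, *_ = np.linalg.lstsq(signals, params, rcond=None)  # offline fit
new_measurement = rng.normal(size=30)
print("identified parameters:", (new_measurement @ coef).round(3))
```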
Thermodynamic and transport properties of gaseous tetrafluoromethane in chemical equilibrium
Hunt, J. L.; Boney, L. R.
1973-01-01
Equations and a computer code are presented for the thermodynamic and transport properties of gaseous, undissociated tetrafluoromethane (CF4) in chemical equilibrium. The computer code calculates the thermodynamic and transport properties of CF4 when given any two of five thermodynamic variables (entropy, temperature, volume, pressure, and enthalpy). Equilibrium thermodynamic and transport property data are tabulated and pressure-enthalpy diagrams are presented.
Non-Equilibrium Properties from Equilibrium Free Energy Calculations
Pohorille, Andrew; Wilson, Michael A.
2012-01-01
Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it may be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. The conductance of model ion channels has been calculated directly, by counting the number of ion crossing events observed during long molecular dynamics simulations, and compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
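The crossing-counting estimate reads off a current (and hence a conductance) directly from permeation events; the event count, simulated time and voltage below are hypothetical.

```python
E_CHARGE = 1.602e-19    # elementary charge [C]

crossings = 42          # net permeation events counted in the run (assumed)
t_sim = 2.0e-6          # simulated time [s] (assumed)
voltage = 0.1           # applied voltage [V] (assumed)

current = crossings * E_CHARGE / t_sim
print(f"I = {current * 1e12:.1f} pA, g = {current / voltage * 1e12:.1f} pS")
```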
Equilibrium and non equilibrium in fragmentation
International Nuclear Information System (INIS)
Dorso, C.O.; Chernomoretz, A.; Lopez, J.A.
2001-01-01
Full text: In this communication we present recent results regarding the interplay of equilibrium and non-equilibrium in the process of fragmentation of excited finite Lennard-Jones drops. Because the general features of such a potential resemble those of the nuclear interaction (a fact reinforced by the similarity between the equations of state of the two systems), these studies are not only relevant from a fundamental point of view but also shed light on the problem of nuclear multifragmentation. We focus on the microscopic analysis of the state of the fragmenting system at fragmentation time. We show that the caloric curve (i.e., the functional relationship between the temperature of the system and the excitation energy) is of the rise-plateau type, with no vapor branch; the usual rise-plateau-rise pattern is only recovered when equilibrium is artificially imposed. This result puts a serious question mark over the validity of the freeze-out hypothesis. This feature is independent of the dimensionality or excitation mechanism. Moreover, we explore the behavior of magnitudes which can help us determine the degree of the assumed phase transition. It is found that no clear-cut criterion is presently available. (Author)
Directory of Open Access Journals (Sweden)
Jiulin Wang
2017-01-01
Estimation of the postmortem interval (PMI) has been an important and difficult subject in forensic study; it is a primary task of forensic work and can help guide field investigation. With the development of computed tomography (CT) technology, CT imaging techniques are now more frequently applied to forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and PMI with the imaging software Max Viewer, obtained multiparameter nonlinear regression equations for the different organs, and thereby provides an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the discovery or inspection of a corpse and the time of death. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of time of death. We studied the subtle changes in adult rabbits after death, the shape and size of tissues and organs, and the relationship between adjacent organs in three-dimensional space, in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at 20°C room temperature under sealed conditions and protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group. The whole
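Fitting a nonlinear PMI regression of the kind described can be sketched with scipy; the quadratic form and the data points below are hypothetical stand-ins for the study's organ-specific equations.

```python
import numpy as np
from scipy.optimize import curve_fit

pmi_h = np.array([0, 12, 24, 48, 72, 96.])    # hours after death (assumed)
ct_hu = np.array([55, 52, 47, 40, 35, 31.])   # mean organ CT value (assumed)

model = lambda t, a, b, c: a + b * t + c * t ** 2
coeffs, _ = curve_fit(model, pmi_h, ct_hu)
print("fitted coefficients:", coeffs.round(4))
```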
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which sometimes posed extreme challenges for previous algorithms.
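FastChem's decomposition into one-variable equations can be illustrated on a toy dissociation A2 ↔ 2A: mass action gives n_A²/n_A2 = K, element conservation gives n_A + 2·n_A2 = N, and the equilibrium reduces to one bracketed nonlinear equation. This is an illustration of the general idea, not FastChem's algorithm; K and N are assumed.

```python
from scipy.optimize import brentq

K = 1e-3   # assumed mass-action constant (number-density units)
N = 1.0    # assumed total element abundance

def residual(n_A):
    n_A2 = (N - n_A) / 2.0        # element conservation
    return n_A ** 2 - K * n_A2    # law of mass action

n_A = brentq(residual, 0.0, N)    # root is bracketed between 0 and N
print(f"n_A = {n_A:.6f}, n_A2 = {(N - n_A) / 2:.6f}")
```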
Chemical Principles Revisited: Chemical Equilibrium.
Mickey, Charles D.
1980-01-01
Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN)
International Nuclear Information System (INIS)
Motamedi, Kambiz; Levine, Benjamin D.; Seeger, Leanne L.; McNitt-Gray, Michael F.
2014-01-01
To evaluate the success rate of a low-dose (50 % mAs reduction) computed tomography (CT) biopsy technique. This protocol was adopted based on other successful reduced-CT radiation dose protocols in our department, which were implemented in conjunction with quality improvement projects. The technique included a scout view and initial localizing scan with standard dose. Additional scans obtained for further guidance or needle adjustment were acquired by reducing the tube current-time product (mAs) by 50 %. The radiology billing data were searched for CT-guided musculoskeletal procedures performed over a period of 8 months following the initial implementation of the protocol. These were reviewed for the type of procedure and compliance with the implemented protocol. The compliant CT-guided biopsy cases were then retrospectively reviewed for patient demographics, tumor pathology, and lesion size. Pathology results were compared to the ultimate diagnoses and were categorized as diagnostic, accurate, or successful. Of 92 CT-guided procedures performed during this period, two were excluded as they were not biopsies (one joint injection and one drainage), 19 were excluded due to non-compliance (operators neglected to follow the protocol), and four were excluded due to lack of available follow-up in our electronic medical records. A total of 67 compliant biopsies were performed in 63 patients (two had two biopsies, and one had three biopsies). There were 32 males and 31 females with an average age of 50 (range, 15-84 years). Of the 67 biopsies, five were non-diagnostic and inaccurate and thus unsuccessful (7 %); five were diagnostic but inaccurate and thus unsuccessful (7 %); 57 were diagnostic and accurate thus successful (85 %). These results were comparable with results published in the radiology literature. The success rate of CT-guided biopsies using a low-dose protocol is comparable to published rates for conventional dose biopsies. The implemented low-dose protocol
Energy Technology Data Exchange (ETDEWEB)
Motamedi, Kambiz; Levine, Benjamin D.; Seeger, Leanne L.; McNitt-Gray, Michael F. [UCLA Health System, Radiology, Los Angeles, CA (United States)
2014-11-15
To evaluate the success rate of a low-dose (50 % mAs reduction) computed tomography (CT) biopsy technique. This protocol was adopted based on other successful reduced-CT radiation dose protocols in our department, which were implemented in conjunction with quality improvement projects. The technique included a scout view and initial localizing scan with standard dose. Additional scans obtained for further guidance or needle adjustment were acquired by reducing the tube current-time product (mAs) by 50 %. The radiology billing data were searched for CT-guided musculoskeletal procedures performed over a period of 8 months following the initial implementation of the protocol. These were reviewed for the type of procedure and compliance with the implemented protocol. The compliant CT-guided biopsy cases were then retrospectively reviewed for patient demographics, tumor pathology, and lesion size. Pathology results were compared to the ultimate diagnoses and were categorized as diagnostic, accurate, or successful. Of 92 CT-guided procedures performed during this period, two were excluded as they were not biopsies (one joint injection and one drainage), 19 were excluded due to non-compliance (operators neglected to follow the protocol), and four were excluded due to lack of available follow-up in our electronic medical records. A total of 67 compliant biopsies were performed in 63 patients (two had two biopsies, and one had three biopsies). There were 32 males and 31 females with an average age of 50 (range, 15-84 years). Of the 67 biopsies, five were non-diagnostic and inaccurate and thus unsuccessful (7 %); five were diagnostic but inaccurate and thus unsuccessful (7 %); 57 were diagnostic and accurate thus successful (85 %). These results were comparable with results published in the radiology literature. The success rate of CT-guided biopsies using a low-dose protocol is comparable to published rates for conventional dose biopsies. The implemented low-dose protocol
Equilibrium and non-equilibrium phenomena in arcs and torches
Mullen, van der J.J.A.M.
2000-01-01
A general treatment of non-equilibrium plasma aspects is obtained by relating transport fluxes to equilibrium-restoring processes in so-called disturbed Bilateral Relations. The (non-)equilibrium state of a small microwave-induced plasma serves as a case study.
International Nuclear Information System (INIS)
Chen Tao; Ning Lixia; Liu Yuai; Li Ningyi; Chen Feng
2007-01-01
Objective: To compare computed orthopantomography (COPT) with Schüller radiography (SR), film orthopantomography (FOPT) and other traditional radiographic techniques in the imaging of the temporomandibular joint (TMJ). Methods: Ninety-eight cases were randomly divided into 3 groups, and the open and closed positions of the TMJs of both sides were examined with SR, FOPT, and COPT, respectively. The satisfactory rates of the X-ray pictures were statistically analyzed with the Pearson chi-square test in SPSS 10.0, and the satisfactory rates were compared between groups with the q test. Results: 144 of 144 open- and closed-position TMJ pictures in the COPT group, 128 of 128 in the FOPT group, and 6 of 120 in the SR group were satisfactory for the mandibular ramus of the TMJ, with satisfactory rates of 100%, 100%, and 5%, respectively (P < 0.01); the difference between the FOPT and COPT groups was not statistically significant. The exposure was as follows: COPT, 99-113 mAs; FOPT, 210-225 mAs; and SR, 48-75 mAs. Therefore, COPT and FOPT were superior to SR in pictures of the mandibular ramus, coronoid process, and incisure, but inferior in joint space pictures. The satisfactory rates for the condylar process and articular tubercle were the same in the 3 groups. The exposure of the FOPT group was greater than that of the COPT and SR groups. Conclusion: COPT is superior to SR and FOPT in TMJ radiography, and should be applied widely in the clinic. (authors)
Computer-controlled pneumatic pressure algometry--a new technique for quantitative sensory testing.
Polianskis, R; Graven-Nielsen, T; Arendt-Nielsen, L
2001-01-01
Hand-held pressure algometry usually assesses pressure-pain detection thresholds and provides little information on the pressure-pain stimulus-response function. In this article, cuff pressure algometry is proposed for advanced evaluation of pressure-pain function. The experimental set-up consisted of a pneumatic tourniquet cuff, a computer-controlled air compressor and an electronic visual analogue scale (VAS) for continuous pain intensity rating. Twelve healthy volunteers were included in the study. In the first part, hand-held algometry and cuff algometry were performed over the gastrocnemius muscle with a constant compression rate. In the second part, cuff algometry was performed with different compression rates to evaluate the influence of the compression rate on pain thresholds and other psychophysical data. Pressure-pain detection threshold (PDT), pain tolerance threshold (PTT), pain intensity, PDT-PTT time and other psychophysical variables were evaluated. Pressure-pain detection thresholds recorded over the gastrocnemius muscle with a hand-held and with a cuff algometer were 482 ± 19 kPa and 26 ± 1.6 kPa, respectively. Pressure and pain intensities were correlated during cuff algometry. During increasing cuff compression, the subjective pain tolerance limit on the VAS was 5.6 ± 0.95 cm. There was a direct correlation between the number of compressions, the compression rate and the pain thresholds. The cuff algometry technique is appropriate for pressure-pain stimulus-response studies, and allowed quantification of the psychophysical response to changes in stimulus configuration.
Validation of a low dose simulation technique for computed tomography images.
Directory of Open Access Journals (Sweden)
Daniela Muenzel
PURPOSE: Evaluation of a new software tool for generating simulated low-dose computed tomography (CT) images from an original higher-dose scan. MATERIALS AND METHODS: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisitions with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. RESULTS: Image characteristics of the simulated low-dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively (p>0.05). Confidence intervals of the discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. CONCLUSION: Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities with regard to staff education, protocol optimization and the introduction of new techniques.
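The core physics behind such simulations can be sketched in the image domain: quantum noise scales roughly as 1/sqrt(mAs), so lowering the dose adds noise in quadrature. This simplified version is not the validated projection-domain algorithm of the study; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
ref_mas, target_mas, sigma_ref = 100, 20, 5.0       # assumed doses and noise at 100 mAs
img_100mas = rng.normal(40.0, sigma_ref, (64, 64))  # hypothetical HU image

extra_sigma = sigma_ref * np.sqrt(ref_mas / target_mas - 1)  # add in quadrature
img_20mas = img_100mas + rng.normal(0.0, extra_sigma, img_100mas.shape)
print(f"noise grows from {sigma_ref:.1f} to ~{sigma_ref * np.sqrt(ref_mas / target_mas):.1f} HU")
```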
International Nuclear Information System (INIS)
Kuhl, D.E.
1975-01-01
Progress is reported in the development of equipment and counting techniques for transverse section scanning of the brain following the administration of radiopharmaceuticals to evaluate regional blood flow. The scanning instrument has an array of 32 scintillation detectors that surround the head and scan data are analyzed using a small computer. (U.S.)
Directory of Open Access Journals (Sweden)
Katalin Martinás
2007-02-01
A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach, and an axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as transformation and transport of commodities (materials) owned by the agents. The rates of transformation (production intensity) and of transport (trade) are set by the agents, and economic decision rules are derived from observed economic behaviour. The resulting non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences of the equilibrium hypothesis alone. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium; cooperativity is needed, too.
DIAGNOSIS OF FINANCIAL EQUILIBRIUM
Directory of Open Access Journals (Sweden)
SUCIU GHEORGHE
2013-04-01
The analysis based on the balance sheet tries to identify the state of equilibrium (or disequilibrium) that exists in a company. The easiest way to determine the state of equilibrium is by looking at the balance sheet and the information it offers. Because the balance sheet contains elements that do not reflect their real, market-established value, those elements must be readjusted, and elements not related to ordinary operating activities must be eliminated. The diagnosis of financial equilibrium takes into account two components, including financing sources (ownership equity; loaned and temporarily attracted funds). An efficient financial equilibrium must respect two fundamental requirements: permanent sources represented by ownership equity and loans for more than one year should finance permanent needs, while temporary resources should finance the operating cycle.
Analysis of the trend to equilibrium of a chemically reacting system
International Nuclear Information System (INIS)
Kremer, Gilberto M; Bianchi, Miriam Pandolfi; Soares, Ana Jacinta
2007-01-01
In the present paper, a quaternary gaseous reactive mixture, for which the chemical reaction is close to its final stage and the elastic and reactive collision frequencies are comparable, is modelled within the Boltzmann equation extended to reacting gases. The main objective is a detailed analysis of the non-equilibrium effects arising in the reactive system A₁ + A₂ ↔ A₃ + A₄ in a flow regime that is not far from thermal, mechanical and chemical equilibrium. A first-order perturbation solution technique is applied to the macroscopic field equations for the spatially homogeneous gas system, and the trend to equilibrium is studied in detail. Adopting elastic hard-sphere and reactive line-of-centres cross sections and an appropriate choice of the input distribution functions, which allows us to distinguish the two cases where the constituents are either at the same or at different temperatures, explicit computations of the linearized production terms for mass, momentum and total energy are performed for each gas species. The departures of densities, temperatures and diffusion fluxes from their equilibrium states are characterized by small perturbations of the corresponding equilibrium values. For the hydrogen-chlorine system, the perturbations are plotted as functions of time for both cases, where the species are either at the same or at different temperatures. Moreover, the trend to equilibrium of the reaction rates is represented for the forward and backward reactions H₂ + Cl ↔ HCl + H
Equilibrium statistical mechanics
Mayer, J E
1968-01-01
The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight …
Determination of gross plasma equilibrium from magnetic multipoles
Energy Technology Data Exchange (ETDEWEB)
Kessel, C.E.
1986-05-01
A new approximate technique is developed to determine the gross plasma equilibrium parameters (major radius, minor radius, elongation and triangularity) for an up-down symmetric plasma. It is based on a multipole representation of the externally applied poloidal magnetic field, relating specific terms to the equilibrium parameters. The technique shows reasonable agreement with free-boundary MHD equilibrium results. The method is useful in dynamic simulation and control studies.
Directory of Open Access Journals (Sweden)
Mario Linares Vásquez
2008-01-01
Full Text Available Selecting an investment portfolio has inspired several models aimed at optimising the set of securities which an investor may select according to a number of specific decision criteria such as risk, expected return and planning horizon. The classical approach has been developed to support the two stages of portfolio selection and is backed by disciplines such as econometrics, technical analysis and corporate finance. However, with the emerging field of computational finance, new and interesting techniques have arisen in line with the need for the automatic processing of vast volumes of information. This paper surveys such new techniques, which belong to the body of knowledge concerning computing and systems engineering, focusing on techniques particularly aimed at producing beliefs regarding investment portfolios.
Directory of Open Access Journals (Sweden)
C. Fountoukis
2007-09-01
Full Text Available This study presents ISORROPIA II, a thermodynamic equilibrium model for the K⁺–Ca²⁺–Mg²⁺–NH₄⁺–Na⁺–SO₄²⁻–NO₃⁻–Cl⁻–H₂O aerosol system. A comprehensive evaluation of its performance is conducted against water uptake measurements for laboratory aerosol and predictions of the SCAPE2 thermodynamic module over a wide range of atmospherically relevant conditions. The two models agree well, to within 13% for aerosol water content and total PM mass, 16% for aerosol nitrate and 6% for aerosol chloride and ammonium. The largest discrepancies were found under conditions of low RH, primarily from differences in the treatment of water uptake and solid-state composition. In terms of computational speed, ISORROPIA II was more than an order of magnitude faster than SCAPE2, with robust and rapid convergence under all conditions. The addition of crustal species does not slow down the thermodynamic calculations (compared to the older ISORROPIA code) because of optimizations in the activity coefficient calculation algorithm. Based on its computational rigor and performance, ISORROPIA II appears to be a highly attractive alternative for use in large-scale air quality and atmospheric transport models.
International Nuclear Information System (INIS)
Oliveira, Andre Felipe da Silva de
2012-01-01
Safety is one of the most important and desirable characteristics of a nuclear plant. Natural circulation cooling systems are noted for providing passive safety. These systems can be used as a mechanism for removing the residual heat from the reactor, or even as the main cooling system for heated sections, such as the core. In this work, a computational fluid dynamics (CFD) code called CFX is used to simulate the process of natural circulation in a research reactor pool after its shutdown. The physical model studied is similar to the Open Pool Australian Light water reactor (OPAL) and contains the core, cooling pool, reflecting tank, circulation pipes and chimney. For best computing performance, the core region was modelled as a porous medium, whose parameters were obtained from a separate, detailed CFD analysis. This work also aims to study the viability of using a Differential Evolution algorithm to optimize the physical and operational parameters that, obeying the laws of similarity, lead to a test section at a reduced scale of the reactor pool.
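To give an idea of the porous-medium reduction, the sketch below evaluates a Darcy-Forchheimer-type momentum sink of the kind commonly used for such models in CFD codes; the permeability and inertial-loss coefficient are assumed values, standing in for the ones the authors extracted from their detailed core analysis:

```python
# Minimal sketch of a porous-medium momentum sink in the common
# Darcy-Forchheimer form:
#   dp/dx = -(mu/K) * u - C2 * 0.5 * rho * u * |u|
# K (permeability) and C2 (inertial-loss coefficient) are assumed values,
# not the ones fitted in the paper.
mu, rho = 8.9e-4, 997.0   # water viscosity [Pa s] and density [kg/m^3]
K, C2 = 1.0e-7, 50.0      # assumed porous coefficients [m^2], [1/m]

def pressure_gradient(u):
    """Pressure gradient [Pa/m] across the porous core for superficial velocity u."""
    return -(mu / K) * u - C2 * 0.5 * rho * u * abs(u)

for u in (0.1, 0.5, 1.0):
    print(f"u = {u:.1f} m/s -> dp/dx = {pressure_gradient(u):.1f} Pa/m")
```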
Local equilibrium in bird flocks
Mora, Thierry; Walczak, Aleksandra M.; Del Castello, Lorenzo; Ginelli, Francesco; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano; Cavagna, Andrea; Giardina, Irene
2016-12-01
The correlated motion of flocks is an example of global order emerging from local interactions. An essential difference with respect to analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. This non-equilibrium characteristic has been studied theoretically, but its impact on actual animal groups remains to be fully explored experimentally. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment occurs on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems.
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics under non-equilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires the calculation of first-principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and with data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.
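A minimal sketch of the post-processing such a sinusoidal approach-to-equilibrium calculation implies: the decaying amplitude of the imposed temperature modulation is fitted to an exponential to extract the thermal diffusivity, which an assumed volumetric heat capacity converts into a conductivity. All numbers below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: a sinusoidal temperature profile with wavevector k = 2*pi/L is
# imposed and then left to relax; its amplitude decays as exp(-alpha*k^2*t).
# Fitting the decay gives the thermal diffusivity alpha; with the volumetric
# heat capacity rho*c, the conductivity is kappa = alpha * rho * c.
L = 50e-9                 # supercell length along the gradient [m] (assumed)
k = 2.0 * np.pi / L
rho_c = 3.3e6             # volumetric heat capacity of MgO [J/(m^3 K)] (assumed)

t = np.linspace(0.0, 20e-12, 200)           # time [s]
amp = 40.0 * np.exp(-1.2e-5 * k**2 * t)     # synthetic amplitude data [K]
amp += np.random.default_rng(0).normal(0.0, 0.5, t.size)

def decay(t, A0, alpha):
    return A0 * np.exp(-alpha * k**2 * t)

(A0, alpha), _ = curve_fit(decay, t, amp, p0=(40.0, 1e-5))
print(f"alpha = {alpha:.2e} m^2/s, kappa = {alpha * rho_c:.1f} W/(m K)")
```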
Directory of Open Access Journals (Sweden)
Koichi Kobayashi
2013-01-01
Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued and discrete-valued control inputs, a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, obtained by relaxing the discrete-valued control inputs to continuous variables. In the online computation, we first find the continuous-valued control inputs and virtual control inputs minimizing a cost function; next, using the obtained virtual control inputs, only the discrete-valued control inputs at the current time are computed in each subsystem. We also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
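The two-step idea can be illustrated on a toy scalar system: relax the discrete input to a continuous virtual input, minimize the finite-horizon cost over both, then project the first virtual input back onto the admissible discrete set. The sketch below (hypothetical system and weights, not the paper's formulation) uses scipy for the continuous optimization:

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of MPC with virtual control inputs (not the paper's
# algorithm): scalar system x+ = A x + Bc u_c + Bd u_d, where u_d must
# take values in a discrete set but is relaxed to [0, 1] during the solve.
A, Bc, Bd = 0.9, 0.5, 0.3
discrete_set = np.array([0.0, 1.0])   # admissible discrete input values
N = 5                                  # prediction horizon
x0 = 4.0

def cost(z):
    uc, v = z[:N], z[N:]               # continuous and virtual inputs
    x, J = x0, 0.0
    for i in range(N):
        J += x**2 + 0.1 * uc[i]**2 + 0.1 * v[i]**2
        x = A * x + Bc * uc[i] + Bd * v[i]
    return J + x**2                    # terminal cost

z0 = np.zeros(2 * N)
bounds = [(-2.0, 2.0)] * N + [(0.0, 1.0)] * N  # virtual inputs relaxed to [0, 1]
res = minimize(cost, z0, bounds=bounds)
uc_now = res.x[0]
ud_now = discrete_set[np.argmin(np.abs(discrete_set - res.x[N]))]  # projection
print(f"apply u_c = {uc_now:.3f}, u_d = {ud_now}")
```

The projection step is where quantization error enters, which is why the abstract discusses its effect explicitly.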
Zhang, Zhi-cheng; Sun, Tian-sheng; Li, Fang; Tang, Guo-lin
2009-05-19
To explore the role of CAD- and CAE-related techniques in the separation of pygopagus conjoined twins. CT images of the pygopagus conjoined twins were obtained and reconstructed in three dimensions with the Mimics software. 3D physical models of the twins' skin and spine were made by rapid prototyping techniques and equipment according to the 3D data model. The circumference and area of the fused and independent dural sacs were measured with AutoCAD. The physical model was a faithful reflection of the skin and spine of the pygopagus twins and was used in the procedures of discussion, sham operation, skin flap design and informed consent. In the MRI measurements, the circumference and area of the fused dural sac were greater than those of the independent dural sacs, meaning that the dural sac defect could be repaired by direct suture. The intraoperative findings matched the imaging measurements. The application of CAD and CAE in preoperative planning contributed greatly to the successful separation of the pygopagus conjoined twins.
Draft of diagnostic techniques for primary coolant circuit facilities using control computer
International Nuclear Information System (INIS)
Suchy, R.; Procka, V.; Murin, V.; Rybarova, D.
A method is proposed for the in-service, on-line diagnosis of selected parts of the primary circuit by means of a control computer. Computer processing will involve measurements of the neutron flux, of pressure differences across the pumps and the core, and of the vibrations of mechanical parts of the primary circuit. (H.S.)
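At its simplest, a control-computer check of this kind reduces to comparing each new sample of a monitored channel against a trailing baseline. The sketch below is a hypothetical illustration, not the proposed system's actual processing:

```python
import numpy as np

# Hypothetical on-line check for a monitored channel (neutron flux, pump
# pressure difference or vibration amplitude): flag samples deviating more
# than n_sigma from a trailing-window baseline.
def check_channel(samples, window=64, n_sigma=4.0):
    """Return indices of samples that deviate from the trailing baseline."""
    samples = np.asarray(samples, dtype=float)
    alarms = []
    for i in range(window, samples.size):
        base = samples[i - window:i]
        if abs(samples[i] - base.mean()) > n_sigma * base.std():
            alarms.append(i)
    return alarms

rng = np.random.default_rng(1)
flux = rng.normal(100.0, 1.0, 500)   # synthetic channel data
flux[400:] += 8.0                    # injected step change
print("first alarm at sample", check_channel(flux)[0])
```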
Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.
2007-01-01
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the …
Using Animation to Support the Teaching of Computer Game Development Techniques
Taylor, Mark John; Pountney, David C.; Baskett, M.
2008-01-01
In this paper, we examine the potential use of animation for supporting the teaching of some of the mathematical concepts that underlie computer games development activities, such as vector and matrix algebra. An experiment was conducted with a group of UK undergraduate computing students to compare the perceived usefulness of animated and static…
International Nuclear Information System (INIS)
1979-01-01
The goal of this workshop was to provide an introduction to the use of state-of-the-art computer codes for the semi-empirical and ab initio computation of the electronic structure and geometry of small and large molecules. The workshop consisted of 15 lectures on the theoretical foundations of the codes, followed by laboratory sessions which utilized these codes.
van Herwaarden, Onno A.; Gielen, Joseph L. W.
2002-01-01
Focuses on students showing a lack of conceptual insight while using computer algebra systems (CAS) in the setting of an elementary calculus and linear algebra course for first year university students in social sciences. The use of a computer algebra environment has been incorporated into a more traditional course but with special attention on…
Patra, S. R.
2017-12-01
… minimization principle. The reliability of these computational models was analysed in light of the simulation results, and it was found that the SVM model produces the best results among the three. Future research should extend the validation data set and check the validity of our results in different areas with hybrid intelligence techniques.
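A three-way comparison of this kind is commonly scored by cross-validation. The sketch below (synthetic data; stand-in models, since the paper's other two models are not named in this excerpt) illustrates the procedure with scikit-learn:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative model comparison (synthetic data, stand-in models):
# score each candidate with 5-fold cross-validated R^2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

models = {
    "SVM":  make_pipeline(StandardScaler(), SVR(C=10.0)),
    "ANN":  make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
    "Tree": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```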
Kanat, Burcu; Cömlekoğlu, Erhan M; Dündar-Çömlekoğlu, Mine; Hakan Sen, Bilge; Ozcan, Mutlu; Ali Güngör, Mehmet
2014-08-01
The objectives of this study were to evaluate the fracture resistance (FR), flexural strength (FS), and shear bond strength (SBS) of a zirconia framework material veneered with different methods, and to assess the stress distributions using finite element analysis (FEA). Zirconia frameworks fabricated in the form of crowns for FR, bars for FS, and disks for SBS (N = 90, n = 10) were veneered with either (a) file splitting (CAD-on) (CD), (b) layering (L), or (c) overpressing (P) methods. For the crown specimens, stainless steel dies (N = 30; 1 mm chamfer) were scanned using the labside contrast spray. A bilayered design was produced for CD, whereas a reduced design (1 mm) was used for L and P to support the veneer by computer-aided design and manufacturing. For the bar (1.5 × 5 × 25 mm³) and disk (2.5 mm diameter, 2.5 mm height) specimens, zirconia blocks were sectioned under water cooling with a low-speed diamond saw and sintered. To prepare the suprastructures in the appropriate shapes for the three mechanical tests, nano-fluorapatite ceramic was layered and fired for L, fluorapatite ceramic was pressed for P, and milled lithium-disilicate ceramics were fused to the zirconia with a thixotropic glass ceramic for CD and then sintered for crystallization of the veneering ceramic. The crowns were then cemented to the metal dies. All specimens were stored at 37°C and 100% humidity for 48 hours. Mechanical tests were performed, and the data were statistically analyzed (ANOVA, Tukey's, α = 0.05). Stereomicroscopy and scanning electron microscopy (SEM) were used to evaluate the failure modes and surface structure. FEA modeling of the crowns was obtained. Mean FR values (N ± SD) of CD (4408 ± 608) and L (4323 ± 462) were higher than those of P (2507 ± 594) (p < 0.05). … mechanical tests, whereas a layering technique increased the FR when an anatomical core design was employed. File splitting (CAD-on) or layering veneering ceramic on zirconia with a reduced framework design may reduce ceramic chipping.