Sample records for equilibrium computation technique

  1. Computational methods for reversed-field equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.


    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described.
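As a minimal illustration of the kind of verification the abstract mentions (a sketch under assumed parameters, not the authors' code): Hill's spherical-vortex flux function ψ = A r²(a² − r² − z²) satisfies the Grad-Shafranov (Stokes) operator identity Δ*ψ = ψ_rr − ψ_r/r + ψ_zz = −10 A r², which makes it a convenient closed-form check for a numerical Grad-Shafranov solver.

```python
# Verify Hill's spherical-vortex flux function against the Grad-Shafranov
# (Stokes) operator, Delta* psi = psi_rr - psi_r / r + psi_zz.
# For psi = A * r^2 * (a^2 - r^2 - z^2) the operator equals -10*A*r^2 exactly.

A, a = 0.75, 1.0          # illustrative amplitude and vortex radius

def psi(r, z):
    return A * r**2 * (a**2 - r**2 - z**2)

def gs_operator(r, z, h=1e-3):
    """Central finite differences for psi_rr - psi_r/r + psi_zz."""
    psi_rr = (psi(r + h, z) - 2.0 * psi(r, z) + psi(r - h, z)) / h**2
    psi_r = (psi(r + h, z) - psi(r - h, z)) / (2.0 * h)
    psi_zz = (psi(r, z + h) - 2.0 * psi(r, z) + psi(r, z - h)) / h**2
    return psi_rr - psi_r / r + psi_zz

r0, z0 = 0.4, 0.2
numeric = gs_operator(r0, z0)
exact = -10.0 * A * r0**2
print(numeric, exact)      # agree to within the finite-difference truncation error
```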

  2. Computation of Phase Equilibrium and Phase Envelopes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp

    In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes. Finally, we...
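For a fixed set of K-values, the PT-flash split mentioned above reduces to the Rachford-Rice equation. The sketch below is a generic illustration, not the authors' code, and the K-values are made-up constants rather than fugacity-coefficient ratios from an equation of state:

```python
# Rachford-Rice equation for a PT flash with fixed equilibrium K-values:
#   f(V) = sum_i z_i * (K_i - 1) / (1 + V * (K_i - 1)) = 0,
# where V is the vapor fraction.  f is monotonically decreasing in V,
# so bisection is a safe solver.

z = [0.5, 0.5]            # overall mole fractions (illustrative)
K = [2.0, 0.5]            # made-up K-values; a real flash obtains these from
                          # fugacity-coefficient ratios, updated iteratively

def rachford_rice(V):
    return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K))

lo, hi = 1e-12, 1.0 - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rachford_rice(mid) > 0.0:
        lo = mid
    else:
        hi = mid
V = 0.5 * (lo + hi)
x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]   # liquid composition
y = [Ki * xi for Ki, xi in zip(K, x)]                        # vapor composition
print(V)                  # for these symmetric K-values, V = 0.5
```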

  3. Computer techniques for electromagnetics

    CERN Document Server

    Mittra, R


    Computer Techniques for Electromagnetics discusses the ways in which computer techniques solve practical problems in electromagnetics. It discusses the impact of the emergence of high-speed computers in the study of electromagnetics. This text provides a brief background on the approaches used by mathematical analysts in solving integral equations. It also demonstrates how to use computer techniques in computing current distribution, radar scattering, waveguide discontinuities, and inverse scattering. This book will be useful for students looking for a comprehensive text on computer techniques...

  4. Computing stationary distributions in equilibrium and non-equilibrium systems with Forward Flux Sampling

    NARCIS (Netherlands)

    Valeriani, C.; Allen, R.J.; Morelli, M.J.; Frenkel, D.; ten Wolde, P.R.


    We present a method for computing stationary distributions for activated processes in equilibrium and non-equilibrium systems using Forward Flux Sampling (FFS). In this method, the stationary distributions are obtained directly from the rate constant calculations for the forward and backward...

  5. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn


    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Computer simulations of equilibrium magnetization and microstructure in magnetic fluids (United States)

    Rosa, A. P.; Abade, G. C.; Cunha, F. R.


    In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres with the same diameters. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, which replicates a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and Ewald sums is performed by using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function go(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results of go(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures like dimer and trimer chains, with trimers having a formation probability an order of magnitude lower than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions for both boundary conditions is explored in this work. Results show that at dilute regimes and with moderate dipole...
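In the dilute, non-interacting limit, the reference curve such simulations are commonly checked against is the Langevin magnetization law M/M_s = L(α) = coth α − 1/α, with α the ratio of magnetic to thermal energy. A quick sketch of this assumed reference calculation (not the authors' code):

```python
import math

# Langevin equilibrium magnetization of a dilute, non-interacting
# ferrofluid: M / M_s = L(alpha) = coth(alpha) - 1/alpha,
# where alpha = m * B / (k_B * T) is the Langevin parameter.

def langevin(alpha):
    if abs(alpha) < 1e-6:              # small-alpha limit: L(a) ~ a/3
        return alpha / 3.0
    return 1.0 / math.tanh(alpha) - 1.0 / alpha

for alpha in (0.1, 1.0, 10.0):
    print(alpha, langevin(alpha))      # saturates toward 1 as alpha grows
```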

  8. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.


    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...

  9. Comparative Analysis of Emerging Green Certificate Markets from a Computable General Equilibrium Perspective

    Directory of Open Access Journals (Sweden)

    Cristina GALALAE


    Whether using market mechanisms to allocate green certificates in various countries is an optimal solution for stimulating green electricity production represents a question proposed by numerous recent comparative analyses, with opinions being split. Our paper proposes a differing perspective, employing modern computational economics techniques in order to study if general equilibrium is achievable, nationally and internationally, and how it compares with the non-market steady state. We analyse the field, determining exogenous and endogenous factors of influence that we cast into functional relationships via econometric estimation. Subsequently, we study four multi-period general equilibrium models, recursive and non-recursive, solving the latter ones via a Johansen/Euler method for simultaneous all-year computation. General equilibrium is shown to be achievable but dependent on country specific conditions, with optimality being relative in a globalised context. In closing, we present a case study focused on providing useful guidelines for future international marketing efforts in this domain.

  10. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis (United States)

    Gordon, Sanford; Mcbride, Bonnie J.


    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
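As a toy instance of the minimization-of-free-energy approach (a sketch under simplifying ideal-gas assumptions with an illustrative Kp, not the NASA code): for N2O4 ⇌ 2 NO2 at fixed temperature and pressure, the dimensionless Gibbs energy of the mixture can be minimized directly over the extent of reaction, and the minimizer reproduces the equilibrium-constant condition.

```python
import math

# Toy free-energy minimization: N2O4 <=> 2 NO2 (ideal gas, P = 1 bar).
# Starting from 1 mol N2O4, at extent x the mole numbers are
#   n(N2O4) = 1 - x,  n(NO2) = 2x,  n_total = 1 + x.
# Dimensionless standard potentials are chosen so that Kp = 0.15
# (illustrative): mu0(N2O4)/RT = 0 and 2*g_no2 = -ln(Kp).

KP = 0.15
g_no2 = -math.log(KP) / 2.0

def gibbs(x):
    n1, n2, nt = 1.0 - x, 2.0 * x, 1.0 + x
    return n1 * math.log(n1 / nt) + n2 * (g_no2 + math.log(n2 / nt))

# Golden-section search: gibbs(x) is convex on (0, 1) for ideal mixing.
phi = (math.sqrt(5.0) - 1.0) / 2.0
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if gibbs(a) < gibbs(b):
        hi = b
    else:
        lo = a
x_eq = 0.5 * (lo + hi)

# The minimizer satisfies the mass-action condition
#   Kp = p_NO2^2 / p_N2O4 = 4 x^2 / (1 - x^2),  i.e.  x = sqrt(Kp/(4+Kp)).
print(x_eq, math.sqrt(KP / (4.0 + KP)))
```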

  11. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts. (United States)

    Heald, Emerson F.


    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)

  12. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D


    Higher-Order Techniques in Computational Electromagnetics explains higher-order techniques that can significantly improve the accuracy, computational efficiency, and reliability of computational methods for high-frequency electromagnetics, such as antennas, microwave devices, and radar scattering applications.

  13. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing-sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the model's simulation properties in more detail.

  14. Generalized multivalued equilibrium-like problems: auxiliary principle technique and predictor-corrector methods

    Directory of Open Access Journals (Sweden)

    Vahid Dadashi


    This paper is dedicated to the introduction of a new class of equilibrium problems, named generalized multivalued equilibrium-like problems, which includes the classes of hemiequilibrium problems, equilibrium-like problems, equilibrium problems, hemivariational inequalities, and variational inequalities as special cases. By utilizing the auxiliary principle technique, some new predictor-corrector iterative algorithms for solving them are suggested and analyzed. The convergence analysis of the proposed iterative methods requires either partially relaxed monotonicity or joint pseudomonotonicity of the bifunctions involved in the generalized multivalued equilibrium-like problem. Results obtained in this paper include several new and known results as special cases.

  15. Computer animation algorithms and techniques

    CERN Document Server

    Parent, Rick


    Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. The first edition established...

  16. Computational intelligence techniques in bioinformatics. (United States)

    Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I


    Computational intelligence (CI) is a well-established paradigm with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitates intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of the CI techniques in bioinformatics. We will show how CI techniques, including neural networks, restricted Boltzmann machines, deep belief networks, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines, could be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein function and structure prediction. We discuss some representative methods to provide inspiring examples to illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented and an extensive bibliography is included. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Computation of the effect of Donnan equilibrium on pH in equilibrium dialysis. (United States)

    Mapleson, W W


    In equilibrium dialysis, with a nondiffusible, charged protein on one side of the membrane, Donnan equilibrium leads to a pH difference across the membrane. Therefore, with an ionizable drug, the concentration dissolved in water will be different on the two sides of the membrane to an extent dependent on the pH difference and the pKa of the drug. This must be allowed for in calculating the concentration bound to protein. This paper develops, with certain restrictions, a method of calculating the pH difference when the solutions contain electrolytes, acids, including CO2, and proteins. The method is applicable to pH differences across passive cell membranes.
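As a numerical sketch of the effect (simplified to a single impermeant polyanion with NaCl as the only diffusible salt, and made-up concentrations; not the paper's full method): the Donnan ratio r follows from electroneutrality plus the equality of the NaCl ion product across the membrane, and the pH shift is log10 r.

```python
import math

# Donnan equilibrium with an impermeant polyanion P^(z-) on side A and
# NaCl on both sides.  At equilibrium [Na+]_A [Cl-]_A = [Na+]_B [Cl-]_B,
# and every diffusible monovalent cation (including H+) has the same
# ratio r = [cat]_A / [cat]_B, so pH_A = pH_B - log10(r).

c_salt = 0.15      # [Na+]_B = [Cl-]_B on the protein-free side (mol/L)
c_prot = 0.001     # impermeant anion concentration on side A (mol/L)
z = 17             # assumed net negative charge per molecule (illustrative)

# Electroneutrality on A: [Na+]_A = [Cl-]_A + z * c_prot.
# Ion product:            [Cl-]_A * ([Cl-]_A + z * c_prot) = c_salt**2.
zp = z * c_prot
cl_a = (-zp + math.sqrt(zp**2 + 4.0 * c_salt**2)) / 2.0
na_a = cl_a + zp
r = na_a / c_salt                  # Donnan ratio, > 1 for an anionic protein
dpH = math.log10(r)                # pH on the protein side is lower by dpH
print(r, dpH)
```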

  18. Computer enhancement through interpretive techniques (United States)

    Foster, G.; Spaanenburg, H. A. E.; Stumpf, W. E.


    The improvement in the usage of the digital computer through the use of the technique of interpretation rather than the compilation of higher ordered languages was investigated by studying the efficiency of coding and execution of programs written in FORTRAN, ALGOL, PL/I and COBOL. FORTRAN was selected as the high level language for examining programs which were compiled, and A Programming Language (APL) was chosen for the interpretive language. It is concluded that APL is competitive, not because it and the algorithms being executed are well written, but rather because the batch processing is less efficient than has been admitted. There is not a broad base of experience founded on trying different implementation strategies which have been targeted at open competition with traditional processing methods.

  19. Computing stationary distributions in equilibrium and nonequilibrium systems with forward flux sampling

    NARCIS (Netherlands)

    Valeriani, C.; Allen, R.J.; Morelli, M.J.; Frenkel, D.; Wolde, P.R. ten


    We present a method for computing stationary distributions for activated processes in equilibrium and nonequilibrium systems using forward flux sampling. In this method, the stationary distributions are obtained directly from the rate constant calculations for the forward and backward...

  20. Computations of fluid mixtures including solid carbon at chemical equilibrium (United States)

    Bourasseau, Emeric


    One of the key points in the understanding of detonation phenomena is the determination of the equation of state of the detonation products mixture. Concerning carbon-rich explosives, detonation product mixtures are composed of solid carbon nano-clusters immersed in a high-density fluid phase. The study of such systems, where chemical and phase equilibria occur simultaneously, represents an important challenge, and molecular simulation methods appear to be one of the more promising ways to obtain answers. In this talk, the Reaction Ensemble Monte Carlo (RxMC) method will be presented. This method allows the system to reach the chemical equilibrium of a mixture driven by a set of linearly independent chemical equations. Applied to detonation product mixtures, it allows the calculation of the chemical composition of the mixture and its thermodynamic properties. Moreover, an original model has been proposed to take explicitly into account a solid carbon meso-particle in thermodynamic and chemical equilibrium with the fluid. Finally, our simulations show that the intrinsic inhomogeneous nature of the system (i.e., the fact that the solid phase is immersed in the fluid phase) has an important impact on the thermodynamic properties and, as a consequence, must be taken into account.

  1. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang


    Soft computing techniques, which are based on the information processing of biological systems, are now widely used in the areas of pattern recognition, prediction and planning, and acting on the environment. Strictly speaking, soft computing is not a homogeneous body of concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, and learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  2. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry


    "… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" (Jose Lloret, Computing Reviews, March 2010)

  3. Computation of air chemical equilibrium composition until 30000K - Part I

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Rocha Pimentel


    An algorithm was developed to obtain the air chemical equilibrium composition. The air was considered to be composed of 79% N2 and 21% O2, and the model comprises the chemical species N2, O2, NO, N, O, NO+ and e-. The air chemical equilibrium composition is obtained through the equilibrium-constants method, and the absolute Newton method was used for convergence. The algorithm can be coupled as a subroutine into a Computational Fluid Dynamics code to compute the flow field over an atmospheric reentry vehicle, where, due to high velocities, dissociative chemical reactions and air ionization can occur. This work presents results of air chemical equilibrium composition for pressures of 1, 5, 10, 50 and 100 atm in a temperature range from 300 to 30000 K.
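A one-reaction version of the equilibrium-constants method with Newton iteration (an illustrative sketch with a made-up Kp, not the paper's multi-species air system): for O2 ⇌ 2 O at total pressure P, Newton's method solves Kp = p_O²/p_O2 for the degree of dissociation.

```python
# Equilibrium-constants method with Newton iteration for O2 <=> 2 O.
# Starting from 1 mol O2 with degree of dissociation x:
#   p_O2 = (1-x)/(1+x) * P,   p_O = 2x/(1+x) * P,
# so Kp = p_O^2 / p_O2 = 4 x^2 P / (1 - x^2), i.e.
#   f(x) = (4P + Kp) * x^2 - Kp = 0.

KP = 1.0          # illustrative equilibrium constant (atm)
P = 1.0           # total pressure (atm)

def f(x):
    return (4.0 * P + KP) * x * x - KP

def fprime(x):
    return 2.0 * (4.0 * P + KP) * x

x = 0.5                            # initial guess
for _ in range(50):                # Newton iteration
    step = f(x) / fprime(x)
    x -= step
    if abs(step) < 1e-14:
        break

print(x)                           # analytic root: sqrt(Kp / (4P + Kp))
```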

  4. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.


    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of...

  5. Performing an Environmental Tax Reform in a regional Economy. A Computable General Equilibrium

    NARCIS (Netherlands)

    Andre, F.J.; Cardenete, M.A.; Velazquez, E.


    We use a Computable General Equilibrium model to simulate the effects of an Environmental Tax Reform in a regional economy (Andalusia, Spain). The reform involves imposing a tax on CO2 or SO2 emissions and reducing either the Income Tax or the payroll tax of employers to Social Security, and...

  6. Macroeconomic effects of CO2 emission limits : A computable general equilibrium analysis for China

    NARCIS (Netherlands)

    Zhang, ZX

    The study analyzes the macroeconomic effects of limiting China's CO2 emissions by using a time-recursive dynamic computable general equilibrium (CGE) model of the Chinese economy. The baseline scenario for the Chinese economy over the period to 2010 is first developed under a set of assumptions

  7. Computer Simulation of the Toroidal Equilibrium and Stability of a Plasma in Three Dimensions (United States)

    Betancourt, Octavio; Garabedian, Paul


    A computer program has been written to solve the equations for sharp boundary magnetohydrodynamic equilibrium of a toroidal plasma in three dimensions without restriction to axial symmetry. The numerical method is based on a variational principle that indicates whether the equilibria obtained are stable. Applications have been made to Tokamak, Stellarator, and Scyllac configurations. PMID:16592233

  8. Statistical and Computational Techniques in Manufacturing

    CERN Document Server


    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, statistical and computational techniques have found several applications in manufacturing, namely, modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final-year undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics; statistical and computational science researchers; mechanical, manufacturing and industrial engineers; and professionals in industries related to manu...

  9. What is the real role of the equilibrium phase in abdominal computed tomography?

    Energy Technology Data Exchange (ETDEWEB)

    Salvadori, Priscila Silveira [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina; Costa, Danilo Manuel Cerqueira; Romano, Ricardo Francisco Tavares; Galvao, Breno Vitor Tomaz; Monjardim, Rodrigo da Fonseca; Bretas, Elisa Almeida Sathler; Rios, Lucas Torres; Shigueoka, David Carlos; Caldana, Rogerio Pedreschi; D' Ippolito, Giuseppe [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Department of Diagnostic Imaging


    Objective: To evaluate the role of the equilibrium phase in abdominal computed tomography. Materials and Methods: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography studies acquired over a three-month period for different clinical indications. For each study, two reports were issued - one based on the initial analysis of the non-contrast-enhanced, arterial and portal phases only (first analysis), and a second reading of these phases together with the equilibrium phase (second analysis). At the end of both readings, differences between primary and secondary diagnoses were pointed out and recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each patient. An extension of Fisher's exact test was utilized to evaluate the changes in the primary diagnosis (p < 0.05 as significant). Results: Among the 219 cases reviewed, the absence of the equilibrium phase changed the primary diagnosis in only one case (0.46%; p > 0.999). As regards secondary diagnoses, changes after the second analysis were observed in five cases (2.3%). Conclusion: For clinical scenarios such as cancer staging, acute abdomen and investigation of abdominal collections, the equilibrium phase is dispensable and does not offer any significant diagnostic contribution. (author)

  10. Computing a quasi-perfect equilibrium of a two-player game

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre


    Refining an algorithm due to Koller, Megiddo and von Stengel, we show how to apply Lemke's algorithm for solving linear complementarity programs to compute a quasi-perfect equilibrium in behavior strategies of a given two-player extensive-form game of perfect recall. A quasi-perfect equilibrium...... of a zero-sum game, we devise variants of the algorithm that rely on linear programming rather than linear complementarity programming and use the simplex algorithm or other algorithms for linear programming rather than Lemke's algorithm. We argue that these latter algorithms are relevant for recent...
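Equilibrium computation for the zero-sum special case mentioned above can be illustrated, far more crudely than with Lemke's algorithm or the simplex method, by fictitious play, whose empirical frequencies converge to equilibrium in zero-sum matrix games (the game and code below are illustrative assumptions, not the authors' method):

```python
# Fictitious play on the zero-sum game "matching pennies".  Each round,
# both players best-respond to the opponent's empirical action counts;
# for zero-sum games the empirical frequencies converge to a Nash
# equilibrium (here the unique mixed equilibrium (1/2, 1/2), value 0).

A = [[1, -1],      # row player's payoff; the column player receives -A
     [-1, 1]]

row_counts = [1, 1]            # fictitious prior to avoid empty beliefs
col_counts = [1, 1]
for _ in range(20000):
    # Row best-responds to the column player's empirical mix.
    row = max(range(2), key=lambda i: sum(A[i][j] * col_counts[j] for j in range(2)))
    # Column best-responds (minimizing A) to the row player's empirical mix.
    col = min(range(2), key=lambda j: sum(A[i][j] * row_counts[i] for i in range(2)))
    row_counts[row] += 1
    col_counts[col] += 1

row_freq = row_counts[0] / sum(row_counts)
col_freq = col_counts[0] / sum(col_counts)
print(row_freq, col_freq)      # both approach 0.5
```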

  11. Equilibrium selection in alternating-offers bargaining models: the evolutionary computing approach

    NARCIS (Netherlands)

    D.D.B. van Bragt; E.H. Gerding (Enrico); J.A. La Poutré (Han)


    A systematic validation of evolutionary techniques in the field of bargaining is presented. For this purpose, the dynamic and equilibrium-selecting behavior of a multi-agent system consisting of adaptive bargaining agents is investigated. The agents' bargaining strategies are updated by...


  12. Measurement Error with Different Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández


    The goal of this work is to offer a comparison of measurement error across different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques - passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry - to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques - average errors, standard deviations, and uncertainties - obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  13. Measurement Error with Different Computer Vision Techniques (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.


    The goal of this work is to offer a comparison of measurement error across different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques - passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry - to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques - average errors, standard deviations, and uncertainties - obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  14. A Comparison of the Computation Times of Thermal Equilibrium and Non-equilibrium Models of Droplet Field in a Two-Fluid Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Cho, Heong Kyu; Kim, Jong Tae; Yoon, Han Young; Jeong, Jae Jun


    A computational model for transient, 3-dimensional, 2-phase flows was developed by using an 'unstructured-FVM-based, non-staggered, semi-implicit numerical scheme' that considers thermally non-equilibrium droplets. The assumption of thermal equilibrium between the liquid and the droplets made in previous studies was dropped, and three energy conservation equations - for vapor, liquid, and liquid droplets - were set up. Thus, 9 conservation equations for mass, momentum, and energy were established to simulate 2-phase flows. In this report, the governing equations and a semi-implicit numerical scheme for transient, 1-dimensional, 2-phase flows were described, considering the thermal non-equilibrium between the liquid and the liquid droplets. A comparison with the previous model, which assumed thermal equilibrium between the liquid and the liquid droplets, was also reported.

  15. New coding technique for computer generated holograms. (United States)

    Haskell, R. E.; Culver, B. C.


    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  16. Marginal Cost of Public Funds and Regulatory Regimes: Computable General Equilibrium Evaluation for Argentina


    Chisari, Omar O.; Martin Cicowiez


    We estimate the Marginal Cost of Public Funds for Argentina using a Computable General Equilibrium (CGE) model, assessing the sensitivity of the results to the existence of alternative regulatory regimes (price-cap and cost-plus) for public utilities subject to regulation. The estimates are in the range of international studies, and we confirm that the results are sensitive to the regulatory regime, to the presence of exempted goods, the existence of unemployment, the elasticity of labor supp...

  17. Computational techniques of the simplex method

    CERN Document Server

    Maros, István


    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never-discussed details of algorithmic elements and their implementation. On the basis of the book, the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
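
The algorithmic core that the book's implementation techniques elaborate on can be illustrated with a minimal sketch. This is a textbook dense-tableau simplex for illustration only, not the book's sparse, revised-simplex machinery:

```python
def simplex(c, A, b):
    """Dense-tableau simplex for:  maximize c.x  s.t.  A.x <= b, x >= 0,
    with b >= 0 (so the all-slack basis is immediately feasible).
    Entering column: first negative reduced cost (Bland-style, anti-cycling).
    Returns (optimal objective value, solution vector)."""
    m, n = len(A), len(c)
    # Tableau rows [A | I | b], then the objective row [-c | 0 | 0].
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))
    while True:
        col = next((j for j in range(n + m) if T[-1][j] < -1e-12), None)
        if col is None:
            break                                  # no negative cost: optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-12]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)                       # ratio test picks leaving row
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0.0:
                f = T[i][col]
                T[i] = [u - f * v for u, v in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return T[-1][-1], x

# Classic example: max 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18
# has optimum 36 at (x, y) = (2, 6).
```

The book's subject is precisely what this sketch omits: sparse LU factorization of the basis, pricing strategies, numerically stable updates, and anti-degeneracy techniques.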

  18. Computational intelligence techniques in health care

    CERN Document Server

    Zhou, Wengang; Satheesh, P


    This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.

  19. An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM) (United States)

    Pratt, B. S.; Pratt, D. T.


    A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program was developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
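
EQLBRM solves the full equilibrium-composition problem; as a much-simplified sketch of the constant-pressure, constant-enthalpy idea alone, one can find the temperature at which the products' sensible enthalpy balances a given heat release. The heat-capacity coefficients below are hypothetical illustration values, not EQLBRM's thermodynamic data:

```python
def adiabatic_temperature(q_release, t_in, cp_a=1.0, cp_b=2.0e-4,
                          t_max=4000.0, tol=1e-8):
    """Toy constant-pressure, constant-enthalpy balance: find T_ad such that
    the sensible enthalpy gained by the products,
        h(T) - h(t_in) = cp_a*(T - t_in) + 0.5*cp_b*(T^2 - t_in^2),
    equals the heat released q_release (kJ/kg), for cp(T) = cp_a + cp_b*T.
    The residual is monotone increasing in T, so plain bisection suffices."""
    def residual(t):
        return cp_a * (t - t_in) + 0.5 * cp_b * (t * t - t_in * t_in) - q_release

    lo, hi = t_in, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid        # products still cooler than the balance point
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A real equilibrium solver couples this enthalpy balance to the composition solve, since the product mixture (and hence its enthalpy) changes with temperature.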

  20. Computing multi-species chemical equilibrium with an algorithm based on the reaction extents

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.


    A mathematical model for the solution of a set of chemical equilibrium equations in a multi-species and multiphase chemical system is described. The computer-aided solution of the model is achieved by means of a Newton-Raphson method enhanced with a line-search scheme, which deals with the non-negativity constraints. The residual function, representing the distance to equilibrium, is defined from the chemical potential (or Gibbs energy) of the chemical system. Local minima are potentially avoided by prioritizing the aqueous reactions with respect to the heterogeneous reactions. The formation and release of gas bubbles is taken into account in the model, limiting the concentration of volatile aqueous species to a maximum value given by the gas solubility constant. The reaction extents are used as state variables for the numerical method. As a result, the accepted solution satisfies the charge...
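
The abstract's combination of reaction extents as state variables with a Newton-Raphson iteration and a feasibility-preserving line search can be sketched for the simplest possible case, a single reaction A ⇌ B. This is a toy illustration of the idea, not the paper's multiphase solver:

```python
import math

def equilibrium_extent(K, a0, b0=0.0, tol=1e-12, max_iter=100):
    """Solve the equilibrium A <-> B with K = [B]/[A], using the reaction
    extent xi as the state variable (A = a0 - xi, B = b0 + xi).
    Residual f(xi) = ln K - ln((b0 + xi)/(a0 - xi)) measures the distance to
    equilibrium in chemical-potential (log-activity) terms. Newton-Raphson
    steps are damped by a backtracking line search that keeps both
    concentrations strictly positive."""
    def f(xi):
        return math.log(K) - math.log((b0 + xi) / (a0 - xi))

    xi = 0.5 * (a0 - b0)                 # feasible starting guess
    for _ in range(max_iter):
        fx = f(xi)
        if abs(fx) < tol:
            break
        dfx = -(1.0 / (b0 + xi) + 1.0 / (a0 - xi))   # df/dxi < 0
        step, t = -fx / dfx, 1.0
        while True:                      # backtracking line search
            trial = xi + t * step
            feasible = (b0 + trial > 0.0) and (a0 - trial > 0.0)
            if feasible and abs(f(trial)) < abs(fx):
                xi = trial
                break
            t *= 0.5
            if t < 1e-16:                # step fully damped away; stop
                return xi
    return xi
```

For K = 2 and a0 = 1, b0 = 0 the equilibrium condition B/A = 2 gives xi = 2/3 analytically, which the iteration reproduces.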

  1. Computational Intelligence Techniques for New Product Design

    CERN Document Server

    Chan, Kit Yan; Dillon, Tharam S


    Applying computational intelligence to product design is a fast-growing and promising research area in computer science and industrial engineering. However, there is currently a lack of books that discuss this research area. This book discusses a wide range of computational intelligence techniques for product design. It covers common issues in product design, from identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, and relating design attributes to customer satisfaction, through integration of marketing aspects into product design and affective product design, to quality control of new products. Approaches for the refinement of computational intelligence are discussed in order to address different issues in product design. Case studies of product design, in terms of the development of real-world new products, are included in order to illustrate the design procedures, as well as the effectiveness of the com...

  2. A new algorithm to compute conjectured supply function equilibrium in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Diaz, Cristian A.; Villar, Jose; Campos, Fco Alberto [Institute for Research in Technology, Technical School of Engineering, Comillas Pontifical University, Madrid 28015 (Spain); Rodriguez, M. Angel [Endesa, 28042 Madrid (Spain)


    Several types of market equilibrium approaches, such as Cournot, Conjectural Variation (CVE), Supply Function (SFE) or Conjectured Supply Function (CSFE) equilibria, have been used to model electricity markets for the medium and long term. Among them, CSFE has been proposed as a generalization of the classic Cournot approach. It computes the equilibrium considering the reaction of the competitors against changes in their strategy, combining several characteristics of both CVE and SFE. Unlike linear SFE approaches, strategies are linearized only at the equilibrium point, using their first-order Taylor approximation. But to solve CSFE, the slope or the intercept of the linear approximations must be given, which has proved to be very restrictive. This paper proposes a new algorithm to compute CSFE. Unlike previous approaches, its main contribution is that the competitors' strategies for each generator are initially unknown (both slope and intercept) and are endogenously computed by the new iterative algorithm. To show the applicability of the proposed approach, it has been applied to several case examples, where its qualitative behavior has been analyzed in detail. (author)
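
As a much simpler analogue of the paper's iterative idea, where competitors' strategies start unknown and are updated endogenously until a fixed point is reached, a best-response iteration for a Cournot duopoly (the model CSFE generalizes) can be sketched as follows. The demand and cost parameters are illustrative, not taken from the paper:

```python
def cournot_equilibrium(a=10.0, c=1.0, tol=1e-10, max_iter=10000):
    """Compute the Cournot duopoly equilibrium by iterated best response.
    Inverse demand p = a - (q1 + q2); identical constant marginal cost c.
    Firm i's best response to its rival is q_i = (a - c - q_j) / 2; the
    iteration contracts to the fixed point q1 = q2 = (a - c) / 3, with
    neither firm's strategy given in advance."""
    q1 = q2 = 0.0
    for _ in range(max_iter):
        q1_new = max(0.0, 0.5 * (a - c - q2))
        q2_new = max(0.0, 0.5 * (a - c - q1_new))   # Gauss-Seidel-style update
        if abs(q1_new - q1) + abs(q2_new - q2) < tol:
            return q1_new, q2_new
        q1, q2 = q1_new, q2_new
    return q1, q2
```

In the CSFE setting the analogue of the best-response map updates the slope and intercept of each competitor's linearized supply function rather than a single quantity, but the fixed-point structure is the same.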

  3. Computing Nash Equilibrium in Wireless Ad Hoc Networks: A Simulation-Based Approach

    Directory of Open Access Journals (Sweden)

    Peter Bulychev


    This paper studies the problem of computing Nash equilibria in wireless networks modeled by Weighted Timed Automata. This formalism comes with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem using Statistical Model Checking. The method has been implemented in the UPPAAL model checker and has been applied to the analysis of the Aloha CSMA/CD and IEEE 802.15.4 CSMA/CA protocols.

  4. Performing an Environmental Tax Reform in a regional Economy. A Computable General Equilibrium


    Andre, F.J.; Cardenete, M.A.; Velazquez, E.


    We use a Computable General Equilibrium model to simulate the effects of an Environmental Tax Reform in a regional economy (Andalusia, Spain). The reform involves imposing a tax on CO2 or SO2 emissions and reducing either the Income Tax or the payroll tax paid by employers to Social Security, while possibly keeping the public deficit unchanged. This approach enables us to test the so-called double dividend hypothesis, which states that this kind of reform is likely to improve both environmental and non-...

  5. Evolutionary computation techniques a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego


    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to that of other popular EC methods.

  6. Computed tomography urography technique, indications and limitations. (United States)

    Morcos, Sameh K


    The review discusses the different techniques of computed tomography urography reported in the literature and presents the author's preferred approach. Multiphase computed tomography urography offers a comprehensive evaluation of the urinary tract but at the cost of a large dose of contrast medium (100-150 ml), high radiation dose and massive number of images for interpretation. Diuresis induced by frusemide (10 mg) is reported to improve the depiction of ureters in the excretory phase of the examination. The author's preferred approach is a limited computed tomography urography which includes precontrast scanning of the kidneys, followed by an excretory phase 5 min after intravenous injection of 50 ml of contrast medium and 10 mg of frusemide. This limited examination in the author's experience provides a satisfactory evaluation of the urinary tract in the majority of patients, without inflicting a high radiation dose on the patient. A limited computed tomography urography examination is adequate for the majority of patients requiring excretory urography and a superior replacement of conventional intravenous urography. Information provided by a multiphase computed tomography urography examination is beneficial only in a small number of patients.

  7. Computer Vision Techniques for Transcatheter Intervention. (United States)

    Zhao, Feng; Xie, Xianghua; Roach, Matthew


    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) during the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been largely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for the clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematical review on these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is multi-disciplinary due to its nature, and hence, it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview on the background information of the transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area.

  8. Soft computing techniques in voltage security analysis

    CERN Document Server

    Chakraborty, Kabir


    This book focuses on soft computing techniques for enhancing voltage security in electrical power networks. Artificial neural networks (ANNs) have been chosen as a soft computing tool, since such networks are eminently suitable for the study of voltage security. The different architectures of the ANNs used in this book are selected on the basis of intelligent criteria rather than by a “brute force” method of trial and error. The fundamental aim of this book is to present a comprehensive treatise on power system security and the simulation of power system security. The core concepts are substantiated by suitable illustrations and computer methods. The book describes analytical aspects of operation and characteristics of power systems from the viewpoint of voltage security. The text is self-contained and thorough. It is intended for senior undergraduate students and postgraduate students in electrical engineering. Practicing engineers, Electrical Control Center (ECC) operators and researchers will also...

  9. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA (United States)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads


    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10^6 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
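
The NEGF transmission machinery that TBTRANS implements at scale can be illustrated on the smallest possible example: a single site between two semi-infinite 1D tight-binding leads, where the lead surface Green's function is analytic. This is a textbook sketch of the Caroli formula, unrelated to the TRANSIESTA code base:

```python
import cmath

def transmission(E, t=1.0, eta=1e-12):
    """Landauer/Caroli transmission T(E) = Gamma_L |G|^2 Gamma_R for a single
    site (on-site energy 0) coupled with hopping -t to two semi-infinite 1D
    tight-binding leads. The lead surface Green's function is analytic:
    g(E) = (E -+ sqrt(E^2 - 4 t^2)) / (2 t^2), taking the retarded branch
    with Im(g) <= 0. Inside the band |E| < 2t the chain is ballistic, T = 1."""
    z = E + 1j * eta                     # small imaginary part -> retarded GF
    root = cmath.sqrt(z * z - 4.0 * t * t)
    g = (z - root) / (2.0 * t * t)
    if g.imag > 0.0:                     # pick the physical (decaying) branch
        g = (z + root) / (2.0 * t * t)
    sigma = t * t * g                    # self-energy of one lead on the site
    G = 1.0 / (z - 2.0 * sigma)          # retarded device Green's function
    gamma = -2.0 * sigma.imag            # broadening Gamma = i(Sigma - Sigma+)
    return gamma * gamma * abs(G) ** 2   # Gamma_L = Gamma_R here
```

Inside the band the transmission evaluates to 1 (one open channel); outside the band the self-energy becomes real, the broadening vanishes, and so does T. Production codes compute the same quantities with matrix-valued self-energies and block inversion instead of this scalar special case.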

  10. Economic Assessment of Correlated Energy-Water Impacts using Computable General Equilibrium Modeling (United States)

    Qiu, F.; Andrew, S.; Wang, J.; Yan, E.; Zhou, Z.; Veselka, T.


    Many studies on energy and water are rightfully interested in the interaction of water and energy and their projected interdependence into the future. Water is an essential input to the power sector, and energy is required to pump water for end use, whether in household consumption or in industrial processes. However, existing studies either discuss the issues qualitatively, particularly how a better understanding of the system's interconnectedness is paramount to better policy recommendations, or adopt a partial equilibrium framework in which changes in water use and energy use are considered explicitly without regard to other repercussions throughout the regional, national, and international economic landscapes. While many studies are beginning to ask the right questions, the lack of numerical rigor raises concerns about the conclusions drawn. Most use life cycle analysis as a method for providing numerical results, though this lacks the flexibility that economics can provide. In this study, we perform economic analysis using computable general equilibrium models with energy-water interdependencies captured as an important factor. We attempt to answer important questions raised in these studies: how can we characterize the economic choice of energy technology adoption and its implications for water use in the domestic economy? Moreover, given predictions of reduced rainfall in the near future, how does this impact the water supply in the midst of the energy-water trade-off?

  11. A computable general equilibrium assessment of the impact of illegal immigration on the Greek economy. (United States)

    Sarris, A H; Zografakis, S


    This paper presents a theoretical and empirical analysis of the impact of illegal immigrants on the small open economy of Greece using a multisectoral computable general equilibrium model. The theoretical analysis uses a model showing that there is no unequivocal case for illegal immigration leading to a decline in the real wages of unskilled labor and an increase in the real wages of skilled labor. The empirical analysis uses an applied general equilibrium model for Greece, showing that the inflow of illegal immigrants has reduced the real disposable incomes of two classes of households, namely those headed by an unskilled person and those in the poor and middle-class income brackets. The results also show that the large influx of illegal immigrants is macroeconomically beneficial but has significant adverse distributional implications when flexible wage adjustment is assumed in the various labor markets. Unskilled and hired agricultural workers appear to be among those most severely affected by the inflow of illegal workers. The results are fairly sensitive to the elasticities of labor supply and demand, while they are quite insensitive to the elasticity of substitution in import demand and export supply, as well as to the various parameters describing the structure of the illegal labor market, such as the wage differential between illegal and domestic unskilled labor and the monetary amounts that illegal laborers remit abroad.

  12. Hybrid computer techniques for solving partial differential equations (United States)

    Hammond, J. L., Jr.; Odowd, W. M.


    The techniques overcome equipment limitations that restrict other computer techniques to trivial cases. The use of curve fitting by quadratic interpolation greatly reduces the required digital storage space.
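
The storage-saving device mentioned above, curve fitting by quadratic interpolation, amounts to storing coarse samples and reconstructing intermediate values with a three-point Lagrange polynomial. A minimal sketch:

```python
def quadratic_interp(xs, ys, x):
    """Three-point Lagrange quadratic interpolation. Instead of storing a
    function on a fine grid, a solver can store coarse samples (xs, ys) and
    reconstruct intermediate values on demand, trading a little computation
    for a large reduction in storage."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# Exact for any quadratic, e.g. f(x) = 2x^2 - 3x + 1 sampled at x = 0, 1, 2.
```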

  13. Soil-water characteristics of Gaomiaozi bentonite by vapour equilibrium technique

    Directory of Open Access Journals (Sweden)

    Wenjing Sun


    Soil-water characteristics of Gaomiaozi (GMZ) Ca-bentonite at high suctions (3–287 MPa) are measured by the vapour equilibrium technique. The soil-water retention curve (SWRC) of samples with the same initial compaction state is obtained for drying and wetting processes. At high suctions, hysteresis is not obvious in the relationship between water content and suction, while the opposite holds for the relationship between degree of saturation and suction. Suction variation can change both the water retention behaviour and the void ratio; moreover, changes in void ratio can bring about changes in degree of saturation. Therefore, the total change in degree of saturation comprises the change caused by suction and that caused by void ratio. In the space of degree of saturation versus suction, the SWRC at constant void ratio shifts toward higher suctions with decreasing void ratio, whereas the relationship between water content and suction is less affected by changes in void ratio. At a constant suction, the degree of saturation decreases approximately linearly with increasing void ratio; the slope of this line decreases with increasing suction, and the two show an approximately linear relationship on a semi-logarithmic scale. From this linear relationship, the variation in degree of saturation caused by a change in void ratio can be obtained; correspondingly, the SWRC at a constant void ratio can be determined from SWRCs at different void ratios.

  14. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Boero, Riccardo [Los Alamos National Laboratory; Edwards, Brian Keith [Los Alamos National Laboratory


    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  15. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)


    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC improved the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each simulation compared an economic baseline with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing-sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  16. Computer simulation, nuclear techniques and surface analysis

    Directory of Open Access Journals (Sweden)

    Reis, A. D.


    This article is about computer simulation and surface analysis by nuclear techniques, which are non-destructive. The “energy method of analysis” for nuclear reactions is used. Energy spectra are computer simulated and compared with experimental data, giving target composition and concentration profile information. Details of the prediction stages are given for thick, flat target yields. Predictions are made for non-flat targets having asymmetric triangular surface contours. The method is successfully applied to depth profiling of 12C and 18O nuclei in thick targets, by deuteron (d,p) and proton (p,α) induced reactions, respectively.


  17. Assessing economic impacts of China’s water pollution mitigation measures through a dynamic computable general equilibrium analysis.

    NARCIS (Netherlands)

    Qin, Changbo; Qin, Changbo; Bressers, Johannes T.A.; Su, Zhongbo; Jia, Yangwen; wang, Hao


    In this letter, we apply an extended environmental dynamic computable general equilibrium model to assess the economic consequences of implementing a total emission control policy. On the basis of emission levels in 2007, we simulate different emission reduction scenarios, ranging from 20 to 50%

  18. China’s Rare Earths Supply Forecast in 2025: A Dynamic Computable General Equilibrium Analysis

    Directory of Open Access Journals (Sweden)

    Jianping Ge


    The supply of rare earths in China has been the focus of significant attention in recent years. Due to changes in regulatory policies and the development of strategic emerging industries, it is critical to investigate the scenario of rare earth supplies in 2025. To address this question, this paper constructed a dynamic computable general equilibrium (DCGE) model to forecast the production, domestic supply, and export of China's rare earths in 2025. Based on our analysis, production will increase by 10.8%–12.6% and reach 116,335–118,260 tons of rare-earth oxide (REO) in 2025, given the recent extraction controls during 2011–2016. Moreover, domestic supply and export will be 75,081–76,800 tons REO and 38,797–39,400 tons REO, respectively. Technological improvements in substitution and recycling will significantly decrease the supply and mining activities of rare earths. From a policy perspective, we found that the elimination of export regulations, including export quotas and export taxes, does have a negative impact on China's future domestic supply of rare earths. Policy conflicts between increased investment in strategic emerging industries and increased resource and environmental taxes on rare earths will also affect China's rare earth supply in the future.


    Namazu, Michiko; Fujimori, Shinichiro; Matsuoka, Yuzuru

    In this study, a recursive dynamic Computable General Equilibrium (CGE) model that can handle Greenhouse Gas (GHG) constraints is applied to Japan. Based on several references, Japan's emissions reduction targets are set in this study as a 25% reduction from the 1990 level by 2020 and an 80% reduction from the 2005 level by 2050. Several cases with different scenarios for nuclear power plants, international emissions trading, and CO2 Capture and Storage (CCS) technology are simulated using the CGE model. By comparing the results of each case, the effects, especially the economic effects, are evaluated and analyzed quantitatively. The results show that the most important factor in achieving GHG emissions reduction targets in Japan is whether Japan joins international emissions trading. In a no-trading case, in which GHG emissions constraints are imposed and Japan does not participate in trading, GHG reduction costs reach 2,560 USD/tCO2-eq·yr (2005 price) in 2050. In addition, Gross Domestic Product (GDP) decreases by 3.8% compared with a countermeasure case in which GHG constraints are imposed but emissions trading is allowed. The results also show that if Japan targets no nuclear power plants in 2050, CCS technology and emissions trading can make up for the gap resulting from the nuclear power decrease. Regarding the speed of CCS introduction, the share of power plants with CCS technology changes depending on that speed; however, GDP and GHG reduction costs are not affected much.

  20. The economic impacts of the September 11 terrorist attacks: a computable general equilibrium analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oladosu, Gbadebo A [ORNL; Rose, Adam [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois; Asay, Gary [University of Southern California


    This paper develops a bottom-up approach that focuses on behavioral responses in estimating the total economic impacts of the September 11, 2001, World Trade Center (WTC) attacks. The estimation includes several new features. First is the collection of data on the relocation of firms displaced by the attack, the major source of resilience in muting the direct impacts of the event. Second is a new estimate of the major source of off-site impacts: the ensuing decline of air travel and related tourism in the U.S. due to the social amplification of the fear of terrorism. Third, the estimation is performed for the first time using Computable General Equilibrium (CGE) analysis, including a new approach to reflecting the direct effects of external shocks. This modeling framework has many advantages in this application, such as the ability to include behavioral responses of individual businesses and households, to incorporate features of inherent and adaptive resilience at the level of the individual decision maker and the market, and to gauge quantity and price interaction effects across sectors of the regional and national economies. We find that the total business interruption losses from the WTC attacks on the U.S. economy were only slightly over $100 billion, or less than 1.0% of Gross Domestic Product. The impact on the New York Metropolitan Area was a loss of only $14 billion of Gross Regional Product.

  1. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang


    To alleviate increasingly serious water pollution and shortages in developing countries, various policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. First, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Second, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP); however, wastewater may be effectively controlled. This study also demonstrates that, along with the economic costs, the increase in the discharge fee will lead to the upgrading of industrial structures from heavy pollution to light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  2. Computationally feasible estimation of haplotype frequencies from pooled DNA with and without Hardy-Weinberg equilibrium. (United States)

    Kuk, Anthony Y C; Zhang, Han; Yang, Yaning


    Pooling large numbers of DNA samples is a common practice in association studies, especially for initial screening. However, the use of expectation-maximization (EM)-type algorithms in estimating haplotype distributions for even moderate pool sizes is hampered by the computational complexity involved. A novel constrained EM algorithm called PoooL has been proposed recently to bypass the difficulty via the use of asymptotic normality of the pooled allele frequencies. The resulting estimates are, however, not maximum likelihood estimates and hence not optimal. Furthermore, the assumption of Hardy-Weinberg equilibrium (HWE) made may not be realistic in practice. Rather than carrying out constrained maximization as in PoooL, we revert to the usual EM algorithm but make it computationally feasible by using normal approximations. The resulting algorithm is much simpler to implement than PoooL because there is no need to invoke sophisticated iterative scaling methods as in PoooL. We also develop an estimating equation analogue of the EM algorithm for the case of Hardy-Weinberg disequilibrium (HWD) by conditioning on the haplotypes of both chromosomes of the same individual. Incorporated into the method is a way of estimating the inbreeding coefficient by relating it to overdispersion. A simulation study assuming HWE shows that our simplified implementation of the EM algorithm leads to estimates with substantially smaller SDs than PoooL estimates. Further simulations show that ignoring HWD will induce biases in the estimates. Our extended method, with estimation of the inbreeding coefficient incorporated, is able to reduce the bias, leading to estimates with substantially smaller mean square errors. We also present results to suggest that our method can cope with a certain degree of locus-specific inbreeding as well as additional overdispersion not caused by inbreeding.
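    The pooled-DNA, normal-approximation variant described in the abstract is not reproduced here; as a hedged baseline, the textbook two-locus EM for unpooled, unphased genotypes illustrates the E-step/M-step structure the method builds on (all data below are invented):

    ```python
    from itertools import product

    def em_haplotypes(genotypes, iters=200):
        """Classic two-locus EM for haplotype frequencies from unphased
        genotypes. Each genotype is (g1, g2): copies (0/1/2) of allele '1'
        at locus 1 and locus 2. Haplotypes are coded (a1, a2), a in {0,1}.
        A textbook baseline, NOT the pooled-DNA PoooL-style variant."""
        haps = list(product((0, 1), repeat=2))
        p = {h: 0.25 for h in haps}                     # uniform start
        for _ in range(iters):
            counts = {h: 0.0 for h in haps}
            for g1, g2 in genotypes:
                # E-step: enumerate ordered haplotype pairs consistent with
                # the observed genotype, weighted by current frequencies
                pairs = [(h, k) for h in haps for k in haps
                         if h[0] + k[0] == g1 and h[1] + k[1] == g2]
                tot = sum(p[h] * p[k] for h, k in pairs)
                for h, k in pairs:
                    w = p[h] * p[k] / tot
                    counts[h] += w
                    counts[k] += w
            # M-step: renormalize expected haplotype counts
            n = sum(counts.values())
            p = {h: c / n for h, c in counts.items()}
        return p

    # Two phase-known homozygotes plus one ambiguous double heterozygote:
    freqs = em_haplotypes([(2, 2), (0, 0), (1, 1)])
    ```

    Here the two homozygotes anchor the (1,1) and (0,0) haplotypes, so the EM resolves the double heterozygote's phase in their favor.
    
    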

  3. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    DEFF Research Database (Denmark)

    Papior, Nick Rübner; Lorente, Nicolás; Frederiksen, Thomas


    We present novel methods implemented within the non-equilibrium Green function code (NEGF) TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT–NEGF code handles devices with one or multiple electrodes (Ne≥1) with individual chemical potentials and electronic tem...

  4. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola


    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  5. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques (United States)


    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  6. New Information Dispersal Techniques for Trustworthy Computing (United States)

    Parakh, Abhishek


    Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…

  7. Computer graphics techniques and computer-generated movies (United States)

    Holzman, Robert E.; Blinn, James F.


    The JPL Computer Graphics Laboratory (CGL) has been using advanced computer graphics for more than ten years to simulate space missions and related activities. Applications have ranged from basic computer graphics used interactively to allow engineers to study problems, to sophisticated color graphics used to simulate missions and produce realistic animations and stills for use by NASA and the scientific press. In addition, the CGL did the computer animation for "Cosmos", a series of general science programs made for Public Television in the United States by Carl Sagan and shown world-wide. The CGL recently completed the computer animation for "The Mechanical Universe", a series of fifty-two half-hour elementary physics lectures, led by Professor David Goodstein of the California Institute of Technology, and now being shown on Public Television in the US. For this series, the CGL produced more than seven hours of computer animation, averaging approximately eight minutes and thirty seconds of computer animation per half-hour program. Our aim at the CGL is the realistic depiction of physical phenomena; that is, we deal primarily in "science education" rather than in scientific research. Of course, our attempts to render physical events realistically often require the development of new capabilities through research or technology advances, but those advances are not our primary goal.

  8. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction. (United States)

    Nezarat, Amin; Dastghaibifard, G H


    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium, where players are no longer inclined to alter their bids for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and provides the most utility to the provider.
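    The paper's actual utility functions and auction rules are not reproduced here; a minimal sketch of a repeated-bidding game is a proportional-share (Kelly-style) resource auction, where each round every bidder best-replies to the others and play stops once no bidder wants to deviate, i.e. at a (grid) Nash equilibrium:

    ```python
    def best_response_auction(valuations, step=0.05, max_rounds=100):
        """Proportional-share resource auction: bidder i receives fraction
        b_i / sum(b) of the resource and pays b_i, so its utility is
        u_i = v_i * b_i / sum(b) - b_i. Each round every bidder grid-searches
        its best reply to the current bids of the others; iteration stops
        when no bid changes. Illustrative only -- not the paper's mechanism."""
        grid = [step * k for k in range(1, 200)]
        bids = [step] * len(valuations)
        for _ in range(max_rounds):
            changed = False
            for i, v in enumerate(valuations):
                others = sum(bids) - bids[i]
                # best reply of bidder i, holding the other bids fixed
                best = max(grid, key=lambda b: v * b / (b + others) - b)
                if abs(best - bids[i]) > 1e-12:
                    bids[i] = best
                    changed = True
            if not changed:      # no player wants to deviate: equilibrium
                break
        return bids

    eq = best_response_auction([8.0, 8.0])   # symmetric case: b* = v/4 = 2
    ```

    For two symmetric bidders the first-order condition v * b_other / (b + b_other)^2 = 1 gives the equilibrium bid v/4, which the grid search reaches in a few rounds.
    
    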

  9. The mineral sector and economic development in Ghana: A computable general equilibrium analysis (United States)

    Addy, Samuel N.

    A computable general equilibrium (CGE) model is formulated for conducting mineral policy analysis in the context of national economic development for Ghana. The model, called GHANAMIN, places strong emphasis on production, trade, and investment. It can be used to examine both micro and macro economic impacts of policies associated with mineral investment, taxation, and terms of trade changes, as well as mineral sector performance impacts due to technological change or the discovery of new deposits. Its economywide structure enables the study of broader development policy with a focus on individual or multiple sectors, simultaneously. After going through a period of contraction for about two decades, mining in Ghana has rebounded significantly and is currently the main foreign exchange earner. Gold alone contributed 44.7 percent of 1994 total export earnings. GHANAMIN is used to investigate the economywide impacts of mineral tax policies, world market mineral prices changes, mining investment, and increased mineral exports. It is also used for identifying key sectors for economic development. Various simulations were undertaken with the following results: Recently implemented mineral tax policies are welfare increasing, but have an accompanying decrease in the output of other export sectors. World mineral price rises stimulate an increase in real GDP; however, this increase is less than the real GDP decreases associated with price declines. Investment in the non-gold mining sector increases real GDP more than investment in gold mining, because of the former's stronger linkages to the rest of the economy. Increased mineral exports are very beneficial to the overall economy. Foreign direct investment (FDI) in mining increases welfare more so than domestic capital, which is very limited. Mining investment and the increased mineral exports since 1986 have contributed significantly to the country's economic recovery, with gold mining accounting for 95 percent of the

  10. Modeling the economic costs of disasters and recovery: analysis using a dynamic computable general equilibrium model (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.


    Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic costs of disasters and recovery, accounting for the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and, furthermore, avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in an existing period; an investment-driven dynamic model is formulated according to available reconstruction data, and the rest of a given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 is found to be closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of government-led reconstruction activity, a level that would take another four years to reach under S2.
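    The scenario accounting described above reduces to simple differences against the baseline; the figures below are invented for illustration, not the paper's Wenchuan results:

    ```python
    # S0 is business-as-usual, S1 includes reconstruction investment, S2
    # excludes it. Economic loss in each period is the shortfall relative
    # to S0, summed over the recovery horizon. All numbers are illustrative.
    s0 = [100, 104, 108, 112, 116]          # baseline output path
    s1 = [100, 96, 102, 107, 113]           # disaster + reconstruction
    s2 = [100, 94, 98, 103, 108]            # disaster, no reconstruction

    loss_with_recon = sum(a - b for a, b in zip(s0, s1))
    loss_without_recon = sum(a - b for a, b in zip(s0, s2))

    # Ratio of losses excluding vs. including reconstruction investment
    ratio = loss_without_recon / loss_with_recon
    ```
    
    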

  11. Cloud Computing Techniques for Space Mission Design (United States)

    Arrieta, Juan; Senent, Juan


    The overarching objective of space mission design is to tackle complex problems, producing better results faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  12. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C


    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  13. CO2, energy and economy interactions: A multisectoral, dynamic, computable general equilibrium model for Korea (United States)

    Kang, Yoonyoung

    While vast resources have been invested in the development of computational models for cost-benefit analysis for the "whole world" or for the largest economies (e.g. United States, Japan, Germany), the remainder have been thrown together into one model for the "rest of the world." This study presents a multi-sectoral, dynamic, computable general equilibrium (CGE) model for Korea. This research evaluates the impacts of controlling CO2 emissions using a multisectoral CGE model. This CGE economy-energy-environment model analyzes and quantifies the interactions between CO2, energy and economy. This study examines interactions and influences of key environmental policy components: applied economic instruments, emission targets, and environmental tax revenue recycling methods. The most cost-effective economic instrument is the carbon tax. The economic effects discussed include impacts on main macroeconomic variables (in particular, economic growth), sectoral production, and the energy market. This study considers several aspects of various CO2 control policies, such as the basic variables in the economy: capital stock and net foreign debt. The results indicate emissions might be stabilized in Korea at the expense of economic growth and with dramatic sectoral allocation effects. Carbon dioxide emissions stabilization could be achieved to the tune of a 600 trillion won loss over a 20 year period (1990-2010). The average annual real GDP would decrease by 2.10% over the simulation period compared to the 5.87% increase in the Business-as-Usual. This model satisfies an immediate need for a policy simulation model for Korea and provides the basic framework for similar economies. It is critical to keep the central economic question at the forefront of any discussion regarding environmental protection. How much will reform cost, and what does the economy stand to gain and lose? Without this model, the policy makers might resort to hesitation or even blind speculation.

  14. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo


    This Special Edited Volume is a unique approach towards computational solutions for the emerging field of study called Vision Science. Optics, ophthalmology, and optical science have come a long way in optimizing the configurations of optical systems, surveillance cameras and other nano-optical devices with the help of nanoscience and technology, yet these systems still fall short of the computational capability needed to approach the human vision system. In this edited volume much attention has been given to the coupling issues between computational science and vision studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  15. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos


    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  16. Statistical Techniques in Electrical and Computer Engineering

    Indian Academy of Sciences (India)

    Stochastic models and statistical inference from them have been popular methodologies in a variety of engineering disciplines, notably in electrical and computer engineering. Recent years have seen explosive growth in this area, driven by technological imperatives. These now go well beyond their traditional domain of ...

  17. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.


    Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems...... appear along the planning process. In this context, technical literature about optimization techniques applied to microgrid planning have been reviewed and the guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...

  18. SLDR: a computational technique to identify novel genetic regulatory relationships


    Yue, Zongliang; Wan, Ping; Huang, Hui; Xie, Zhan; Chen, Jake Y


    We developed a new computational technique called Step-Level Differential Response (SLDR) to identify genetic regulatory relationships. Our technique takes advantage of functional genomics data for the same species under different perturbation conditions and is therefore complementary to current popular computational techniques. It can particularly identify "rare" activation/inhibition relationship events that can be difficult to find in experimental results. In SLDR, we model each candidate targe...

  19. Computational techniques in queueing and fluctuation theory

    NARCIS (Netherlands)

    Mohammad Asghari, N.


    The main objective of this thesis is to develop numerical techniques to calculate the probability distribution of the running maximum of Lévy processes, and to consider a number of specific financial applications. The other objective is to propose a numerical method to optimize the energy consumption
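    The thesis's numerical techniques are not reproduced here; for standard Brownian motion (the simplest Lévy process) the running maximum has the closed-form law P(max over [0,T] of W > a) = 2 P(W_T > a) by the reflection principle, which a crude Monte Carlo sketch can check:

    ```python
    import math
    import random

    def mc_running_max_prob(a, T=1.0, n_paths=5000, n_steps=200, seed=1):
        """Monte Carlo estimate of P(max_{0<=t<=T} W_t > a) for standard
        Brownian motion by Euler simulation. The discrete-time maximum
        slightly understates the continuous one; finer grids shrink that
        bias. Illustrative only -- not the thesis's actual methods for
        general Levy processes."""
        rng = random.Random(seed)
        s = math.sqrt(T / n_steps)
        hits = 0
        for _ in range(n_paths):
            w = 0.0
            for _ in range(n_steps):
                w += s * rng.gauss(0.0, 1.0)
                if w > a:        # barrier crossed: count path, stop early
                    hits += 1
                    break
        return hits / n_paths

    # Reflection principle: P(max > a) = 2 * P(W_T > a) = erfc(a / sqrt(2T))
    a = 1.0
    exact = math.erfc(a / math.sqrt(2.0))
    est = mc_running_max_prob(a)
    ```

    With 200 time steps the estimate sits a little below the exact value 0.3173 because the discrete grid misses some excursions above the barrier.
    
    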

  20. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab


    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS carried out successful operations, and for reaching an adequate and adaptive modelling of CMS operations that allows detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
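    The replica-counting aggregation mentioned above can be sketched in MapReduce style; the record layout and dataset/tier names below are hypothetical, not actual CMS monitoring schemas:

    ```python
    from collections import defaultdict

    # Toy MapReduce-style aggregation: count disk replicas per dataset
    # across tiers. Records (dataset, tier) are invented for illustration.
    records = [
        ("/DatasetA", "T1_US_FNAL"),
        ("/DatasetA", "T2_CH_CERN"),
        ("/DatasetB", "T2_CH_CERN"),
        ("/DatasetA", "T2_DE_DESY"),
    ]

    def mapper(record):
        dataset, _tier = record
        yield dataset, 1                 # emit one count per replica

    def reducer(pairs):
        totals = defaultdict(int)
        for key, value in pairs:         # shuffle + reduce: sum per key
            totals[key] += value
        return dict(totals)

    replica_counts = reducer(kv for r in records for kv in mapper(r))
    ```

    On Hadoop the shuffle and reduce are distributed across the cluster; the per-key summation logic is the same.
    
    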

  2. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model (United States)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.


    The computer codes developed provide data up to 30,000 K for the thermodynamic and transport properties of individual species and reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. These properties and the reaction-rate data are computed through the use of curve-fit relations which are functions of temperature (and of number density for the equilibrium constant). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232.

  3. Dentomaxillofacial imaging with computed-radiography techniques: a preliminary study (United States)

    Shaw, Chris C.; Kapa, Stanley F.; Furkart, Audrey J.; Gur, David


    A preliminary study was conducted to investigate the feasibility of using high resolution computed radiography techniques for dentomaxillofacial imaging. Storage phosphors were cut into various sizes and used with an experimental laser scanning reader for three different imaging procedures: intraoral, cephalometric and panoramic. Both phantom and patient images were obtained for comparing the computed radiography technique with the conventional screen/film or dental film techniques. It has been found that current computed radiography techniques are largely adequate for cephalometric and panoramic imaging but need further improvement in their spatial resolution capability for intraoral imaging. In this paper, the methods of applying the computed radiography techniques to dentomaxillofacial imaging are described and discussed. Images of phantoms, resolution bar patterns and patients are presented and compared. Issues of image quality and cost are discussed.

  4. Virtual computed tomography gastroscopy: a new technique. (United States)

    Springer, P; Dessl, A; Giacomuzzi, S M; Buchberger, W; Stöger, A; Oberwalder, M; Jaschke, W


    The aim of the present study was to establish a suitable method for virtual computed tomography (CT) gastroscopy. Three-millimeter helical CT scans of a pig stomach were obtained after air insufflation and instillation of diluted diatrizoic acid (Gastrografin), and with double contrast. In addition, three patients with gastric tumors were studied after ingestion of an effervescent agent (Duplotrast, 6 g) and intravenous injection of hyoscine butylbromide (Buscopan, 1 ml). Virtual endoscopy images were computed on a Sun Sparc 20 workstation (128 megabytes of random access memory, four gigabytes of hard disk space), using dedicated software (Navigator, General Electric Medical System Company). The endoscopy sequences were compared with real endoscopic examinations and with anatomical specimens. In the cadaver studies, the best results were obtained with plain air insufflation, whereas virtual CT gastroscopy with diluted contrast and with double contrast showed artifacts simulating polyps, erosions, and flat ulcers. Patient studies showed good correlation with the fiberoptic endoscopy findings, although large amounts of retained gastric fluid substantially reduced the quality of the surface reconstruction. These preliminary results show that virtual CT gastroscopy is able to provide insights into the upper gastrointestinal tract similar to those of fiberoptic endoscopy. However, due to the limited spatial resolution of the CT protocol used, as well as inherent image artifacts associated with the Navigator program's reconstruction algorithm, the form of virtual CT gastroscopy studied was not capable of competing with the imaging quality provided by fiberoptic gastroscopy.

  5. A computer graphics display technique for the examination of aircraft design data (United States)

    Talcott, N. A., Jr.


    An interactive computer graphics technique has been developed for quickly sorting and interpreting large amounts of aerodynamic data. It utilizes a graphic representation rather than numbers. The geometry package represents the vehicle as a set of panels, which are ordered in groups of ascending values (e.g., equilibrium temperatures). The groups are then displayed successively on a CRT, building up to the complete vehicle. A zoom feature allows displaying only the panels with values between certain limits. The addition of color allows a one-time display, thus eliminating the need for a display build-up.
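    The data handling behind this display technique -- binning panels into ascending-value groups and "zooming" to a value range -- can be sketched as follows; panel names and temperatures are invented for illustration, and the original graphics code is not reproduced:

    ```python
    # Panels as (name, value) pairs, e.g. equilibrium temperature in kelvin.
    panels = [("p1", 310.0), ("p2", 455.0), ("p3", 520.0),
              ("p4", 610.0), ("p5", 480.0), ("p6", 395.0)]

    def group_panels(panels, edges):
        """Order panels into ascending-value groups bounded by edges,
        for successive display building up to the complete vehicle."""
        groups = [[] for _ in range(len(edges) - 1)]
        for name, value in sorted(panels, key=lambda p: p[1]):
            for i in range(len(edges) - 1):
                if edges[i] <= value < edges[i + 1]:
                    groups[i].append(name)
                    break
        return groups

    def zoom(panels, lo, hi):
        """'Zoom' feature: keep only panels with values between limits."""
        return [name for name, value in panels if lo <= value <= hi]

    groups = group_panels(panels, edges=[300, 400, 500, 700])
    zoomed = zoom(panels, 450, 550)
    ```
    
    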

  6. Evolutionary Computation Techniques for Predicting Atmospheric Corrosion

    Directory of Open Access Journals (Sweden)

    Amine Marref


    Full Text Available Corrosion occurs in many engineering structures such as bridges, pipelines, and refineries; it destroys materials gradually and thus shortens their lifespan. It is therefore crucial to assess the structural integrity of engineering structures which are approaching or exceeding their designed lifespan in order to ensure their correct functioning, for example, their carrying ability and safety. An understanding of corrosion and an ability to predict the corrosion rate of a material in a particular environment play a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions, while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions with good accuracy.
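    The paper's actual GP/GA configurations are not reproduced here; a minimal sketch of the second idea -- a genetic algorithm evolving the parameters of an already engineered expression -- fits the classic atmospheric-corrosion power law C = A * t^n to synthetic, noise-free data:

    ```python
    import random

    def ga_fit_power_law(times, losses, pop=60, gens=150, seed=3):
        """Minimal real-coded genetic algorithm evolving the parameters
        (A, n) of the power law C = A * t**n. Tournament selection, blend
        crossover, Gaussian mutation. Illustrative sketch only."""
        rng = random.Random(seed)

        def mse(ind):
            a, n = ind
            return sum((a * t ** n - c) ** 2
                       for t, c in zip(times, losses)) / len(times)

        population = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 1.0))
                      for _ in range(pop)]
        best = min(population, key=mse)
        for _ in range(gens):
            nxt = []
            for _ in range(pop):
                p1 = min(rng.sample(population, 3), key=mse)   # tournament
                p2 = min(rng.sample(population, 3), key=mse)
                w = rng.random()                               # blend crossover
                child = [w * x + (1.0 - w) * y for x, y in zip(p1, p2)]
                if rng.random() < 0.3:                         # mutation
                    i = rng.randrange(2)
                    child[i] += rng.gauss(0.0, 0.05)
                nxt.append((min(max(child[0], 0.0), 2.0),
                            min(max(child[1], 0.0), 1.0)))
            population = nxt
            gen_best = min(population, key=mse)                # keep best ever
            if mse(gen_best) < mse(best):
                best = gen_best
        return best

    # Synthetic "measurements" generated from A = 0.8, n = 0.5 (invented)
    ts = [1, 2, 4, 8, 16, 32]
    cs = [0.8 * t ** 0.5 for t in ts]
    a_fit, n_fit = ga_fit_power_law(ts, cs)
    ```
    
    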

  7. Computational techniques used in the development of coprocessing flowsheets [SEPHIS]

    Energy Technology Data Exchange (ETDEWEB)

    Groenier, W. S.; Mitchell, A. D.; Jubin, R. T.


    The computer program SEPHIS, developed to aid in determining optimum solvent extraction conditions for the reprocessing of nuclear power reactor fuels by the Purex method, is described. The program employs a combination of approximate mathematical equilibrium expressions and a transient, stagewise-process calculational method to allow stage and product-stream concentrations to be predicted with accuracy and reliability. The possible applications to inventory control for nuclear material safeguards, nuclear criticality analysis, and process analysis and control are of special interest. The method is also applicable to other countercurrent liquid-liquid solvent extraction processes that have known chemical kinetics, may involve multiple solutes, and are performed in conventional contacting equipment.
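    SEPHIS itself is not reproduced here; a minimal sketch of the stagewise idea uses a linear equilibrium y = D*x (a strong simplification of Purex chemistry) and cross-checks the steady state against the closed-form Kremser equation:

    ```python
    def countercurrent_raffinate(x_feed, n_stages, dist_coeff,
                                 solvent_to_feed, iters=2000):
        """Steady state of a countercurrent liquid-liquid extraction
        cascade with LINEAR equilibrium y = D * x. Feed enters stage 1,
        fresh solvent enters stage N; feed rate F is normalized to 1.
        Solved by fixed-point iteration of the stage mass balances;
        returns the raffinate concentration leaving stage N."""
        D, S = dist_coeff, solvent_to_feed
        x = [x_feed] * n_stages
        for _ in range(iters):
            x_new = []
            for n in range(n_stages):
                x_in = x_feed if n == 0 else x[n - 1]
                y_in = 0.0 if n == n_stages - 1 else D * x[n + 1]
                # stage balance: F*x_in + S*y_in = F*x_n + S*y_n, y_n = D*x_n
                x_new.append((x_in + S * y_in) / (1.0 + S * D))
            x = x_new
        return x[-1]

    # Kremser equation for a linear system: x_N / x_0 = (E - 1) / (E**(N+1) - 1)
    # with extraction factor E = D * S / F.
    E = 1.5 * 2.0                 # D = 1.5, S/F = 2.0
    N = 5
    kremser = (E - 1.0) / (E ** (N + 1) - 1.0)
    model = countercurrent_raffinate(1.0, N, 1.5, 2.0)
    ```

    With N = 1 both reduce to x_1/x_0 = 1/(1+E), a quick hand check of the stage balance.
    
    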

  8. An improved data transfer and storage technique for hybrid computation (United States)

    Hansing, A. M.


    An improved technique was developed for transferring and storing data at faster-than-real-time speeds on a hybrid computer. Its predominant advantage is the combined use of electronic relays, track-and-store units, and the analog-to-digital and digital-to-analog conversion units of the hybrid computer.

  9. Computation Techniques for the Volume of a Tetrahedron (United States)

    Srinivasan, V. K.


    The purpose of this article is to discuss specific techniques for the computation of the volume of a tetrahedron. A few of them are taught in undergraduate multivariable calculus courses; others are found in textbooks on coordinate geometry and synthetic solid geometry. This article gathers many of these techniques so as to constitute a…
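    One technique commonly covered in such courses is the scalar triple product: for vertices a, b, c, d, the volume is |det[b-a, c-a, d-a]| / 6. A minimal sketch:

    ```python
    def tetra_volume(a, b, c, d):
        """Volume of a tetrahedron from its four vertices via the scalar
        triple product: V = |det[b-a, c-a, d-a]| / 6."""
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        w = [d[i] - a[i] for i in range(3)]
        det = (u[0] * (v[1] * w[2] - v[2] * w[1])
               - u[1] * (v[0] * w[2] - v[2] * w[0])
               + u[2] * (v[0] * w[1] - v[1] * w[0]))
        return abs(det) / 6.0

    # Unit right tetrahedron: volume 1/6
    vol = tetra_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
    ```

    A zero result flags four coplanar points, so the same formula doubles as a degeneracy test.
    
    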

  10. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.


    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in

  11. Non-equilibrium grain boundaries in titanium nanostructured by severe plastic deformation: Computational study of sources of material strengthening

    DEFF Research Database (Denmark)

    Liu, Hongsheng; Mishnaevsky, Leon; Pantleon, Wolfgang


    A computational model of ultrafine grained (UFG) or nanostructured titanium (Ti), based on a finite element (FE) unit cell model of the material and a dislocation density based model of plastic deformation has been developed. FE simulations of tensile deformation of UFG Ti with different fractions...... and properties of the grain boundary (GB) phase have been carried out. The effect of different degrees of deviation from the equilibrium state of the grain boundaries (GBs) on the mechanical behaviour of nanostructured Ti have been investigated using the combined composite/dislocation dynamics based model...

  12. Development and Assessment of a Computer-Based Equation of State for Equilibrium Air (United States)


    and a thermally imperfect term. This philosophy will be used again and extended later. Regardless of the approach, pressure explicit or free-energy... below ambient was about 1.29*10^-10 kg/m^3. The limit for the current EOS is 10^-12 kg/m^3. The reduced minimum is more for aesthetic reasons than for... atomic species present at equilibrium but neglected in the AEDC Mollier 2008 EOS. Approach: The philosophy for assessing the accuracy in high

  13. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)


    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.
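    The tokamak edge-plasma solver is not reproduced here; a minimal sketch of the matrix-free Newton-Krylov idea on a toy 2x2 nonlinear system shows the two ingredients: Jacobian-vector products approximated by a finite difference of the residual, and an unpreconditioned GMRES built from an Arnoldi basis (no ILU, damping, multigrid, or pseudo-transient continuation):

    ```python
    import numpy as np

    def gmres_solve(matvec, b, m=None):
        """Minimal unpreconditioned GMRES: build an Arnoldi basis of the
        Krylov space and solve the small least-squares problem."""
        m = b.size if m is None else min(m, b.size)
        beta = np.linalg.norm(b)
        if beta == 0.0:
            return b.copy()
        Q = [b / beta]
        H = np.zeros((m + 1, m))
        k = m
        for j in range(m):
            w = matvec(Q[j])
            for i in range(j + 1):           # modified Gram-Schmidt
                H[i, j] = Q[i] @ w
                w = w - H[i, j] * Q[i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:          # happy breakdown
                k = j + 1
                break
            Q.append(w / H[j + 1, j])
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        return np.column_stack(Q[:k]) @ y

    def newton_krylov(F, u0, tol=1e-9, max_iter=30, fd_eps=1e-7):
        """Matrix-free Newton-Krylov: the Jacobian is never formed; J*v is
        approximated by a finite difference of the residual. Toy version
        without globalization or preconditioning."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            Jv = lambda v: (F(u + fd_eps * v) - r) / fd_eps
            u = u + gmres_solve(Jv, -r)
        return u

    # Toy system: x^2 + y^2 = 4 and x = y  ->  root (sqrt(2), sqrt(2))
    F = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 4.0, u[0] - u[1]])
    root = newton_krylov(F, [1.0, 1.0])
    ```

    For stiff, poorly conditioned systems like the one in the abstract, a preconditioner (e.g. ILU) and globalization are essential; this sketch omits them for clarity.
    
    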

  14. Determination of free Zn2+ concentration in synthetic and natural samples with AGNES (Absence of Gradients and Nernstian Equilibrium Stripping) and DMT (Donnan Membrane Techniques)

    NARCIS (Netherlands)

    Chito, D.; Weng, L.P.; Galceran, J.; Companys, E.; Puy, J.; Riemsdijk, van W.H.; Leeuwen, van H.P.


    The determination of free Zn2+ ion concentration is a key in the study of environmental systems like river water and soils, due to its impact on bioavailability and toxicity. AGNES (Absence of Gradients and Nernstian Equilibrium Stripping) and DMT (Donnan Membrane Technique) are emerging techniques

  15. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    Directory of Open Access Journals (Sweden)

    Hrubý Jan


    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
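
    The gain from choosing density and internal energy as independent variables can be illustrated with a toy lookup table. Here an ideal gas (with a hypothetical gamma = 1.3) stands in for the IAPWS-95 formulation, so pressure comes from a single interpolation with no iteration or variable transformation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

gamma = 1.3                                # hypothetical ratio of specific heats
rho_grid = np.linspace(0.1, 10.0, 50)      # density grid (kg/m^3)
u_grid = np.linspace(1e5, 1e6, 50)         # internal energy grid (J/kg)

# Build the p(rho, u) table once; for an ideal gas p = (gamma - 1) * rho * u.
p_table = (gamma - 1.0) * rho_grid[:, None] * u_grid[None, :]
p_lookup = RegularGridInterpolator((rho_grid, u_grid), p_table)

rho, u = 2.5, 4.2e5                        # a flow state inside the table
p_interp = p_lookup((rho, u)).item()       # one interpolation, no iteration
p_exact = (gamma - 1.0) * rho * u
```

    The real scheme uses thermodynamically consistent piecewise polynomials rather than plain multilinear interpolation, but the calling pattern inside a flow solver is the same.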

  16. Is the microscopic stress computed from molecular simulations in mechanical equilibrium? (United States)

    Torres-Sánchez, Alejandro; Vanegas, Juan M.; Arroyo, Marino

    The microscopic stress field connects atomistic simulations with the mechanics of materials at the nano-scale through statistical mechanics. However, its definition remains ambiguous. In a recent work we showed that this is not only a theoretical problem, but rather that it greatly affects local stress calculations from molecular simulations. We find that popular definitions of the local stress, which are continuously being employed to understand the mechanics of various systems at the nanoscale, violate the continuum statements of mechanical equilibrium. We exemplify these facts in local stress calculations of defective graphene, lipid bilayers, and fibrous proteins. Furthermore, we propose a new physical and sound definition of the microscopic stress that satisfies the continuum equations of balance, irrespective of the many-body nature of the inter-atomic potential. Thus, our proposal provides an unambiguous link between discrete-particle models and continuum mechanics at the nanoscale.

  17. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K


    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more. The book covers resource optimization for multimedia clo

  18. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil


    ...of the membrane. Experiments indicate that biomembranes of eukaryotic cells may be laterally organized into small nanoscopic domains. This in-plane organization is expected to play an important role in a variety of physiological functions such as signaling, recruitment of specific proteins and endocytosis. However, mainly because of their complexity, the precise in-plane organization of lipids and proteins and their stability in biological membranes remain difficult to elucidate. This has reiterated the importance of understanding the equilibrium phase behavior and the kinetics of fluid multicomponent lipid membranes. The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol, and from the success of several computational models in predicting their behavior...

  19. GEM-E3: A computable general equilibrium model applied for Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Bahn, O. [Paul Scherrer Inst., CH-5232 Villigen PSI (Switzerland); Frei, C. [Ecole Polytechnique Federale de Lausanne (EPFL) and Paul Scherrer Inst. (Switzerland)


    The objectives of the European research project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of the GEM-E3 modules (economic module and environmental module). For the second objective, strategies to reduce CO{sub 2} emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how from the PSI GaBE project. Using GEM-E3 Switzerland, two strategies to reduce Swiss CO{sub 2} emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO{sub 2} emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve

  20. Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding. (United States)

    Montalvo-Acosta, Joel José; Cecchini, Marco


    The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
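
    A minimal numeric sketch of the statistical-mechanical relation underlying all such approaches, ΔG°(bind) = −RT ln K; the 1 nM dissociation constant below is an illustrative value, not a result from the paper.

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)
T = 298.15           # temperature, K
Kd = 1e-9            # hypothetical 1 nM dissociation constant (illustrative)

K_bind = 1.0 / Kd                       # binding constant, 1 M standard state
dG_bind = -R * T * math.log(K_bind)     # kJ/mol; negative = favorable binding
```

    For a 1 nM binder this gives roughly −51 kJ/mol; rigorous and approximate methods differ in how they estimate K, not in this final conversion.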

  1. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He


    In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, unreasonable price ratios and other issues. This paper combines the characteristics of Beijing's energy market, proposing the maximization of a society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  2. a new approach to concrete mix design using computer techniques

    African Journals Online (AJOL)

    Engr. Vincent okoloekwe

    The software package for this process can be accessed from the Internet [3]. METHODOLOGY. Modeling requires substantial data for a truly representative model to be developed. The approach adopted in this...

  3. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P


    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are


    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.


    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone...

  5. A Computer Aided System for Correlation and Prediction of Phase Equilibrium Data

    DEFF Research Database (Denmark)

    Nielsen, T.L.; Gani, Rafiqul


    based on mathematical programming. This paper describes the development of a computer aided system for the systematic derivation of appropriate property models to be used in the service role for a specified problem. As a first step, a library of well-known property models has been developed......

  6. Temporomandibular joint computed tomography: development of a direct sagittal technique

    Energy Technology Data Exchange (ETDEWEB)

    van der Kuijl, B.; Vencken, L.M.; de Bont, L.G.; Boering, G. (Univ. of Groningen, (Netherlands))


    Radiology plays an important role in the diagnosis of temporomandibular disorders. Different techniques are used with computed tomography offering simultaneous imaging of bone and soft tissues. It is therefore suited for visualization of the articular disk and may be used in patients with suspected internal derangements and other disorders of the temporomandibular joint. Previous research suggests advantages to direct sagittal scanning, which requires special positioning of the patient and a sophisticated scanning technique. This study describes the development of a new technique of direct sagittal computed tomographic imaging of the temporomandibular joint using a specially designed patient table and internal light visor positioning. No structures other than the patient's head are involved in the imaging process, and misleading artifacts from the arm or the shoulder are eliminated. The use of the scanogram allows precise correction of the condylar axis and selection of exact slice level.

  7. Bone tissue engineering scaffolding: computer-aided scaffolding techniques. (United States)

    Thavornyutikarn, Boonlom; Chantarapanich, Nattapon; Sitthiseripratip, Kriskrai; Thouas, George A; Chen, Qizhi

    Tissue engineering is essentially a technique for imitating nature. Natural tissues consist of three components: cells, signalling systems (e.g. growth factors) and extracellular matrix (ECM). The ECM forms a scaffold for its cells. Hence, the engineered tissue construct is an artificial scaffold populated with living cells and signalling molecules. A huge effort has been invested in bone tissue engineering, in which a highly porous scaffold plays a critical role in guiding bone and vascular tissue growth and regeneration in three dimensions. In the last two decades, numerous scaffolding techniques have been developed to fabricate highly interconnective, porous scaffolds for bone tissue engineering applications. This review provides an update on the progress of foaming technology of biomaterials, with special attention focused on computer-aided manufacturing (CAM) techniques. The article starts with a brief introduction to tissue engineering (Bone tissue engineering and scaffolds) and scaffolding materials (Biomaterials used in bone tissue engineering). After a brief review of conventional scaffolding techniques (Conventional scaffolding techniques), a number of CAM techniques are reviewed in great detail. For each technique, the structure and mechanical integrity of fabricated scaffolds are discussed in detail. Finally, the advantages and disadvantages of these techniques are compared (Comparison of scaffolding techniques) and summarised (Summary).

  8. A review of metaheuristic scheduling techniques in cloud computing

    Directory of Open Access Journals (Sweden)

    Mala Kalra


    Cloud computing has become a buzzword in the area of high-performance distributed computing, as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues which needs to be focused on for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources that optimize one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard due to the large solution space, and thus it takes a long time to find an optimal solution. There are no algorithms which may produce an optimal solution within polynomial time for these problems. In a cloud environment, it is preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proved to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: League Championship Algorithm (LCA) and the BAT algorithm.
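
    As a minimal sketch of one surveyed family, a toy genetic algorithm can map tasks to virtual machines to minimize makespan; all task lengths and GA parameters below are hypothetical, and real cloud schedulers juggle many more objectives and constraints.

```python
import random

random.seed(7)
task_len = [4, 7, 2, 9, 5, 3, 8, 6]     # task lengths (arbitrary units)
n_vms = 3                               # number of virtual machines

def makespan(assign):
    loads = [0] * n_vms
    for length, vm in zip(task_len, assign):
        loads[vm] += length
    return max(loads)                   # finish time of the busiest VM

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(n_vms) for _ in task_len] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(task_len))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.2:               # point mutation
                child[random.randrange(len(child))] = random.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best_makespan = makespan(evolve())
```

    With total work 44 over 3 VMs, no schedule can finish before 15 units; the GA's job is to approach that bound quickly rather than enumerate all 3^8 assignments.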

  9. DEMONIC programming: a computational language for single-particle equilibrium thermodynamics, and its formal semantics.

    Directory of Open Access Journals (Sweden)

    Samson Abramsky


    Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its ability to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law) and whether or not they display Landauer Erasure have been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer erasure, which are provably consistent with the second law of thermodynamics.
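
    The kT ln 2 erasure cost that the formalism recovers as a theorem is easy to evaluate numerically; room temperature (300 K) is an assumed value here.

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K (exact, 2019 SI)
T = 300.0              # assumed room temperature, K

erasure_cost = k_B * T * math.log(2)   # minimum energy to erase one bit, J
```

    At 300 K this comes to roughly 2.9 × 10⁻²¹ J per bit, the floor that any physical erasure process must pay.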

  10. Training Software in Artificial-Intelligence Computing Techniques (United States)

    Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene


    The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial provides step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, a user can interact with a robot and an obstacle course to verify how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. For the genetic-algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. For the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer having a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer having less memory, but some functions may not be executed properly.

  11. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity (United States)

    Pak, Chan-Gi; Lung, Shun-Fat


    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
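
    The system-identification step can be sketched with the simplest possible stand-in: fitting a first-order ARX model to simulated time-history data by least squares. The model structure and coefficients below are illustrative, not the aeroelastic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(200)        # input (e.g. excitation) time history
y = np.zeros(200)
for k in range(1, 200):             # simulated response: y[k] = a*y[k-1] + b*u[k-1]
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Least-squares regression of y[k] on (y[k-1], u[k-1]) recovers the model.
X = np.column_stack([y[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_est, b_est = coef
```

    In the flutter workflow, the identified model would then be coupled with the known structural model and the critical dynamic pressure iterated to convergence.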

  12. General method and thermodynamic tables for computation of equilibrium composition and temperature of chemical reactions (United States)

    Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E


    A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
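
    A minimal sketch of the equilibrium-composition problem, reduced to a single dissociation reaction A2 ⇌ 2A solved by bisection on the law of mass action; Kp and P are hypothetical values, and the method described above handles far larger product sets with simultaneous temperature determination.

```python
def residual(x, Kp, P):
    # Start from 1 mol of A2; extent of dissociation x in (0, 1) gives
    # n_A2 = 1 - x, n_A = 2x, n_tot = 1 + x, and the law of mass action reads
    # Kp = p_A**2 / p_A2 = 4 * x**2 * P / ((1 - x) * (1 + x)).
    return 4.0 * x * x * P / ((1.0 - x) * (1.0 + x)) - Kp

def solve_extent(Kp, P, tol=1e-12):
    lo, hi = 0.0, 1.0 - 1e-12        # residual is monotone increasing: bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid, Kp, P) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_eq = solve_extent(Kp=1.0, P=1.0)   # analytic answer for this case: 1/sqrt(5)
```

    The full method replaces this scalar root-find with a simultaneous Newton-type iteration over all species amounts and temperature.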

  13. Computer-Assisted Technique for Surgical Tooth Extraction

    Directory of Open Access Journals (Sweden)

    Hosamuddin Hamza


    Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankylosis or the presence of a bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC machining of metals (hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  14. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong


    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

  15. Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques (United States)



  16. Computer Package of Metasubject Results Valuation Techniques in Elementary School


    Ulanovskaya I.M.,


    The new Federal state educational standards define requirements for metasubject results of primary schooling. For their assessment, a diagnostic package of test methods was developed at the Psychological Institute of the Russian Academy of Education and the Moscow State University of Psychology and Education. A computer version of this package is provided. It includes the techniques "Permutations" (author A.Z. Zak), "Calendar" (authors G.A. Zuckerman and O.L. Obukhova), "Quests of Mathematics" (authors S.F. ...

  17. Integration of Computational Techniques for the Modelling of Signal Transduction


    Perez, Pedro Pablo Gonzalez; Garcia, Maura Cardenas; Gershenson, Carlos; Lagunez-Otero, Jaime


    A cell can be seen as an adaptive autonomous agent or as a society of adaptive autonomous agents, where each can exhibit a particular behaviour depending on its cognitive capabilities. We present an intracellular signalling model obtained by integrating several computational techniques into an agent-based paradigm. Cellulat, the model, takes into account two essential aspects of the intracellular signalling networks: cognitive capacities and a spatial organization. Exemplifying the functional...

  18. A survey of computational intelligence techniques in protein function prediction. (United States)

    Tiwari, Arvind Kumar; Srivastava, Rajeev


    In recent years, there has been massive growth in knowledge of previously unknown proteins with the advancement of high-throughput microarray technologies. Protein function prediction is the most challenging problem in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein is different from previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, used in wide areas of application such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers who have solved these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction.

  19. Adsorption of the herbicides diquat and difenzoquat on polyurethane foam: Kinetic, equilibrium and computational studies. (United States)

    Vinhal, Jonas O; Nege, Kassem K; Lage, Mateus R; de M Carneiro, José Walkimar; Lima, Claudio F; Cassella, Ricardo J


    This work reports a study of the adsorption of the herbicides diquat and difenzoquat from aqueous medium employing polyurethane foam (PUF) as the adsorbent and sodium dodecylsulfate (SDS) as the counter ion. The adsorption efficiency was shown to be dependent on the concentration of SDS in solution, since the formation of an ion-associate between the cationic herbicides (diquat and difenzoquat) and anionic dodecylsulfate is a fundamental step of the process. A computational study was carried out to identify the possible structure of the ion-associates that are formed in solution. They are probably formed by three units of dodecylsulfate bound to one unit of diquat, and two units of dodecylsulfate bound to one unit of difenzoquat. The results obtained also showed that 95% of both herbicides present in 45 mL of a solution containing 5.5 mg L⁻¹ could be retained by 300 mg of PUF. The experimental data were well adjusted to the Freundlich isotherm (r² ≥ 0.95) and to the pseudo-second-order kinetic equation. Also, the application of the Morris-Weber and Reichenberg equations indicated that an intraparticle diffusion process is active in the control of the adsorption kinetics. Copyright © 2017 Elsevier Inc. All rights reserved.
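
    A hedged sketch of the isotherm-fitting step: nonlinear least squares of the Freundlich model q = Kf·C^(1/n), applied here to synthetic data rather than the published measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, Kf, n_inv):
    # Freundlich isotherm q = Kf * C**(1/n); n_inv denotes 1/n
    return Kf * np.power(C, n_inv)

C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # equilibrium concentrations (mg/L)
q = 2.0 * np.power(C, 0.6)                # synthetic uptake data: Kf=2, 1/n=0.6

(Kf_fit, n_inv_fit), _ = curve_fit(freundlich, C, q, p0=[1.0, 1.0])
```

    With real data one would compare the fitted curve's r² against the Langmuir alternative, as the study does.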

  20. Predicting the buoyancy, equilibrium and potential swimming ability of giraffes by computational analysis. (United States)

    Henderson, Donald M; Naish, Darren


    Giraffes (Giraffa camelopardalis) are often stated to be unable to swim, and while few observations supporting this have ever been offered, we sought to test the hypothesis that giraffes exhibited a body shape or density unsuited for locomotion in water. We assessed the floating capability of giraffes by simulating their buoyancy with a three-dimensional mathematical/computational model. A similar model of a horse (Equus caballus) was used as a control, and its floating behaviour replicates the observed orientations of immersed horses. The floating giraffe model has its neck sub-horizontal, and the animal would struggle to keep its head clear of the water surface. Using an isometrically scaled-down giraffe model with a total mass equal to that of the horse, the giraffe's proportionally larger limbs have much higher rotational inertias than do those of horses, and their wetted surface areas are 13.5% greater relative to that of the horse, thus making rapid swimming motions more strenuous. The mean density of the giraffe model (960 g/L) is also higher than that of the horse (930 g/L), and closer to that causing negative buoyancy (1000 g/L). A swimming giraffe - forced into a posture where the neck is sub-horizontal and with a thorax that is pulled downwards by the large fore limbs - would not be able to move the neck and limbs synchronously as giraffes do when moving on land, possibly further hampering the animal's ability to move its limbs effectively underwater. We found that a full-sized, adult giraffe will become buoyant in water deeper than 2.8 m. While it is not impossible for giraffes to swim, we speculate that they would perform poorly compared to other mammals and are hence likely to avoid swimming if possible. (c) 2010. Published by Elsevier Ltd. All rights reserved.
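
    The buoyancy argument reduces to Archimedes' principle applied to the reported mean densities; the sketch below uses the abstract's values (g/L is numerically equal to kg/m³) to get the floating fraction of body volume submerged.

```python
rho_water = 1000.0      # kg/m^3; density at which buoyancy becomes negative
rho_giraffe = 960.0     # mean density of the giraffe model (from the abstract)
rho_horse = 930.0       # mean density of the horse model (from the abstract)

def submerged_fraction(rho_body):
    # Archimedes: a floating body displaces its own weight of water, so the
    # submerged volume fraction equals the density ratio.
    return rho_body / rho_water

giraffe_fraction = submerged_fraction(rho_giraffe)   # 96% of volume under water
horse_fraction = submerged_fraction(rho_horse)       # 93%
```

    The 3% density difference translates directly into less reserve buoyancy for the giraffe, consistent with the paper's conclusion that it would float, but poorly.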

  1. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)


    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
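
    The Fisher discriminant analysis step can be sketched directly with numpy; the two Gaussian "pixel" classes below are synthetic stand-ins for the signal and background jet-images.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 16                                   # 200 samples of 4x4 "images"
signal = rng.normal(loc=0.5, scale=1.0, size=(n, d))
background = rng.normal(loc=-0.5, scale=1.0, size=(n, d))

mu_s, mu_b = signal.mean(axis=0), background.mean(axis=0)
S_w = np.cov(signal.T) + np.cov(background.T)    # within-class scatter matrix
w = np.linalg.solve(S_w, mu_s - mu_b)            # Fisher discriminant direction

score_signal = float((signal @ w).mean())        # projections along w separate
score_background = float((background @ w).mean())
```

    In the paper the same projection is applied to preprocessed calorimeter images, and a cut on the discriminant score selects boosted W candidates.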

  2. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel


    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
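
    The attenuation law at the core of such a simulation is Beer-Lambert applied along each ray; a one-ray sketch with illustrative (not tabulated) attenuation coefficients.

```python
import math

# (attenuation coefficient mu [1/cm], path length [cm]) per material crossed;
# the values are illustrative, not real X-ray attenuation coefficients.
segments = [(0.2, 5.0), (1.5, 0.4), (0.2, 3.0)]

I0 = 1.0                                               # incident intensity
I = I0 * math.exp(-sum(mu * t for mu, t in segments))  # transmitted intensity
```

    A full simulator repeats this line integral for every detector pixel and, for polychromatic beams, sums the result over the energy spectrum.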

  3. Phase equilibrium engineering

    CERN Document Server

    Brignole, Esteban Alberto


    Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than their engineering applications. This book shifts the focus from using thermodynamic relationships to compute phase equilibria to designing and controlling the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and
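As a concrete instance of the phase-equilibrium computations this engineering view builds on, the isothermal-isobaric (PT) flash reduces, for constant K-values, to solving the classical Rachford-Rice equation for the vapour fraction. A sketch under that simplifying assumption (illustrative names; real flashes iterate K-values from an equation of state):

```python
def rachford_rice(z, K):
    """Solve sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0 for the
    vapour fraction beta by bisection (two-phase PT flash).
    z: feed mole fractions, K: equilibrium ratios y_i / x_i."""
    def g(beta):
        return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0          # g is monotonically decreasing in beta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def flash_compositions(z, K):
    """Liquid (x) and vapour (y) mole fractions at the converged beta."""
    beta = rachford_rice(z, K)
    x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]
    y = [Ki * xi for Ki, xi in zip(K, x)]
    return beta, x, y
```

For an equimolar binary feed with K = (2.0, 0.5) this converges to beta = 0.5, and the x and y vectors each sum to one, a quick consistency check on any flash routine.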

  4. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.


    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, developed in MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, with ~6000 km², wavy relief, and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was computed by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  5. Protein-RNA interactions: structural biology and computational modeling techniques. (United States)

    Jones, Susan


    RNA-binding proteins are functionally diverse within cells, being involved in RNA metabolism, translation, DNA damage repair, and gene regulation at both the transcriptional and post-transcriptional levels. Much has been learnt about their interactions with RNAs through structure determination techniques and computational modeling. This review gives an overview of the structural data currently available for protein-RNA complexes, and discusses the technical issues facing structural biologists working to solve their structures. The review focuses on three techniques used to solve the 3-dimensional structure of protein-RNA complexes at atomic resolution, namely X-ray crystallography, solution nuclear magnetic resonance (NMR) and cryo-electron microscopy (cryo-EM). The review then focuses on the main computational modeling techniques that use these atomic resolution data: discussing the prediction of RNA-binding sites on unbound proteins, the docking of proteins and RNAs, and the modeling of the molecular dynamics of these systems. In conclusion, the review looks at the future directions this field of research might take.

  6. Air pollution-induced health impacts on the national economy of China: demonstration of a computable general equilibrium approach. (United States)

    Wan, Yue; Yang, Hongwei; Masui, Toshihiko


    At the present time, ambient air pollution is a serious public health problem in China. Based on the concentration-response relationships provided by international and domestic epidemiologic studies, the authors estimated the mortality and morbidity induced by the ambient air pollution of 2000. To address the mechanism of the health impact on the national economy, the authors applied a computable general equilibrium (CGE) model, named AIM/Material China, containing 39 production sectors and 32 commodities. AIM/Material analyzes changes in gross domestic product (GDP), final demand, and production activity originating from health damages. If ambient air quality had met Grade II of China's air quality standard in 2000, the avoidable GDP loss would have been 0.38% of the national total, of which 95% was driven by labor loss. Comparatively, medical expenditure had less impact on the national economy, which is explained in terms of final demand by commodity and production activity by sector. Through a discussion of its applicability, the authors conclude that the CGE model is a suitable tool for assessing health impacts from the point of view of the national economy.

  7. Economic Impacts of Potential Foot and Mouth Disease Agro-terrorism in the United States: A Computable General Equilibrium Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oladosu, Gbadebo A [ORNL]; Rose, Adam [University of Southern California, Los Angeles]; Bumsoo, Lee [University of Illinois]


    The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations ranges between $37 billion (0.15% of 2006 baseline economic output) and $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in the agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.

  8. Reducing CO{sub 2}- emissions under fiscal retrenchment. A multi-cohort computable general equilibrium (CGE) model for Austria

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, K.; Steininger, K.W. [Department of Economics, University of Graz, Graz (Austria)]


    The stabilization of budget deficit and debt ratios by fiscal retrenchment, in order to fulfill the Maastricht criteria for the European Monetary Union (EMU), is a central focus in most European Union (EU) countries. At the same time, the national policy dimension of acute environmental problems such as global warming has receded in the public eye. The environmental dimension nonetheless remains urgent, and a re-evaluation of the prospects of CO{sub 2}-policy is needed against the background of the fiscal retrenchment required by supranational obligations. We do this for the small, open Austrian economy by constructing a dynamic multi-cohort computable general equilibrium (CGE) model that enables us to assess quantitatively the lifetime welfare impacts on the affected cohorts of three different options for using CO{sub 2}-permit revenues. The distribution of the welfare costs of (Toronto) CO{sub 2}-policy across cohorts differs significantly with the use chosen. This is explained by income, inheritance and price effects. 42 refs.

  9. Traffic simulations on parallel computers using domain decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Tentner, A.M.


    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow traffic simulations to be performed with the standard simulation package TRAF-NETSIM on a 128-node IBM SPx parallel supercomputer as well as on a cluster of SUN workstations. Whilst this particular parallel implementation is based on NETSIM, a microscopic traffic simulation model, the presented strategy is applicable to a broad class of traffic simulations. An outer iteration loop must be introduced in order to converge to a global solution. A performance study utilizing a scalable test network consisting of square grids is presented, which addresses the performance penalty introduced by the additional iteration loop.

  10. SLDR: a computational technique to identify novel genetic regulatory relationships. (United States)

    Yue, Zongliang; Wan, Ping; Huang, Hui; Xie, Zhan; Chen, Jake Y


    We developed a new computational technique called Step-Level Differential Response (SLDR) to identify genetic regulatory relationships. Our technique takes advantage of functional genomics data for the same species under different perturbation conditions, and is therefore complementary to current popular computational techniques. In particular, it can identify "rare" activation/inhibition relationship events that can be difficult to find in experimental results. In SLDR, we model each candidate target gene as being controlled by N binary-state regulators that lead to at most 2^N observable states ("step-levels") for the target. We applied SLDR to the GEO microarray data set GSE25644, which consists of 158 different mutant S. cerevisiae gene expression profiles. For each target gene t, we first clustered ordered samples into various clusters, each approximating an observable step-level of t, to screen out "de-centric" targets. Then, we took each gene x as a candidate regulator and aligned t to x in order to examine the step-level correlations between the low-expression set of x (Ro) and the high-expression set of x (Rh), finding max f(t, x) = |Ro - Rh| over all candidate x in the genome for each t. We thereby obtained activation and inhibition events from different combinations of Ro and Rh. Furthermore, we developed criteria for filtering out less confident regulators, estimated the number of regulators for each target t, and evaluated the top-ranking identified regulator-target relationships. Our results can be cross-validated against the Yeast Fitness database. SLDR is also computationally efficient, with O(N²) complexity. In summary, we believe SLDR can be applied to the mining of functional genomics big data for future network biology and network medicine applications.
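A toy version of the SLDR statistic |Ro - Rh| can be sketched as follows. This is a deliberate simplification: the published method works with step-level clusters rather than a median split, and all names here are illustrative.

```python
def sldr_score(target_expr, regulator_expr):
    """Toy |Ro - Rh| statistic: sort samples by the candidate
    regulator's expression, split them in half, and compare the
    target's mean response in the low half (Ro) against the high
    half (Rh). A large gap suggests a regulatory relationship."""
    pairs = sorted(zip(regulator_expr, target_expr))
    half = len(pairs) // 2
    ro = sum(t for _, t in pairs[:half]) / half
    rh = sum(t for _, t in pairs[half:]) / (len(pairs) - half)
    return abs(ro - rh)
```

Scanning this score over every candidate regulator x for a fixed target t, and keeping the maximizer, mirrors the "max f(t, x) over all candidate x" step described in the abstract.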

  11. Computer image processing - The Viking experience. [digital enhancement techniques (United States)

    Green, W. B.


    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.

  12. The analysis of gastric function using computational techniques

    CERN Document Server

    Young, P


    The work presented in this thesis was carried out at the Magnetic Resonance Centre, Department of Physics and Astronomy, University of Nottingham, between October 1996 and June 2000. This thesis describes the application of computerised techniques to the analysis of gastric function, in relation to Magnetic Resonance Imaging data. The implementation of a computer program enabling the measurement of motility in the lower stomach is described in Chapter 6. This method allowed the dimensional reduction of multi-slice image data sets into a 'Motility Plot', from which the motility parameters - the frequency, velocity and depth of contractions - could be measured. The technique was found to be simple, accurate and involved substantial time savings, when compared to manual analysis. The program was subsequently used in the measurement of motility in three separate studies, described in Chapter 7. In Study 1, four different meal types of varying viscosity and nutrient value were consumed by 12 volunteers. The aim of...

  13. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M


    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  14. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto


    The detection and recognition of objects in images is a key research topic in the computer vision community.  Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
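The core of template matching can be sketched with zero-mean normalized cross-correlation, shown here in one dimension for brevity (illustrative function names; the book covers far more robust variants, and 2-D images work the same way over patches):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between a patch and a
    template of the same size. Scores lie in [-1, 1]; 1 means a
    perfect match up to brightness offset and contrast scaling."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def best_match(signal, template):
    """Slide the template over a 1-D signal and return the offset
    with the highest NCC score."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template)
              for i in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

The normalization is what makes the technique usable on real imagery: a face patch photographed under brighter lighting still correlates strongly with the template.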

  15. Sampling the Denatured State of Polypeptides in Water, Urea, and Guanidine Chloride to Strict Equilibrium Conditions with the Help of Massively Parallel Computers. (United States)

    Meloni, Roberto; Camilloni, Carlo; Tiana, Guido


    The denatured state of polypeptides and proteins, stabilized by chemical denaturants like urea and guanidine chloride, displays residual secondary structure when studied by nuclear-magnetic-resonance spectroscopy. However, these experimental techniques are weakly sensitive, and thus molecular-dynamics simulations can be useful to complement the experimental findings. To sample the denatured state, we made use of massively-parallel computers and of a variant of the replica exchange algorithm, in which the different branches, connected with unbiased replicas, favor the formation and disruption of local secondary structure. The algorithm is applied to the second hairpin of GB1 in water, in urea, and in guanidine chloride. We show, with the help of different criteria, that the simulations converge to equilibrium. We find that urea and guanidine chloride, besides inducing some polyproline-II structure, have different effects on the hairpin. Urea disrupts the native region completely and stabilizes a state resembling a random coil, while guanidine chloride has a milder effect.

  16. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures. (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M


    The protein-folding problem has been studied extensively during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on the protein's biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of a protein from its sequence. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends in protein folding simulations from both perspectives, hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running these kinds of Soft Computing techniques.

  17. Teaching Computational Geophysics Classes using Active Learning Techniques (United States)

    Keers, H.; Rondenay, S.; Harlap, Y.; Nordmo, I.


    We give an overview of our experience in teaching two computational geophysics classes at the undergraduate level. The first class is, for most students, their first programming class, and assumes that the students have had an introductory course in geophysics. In this class the students are introduced to basic Matlab skills: the use of variables, basic array and matrix definition and manipulation, basic statistics, 1D integration, plotting of lines and surfaces, writing .m files, and basic debugging techniques. All of these concepts are applied to elementary but important problems in earthquake and exploration geophysics (including epicentre location, computation of travel-time curves for simple layered media, plotting of 1D and 2D velocity models, etc.). It is important to integrate the geophysics with the programming concepts: we found that this enhances students' understanding. Moreover, as this is a 3-year Bachelor program and this class is taught in the 2nd semester, there is little time for a class that focusses only on programming. The second class, which is optional and can be taken in the 4th or 6th semester, but is often also taken by Master students, extends the Matlab programming to include signal processing and ordinary and partial differential equations, again with an emphasis on geophysics (such as ray tracing and solving the acoustic wave equation). This class also contains a project in which the students have to write a brief paper on a topic in computational geophysics, preferably with programming examples. When teaching these classes we found that active learning techniques, in which the students actively participate in the class, individually, in pairs or in groups, are indispensable. We give a brief overview of the various activities that we developed when teaching these classes.
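One of the exercises mentioned, computing travel-time curves for a simple layered medium, reduces to two short formulas: the direct wave arrives at t = x/v, and the reflection from a single layer of thickness h at t = sqrt(x² + 4h²)/v. A sketch in Python rather than the Matlab used in the course (illustrative names; the course exercises are not reproduced here):

```python
import math

def travel_times(x, v, h):
    """Travel times (s) at source-receiver offset x (km) for a single
    layer of thickness h (km) and velocity v (km/s): the direct wave
    along the surface and the reflection from the layer's base."""
    direct = x / v
    reflected = math.sqrt(x * x + 4.0 * h * h) / v
    return direct, reflected

# Sampling a range of offsets gives the two curves students plot:
curve = [travel_times(x, 2.0, 1.0) for x in range(0, 11)]
```

At zero offset the reflection arrives at 2h/v (the two-way vertical travel time), and at large offsets the two curves converge, which is exactly the behaviour the plotted curves should show.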

  18. The economy-wide impact of pandemic influenza on the UK: a computable general equilibrium modelling experiment. (United States)

    Smith, Richard D; Keogh-Brown, Marcus R; Barnett, Tony; Tait, Joyce


    To estimate the potential economic impact of pandemic influenza, associated behavioural responses, school closures, and vaccination on the United Kingdom. A computable general equilibrium model of the UK economy was specified for various combinations of mortality and morbidity from pandemic influenza, vaccine efficacy, school closures, and prophylactic absenteeism using published data. The 2004 UK economy (the most up to date available with suitable economic data). The economic impact of various scenarios with different pandemic severity, vaccination, school closure, and prophylactic absenteeism specified in terms of gross domestic product, output from different economic sectors, and equivalent variation. The costs related to illness alone ranged between 0.5% and 1.0% of gross domestic product (£8.4bn to £16.8bn) for low fatality scenarios, 3.3% and 4.3% (£55.5bn to £72.3bn) for high fatality scenarios, and larger still for an extreme pandemic. School closure increases the economic impact, particularly for mild pandemics. If widespread behavioural change takes place and there is large-scale prophylactic absence from work, the economic impact would be notably increased with few health benefits. Vaccination with a pre-pandemic vaccine could save 0.13% to 2.3% of gross domestic product (£2.2bn to £38.6bn); a single dose of a matched vaccine could save 0.3% to 4.3% (£5.0bn to £72.3bn); and two doses of a matched vaccine could limit the overall economic impact to about 1% of gross domestic product for all disease scenarios. Balancing school closure against "business as usual" and obtaining sufficient stocks of effective vaccine are more important factors in determining the economic impact of an influenza pandemic than is the disease itself. Prophylactic absence from work in response to fear of infection can add considerably to the economic impact.

  19. Computable general equilibrium modelling of economic impacts from volcanic event scenarios at regional and national scale, Mt. Taranaki, New Zealand (United States)

    McDonald, G. W.; Cronin, S. J.; Kim, J.-H.; Smith, N. J.; Murray, C. A.; Procter, J. N.


    The economic impacts of volcanism extend well beyond the direct costs of loss of life and asset damage. This paper presents one of the first attempts to assess the economic consequences of disruption associated with volcanic impacts at a range of temporal and spatial scales using multi-regional and dynamic computable general equilibrium (CGE) modelling. Based on the last decade of volcanic research findings at Mt. Taranaki, three volcanic event scenarios (Tahurangi, Inglewood and Opua), differentiated by critical physical thresholds, were generated. In turn, the corresponding disruption economic impacts were calculated for each scenario. Under the Tahurangi scenario (annual probability of 0.01-0.02), a small-scale explosive (Volcanic Explosivity Index (VEI) 2-3) and dome-forming eruption, the economic impacts were negligible, with complete economic recovery experienced within a year. The larger Inglewood sub-Plinian to Plinian eruption scenario (VEI > 4, annualised probability of 0.003) produced significant impacts on the Taranaki region economy of $207 million (representing 4.0% of regional gross domestic product (GDP) 1 year after the event; 2007 New Zealand dollars), from which recovery will take around 5 years. The Opua scenario, the largest-magnitude volcanic hazard modelled, is a major flank collapse and debris avalanche event with an annual probability of 0.00018. The associated economic impacts of this scenario were $397 million (representing 7.7% of regional GDP 1 year after the event), with the Taranaki region economy suffering permanent structural changes. Our dynamic analysis illustrates that different economic impacts play out at different stages in a volcanic crisis. We also discuss the key strengths and weaknesses of our modelling along with potential extensions.

  20. Application of computer-aided designed/computer-aided manufactured techniques in reconstructing maxillofacial bony structures. (United States)

    Rustemeyer, Jan; Busch, Alexander; Sari-Rieger, Aynur


    Today, virtually planned surgery and computer-aided designed/computer-aided manufactured (CAD/CAM) tools to reconstruct bony structures are increasingly being applied in maxillofacial surgery. However, the criteria for or against using the CAD/CAM technique are debatable, since no evidence-based studies are available. In theory, the CAD/CAM technique should be applied to complex cases. In this case report, we present our experiences and discuss the criteria for its application. Three cases are reported in which subjects received an osseous reconstruction using CAD/CAM techniques. In the first case, resection of the mandibular body and ramus was carried out, and reconstruction with a vascularised iliac bone transplant was performed. During surgery, a repositioning of the ipsilateral condyle was necessary. The second case comprised a wide mandibular reconstruction together with a repositioning of the condyles and the soft tissue chin using a two-segment osteomyocutaneous fibula flap. In the third case, a two-flap technique consisting of a double-barrelled osseous fibula flap and a radial forearm flap was applied to cover a wide palatine defect. Our experience suggests that the CAD/CAM technique provides an accurate and useful treatment not only in complex cases, but also in simpler ones, to achieve an anatomically correct shape of the bone transplant and to reposition adjacent structures.

  1. Investigation of safety analysis methods using computer vision techniques (United States)

    Shirazi, Mohammad Shokrolah; Morris, Brendan Tran


    This work investigates safety analysis methods using computer vision techniques. A vision-based tracking system is developed to provide the trajectories of road users, including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and post-encroachment time (PET), two important safety measurements. The corresponding algorithms are presented, and their advantages and drawbacks are shown through their success in capturing conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimates of TTC and PET are shown for 1-h monitoring of a Las Vegas intersection. Finally, the idea of an intersection safety map is introduced, and TTC values of two different intersections are estimated for 1 day from 8:00 a.m. to 6:00 p.m.
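The two safety measurements can be written down directly from their definitions. A minimal sketch with illustrative names (the paper derives them from tracked trajectories, not from the scalar inputs used here):

```python
def time_to_collision(gap, speed_follower, speed_leader):
    """TTC: time until two road users collide if both keep their
    current speeds. Defined only when the follower is closing the
    gap; returns None otherwise."""
    closing_speed = speed_follower - speed_leader
    return gap / closing_speed if closing_speed > 0 else None

def post_encroachment_time(t_first_leaves, t_second_arrives):
    """PET: time between the first user leaving the conflict area
    and the second user arriving at it. Small PET = near miss."""
    return t_second_arrives - t_first_leaves
```

In a vision pipeline these are evaluated frame by frame from the tracked positions and speeds, and small TTC or PET values flag the conflict events mentioned above.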

  2. Techniques for Engaging Students in an Online Computer Programming Course

    Directory of Open Access Journals (Sweden)

    Eman M. El-Sheikh


    Full Text Available Many institutions of higher education are significantly expanding their online program and course offerings to deal with the rapidly increasing demand for flexible educational alternatives. One of the main challenges that faculty who teach online courses face is determining how to engage students in an online environment. Teaching computer programming effectively requires demonstration of programming techniques, examples, and environments, and interaction with the students, making online delivery even more challenging. This paper describes efforts to engage students in an online introductory programming course at our institution. The tools and methods used to promote student engagement in the course are described, in addition to the lessons learned from the design and delivery of the online course and opportunities for future work.

  3. Scale Reduction Techniques for Computing Maximum Induced Bicliques

    Directory of Open Access Journals (Sweden)

    Shahram Shahinpour


    Full Text Available Given a simple, undirected graph G, a biclique is a subset of vertices inducing a complete bipartite subgraph in G. In this paper, we consider two associated optimization problems, the maximum biclique problem, which asks for a biclique of the maximum cardinality in the graph, and the maximum edge biclique problem, aiming to find a biclique with the maximum number of edges in the graph. These NP-hard problems find applications in biclustering-type tasks arising in complex network analysis. Real-life instances of these problems often involve massive, but sparse networks. We develop exact approaches for detecting optimal bicliques in large-scale graphs that combine effective scale reduction techniques with integer programming methodology. Results of computational experiments with numerous real-life network instances demonstrate the performance of the proposed approach.
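The feasibility check shared by both problems, whether two vertex sets induce a complete bipartite subgraph, is straightforward, and is the kind of subroutine an exact solver uses to validate candidate solutions. A sketch with an illustrative adjacency-set representation (not the paper's integer-programming formulation):

```python
def is_induced_biclique(adj, left, right):
    """Check that vertex sets `left` and `right` induce a complete
    bipartite subgraph of the graph given by adjacency sets `adj`:
    every cross pair must be adjacent, and neither side may contain
    an internal edge (the 'induced' requirement)."""
    for u in left:                      # all cross edges present?
        for v in right:
            if v not in adj[u]:
                return False
    for side in (left, right):          # no edges within a side?
        for u in side:
            for v in side:
                if u != v and v in adj[u]:
                    return False
    return True
```

On the 4-cycle 0-1-2-3-0, the sets {0, 2} and {1, 3} pass this check (the whole cycle is a K2,2), while {0, 1} and {2, 3} fail it.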

  4. Computer vision techniques for rotorcraft low-altitude flight (United States)

    Sridhar, Banavar; Cheng, Victor H. L.


    A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  5. Evolutionary Computing Based Area Integration PWM Technique for Multilevel Inverters

    Directory of Open Access Journals (Sweden)

    S. Jeevananthan


    Full Text Available The existing multilevel carrier-based pulse width modulation (PWM) strategies have no special provisions to offer quality output; moreover, lower-order harmonics are introduced into the spectrum, especially at low switching frequencies. This paper proposes a novel multilevel PWM strategy to combine the advantages of low-frequency switching and reduced total harmonic distortion (THD). The basic idea of the proposed area integration PWM (AIPWM) method is that the area of the required sinusoidal (fundamental) output and the total area of the output pulses are made equal. An attempt is made to incorporate two soft computing techniques, namely evolutionary programming (EP) and genetic algorithms (GA), in the generation and placement of switching pulses. The results of a prototype seven-level cascaded inverter experimented with the novel PWM strategies are presented.
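The equal-area idea can be sketched as follows, under the simplifying assumption of one pulse per equal sub-interval of a half cycle and a unit pulse level (illustrative names only; the paper's EP/GA-based pulse placement is not reproduced here):

```python
import math

def aipwm_pulse_widths(n_pulses, amplitude=1.0, level=1.0):
    """Equal-area principle behind AIPWM: over each of n equal
    sub-intervals of a half cycle [0, pi], choose the pulse width so
    that the pulse area (level * width) equals the area under the
    reference sine in that sub-interval."""
    widths = []
    for k in range(n_pulses):
        a = k * math.pi / n_pulses
        b = (k + 1) * math.pi / n_pulses
        sine_area = amplitude * (math.cos(a) - math.cos(b))
        widths.append(sine_area / level)
    return widths
```

Summed over the half cycle, the pulse areas reproduce the total area under the sine (2 * amplitude / level), and the widths are symmetric about the peak, which is the fundamental-matching property the abstract describes.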

  6. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. (United States)

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen


    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain.

  7. Computational Swarming: A Cultural Technique for Generative Architecture

    Directory of Open Access Journals (Sweden)

    Sebastian Vehlken


    After a first wave of digital architecture in the 1990s, the last decade saw approaches in which agent-based modelling and simulation (ABM) was used for generative strategies in architectural design. By taking advantage of the self-organisational capabilities of computational agent collectives, whose global behaviour emerges from the local interaction of a large number of relatively simple individuals (as it does, for instance, in animal swarms), architects are able to understand buildings and urbanscapes in a novel way as complex spaces that are constituted by the movement of multiple material and informational elements. As a major, zoo-technological branch of ABM, Computational Swarm Intelligence (SI) coalesces all kinds of architectural elements – materials, people, environmental forces, traffic dynamics, etc. – into a collective population. Thereby, SI and ABM initiate a shift from geometric or parametric planning to time-based and less prescriptive software tools. Agent-based applications of this sort are used to model solution strategies in a number of areas where opaque and complex problems present themselves – from epidemiology to logistics, and from market simulations to crowd control. This article seeks to conceptualise SI and ABM as a fundamental and novel cultural technique for governing dynamic processes, taking their employment in generative architectural design as a concrete example. In order to avoid a rather conventional application of philosophical theories to this field, the paper explores how the procedures of such technologies can be understood in relation to the media-historical concept of Cultural Techniques.

  8. An experimental modal testing/identification technique for personal computers (United States)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph


    A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method combines an identification algorithm, the Eigensystem Realization Algorithm (ERA), with an estimation algorithm called Minimum Model Error (MME). The identification ability of this algorithm is compared with ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.
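
    A minimal sketch of the ERA step alone, run on synthetic single-mode data (the paper's method additionally filters the measurements with the MME estimator, which is not shown; all sizes and values here are illustrative):

```python
import numpy as np

def era_poles(y, dt, order=2, rows=20, cols=20):
    """Minimal Eigensystem Realization Algorithm (ERA) sketch.

    y: impulse-response (Markov parameter) samples; returns natural
    frequencies (Hz) and damping ratios of the identified modes.
    """
    H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, V = U[:, :order], s[:order], Vt[:order].T
    S = np.diag(1.0 / np.sqrt(s))
    A = S @ U.T @ H1 @ V @ S                   # realized discrete-time state matrix
    lam = np.log(np.linalg.eigvals(A)) / dt    # continuous-time poles
    wn = np.abs(lam)
    return wn / (2 * np.pi), -lam.real / wn

# synthetic single-mode impulse response: 5 Hz, 2% damping
dt, wn0, zeta0 = 0.01, 2 * np.pi * 5.0, 0.02
t = np.arange(60) * dt
h = np.exp(-zeta0 * wn0 * t) * np.sin(wn0 * np.sqrt(1 - zeta0 ** 2) * t)
freqs, damps = era_poles(h, dt)
```

    On noiseless data of exact rank, the realized state matrix reproduces the modal poles essentially to machine precision; the MME front end matters when the measurements are noisy.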

  9. Analysis of Classical Encryption Techniques in Cloud Computing

    National Research Council Canada - National Science Library

    Muhammad Yasir Shabir Asif Iqbal Zahid Mahmood Ata Ullah Ghafoor


    Cloud computing has become a significant computing model in the IT industry. In this emerging model, computing resources such as software, hardware, networking, and storage can be accessed anywhere in the world on a pay-per-use basis...

  10. Equilibrium Arrival Times to Queues

    DEFF Research Database (Denmark)

    Breinbjerg, Jesper; Østerdal, Lars Peter

    a symmetric (mixed) Nash equilibrium, and show that there is at most one symmetric equilibrium. We provide a numerical method to compute this equilibrium and demonstrate by a numerical example that the social efficiency can be lower than the efficiency induced by a similar queueing system that serves customers...

  11. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL


    Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and performance tuning in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  12. Computation of threshold conditions for epidemiological models and global stability of the disease-free equilibrium (DFE). (United States)

    Kamgang, Jean Claude; Sallet, Gauthier


    One goal of this paper is to give an algorithm for computing a threshold condition for epidemiological systems arising from compartmental deterministic modeling. We calculate a threshold condition T(0) of the parameters of the system such that if T(0) < 1, the DFE is locally asymptotically stable, and if T(0) > 1, the DFE is unstable. The second objective, by adding some reasonable assumptions, is to give, depending on the model, necessary and sufficient conditions for global asymptotic stability (GAS) of the DFE. In many cases, we can prove that a necessary and sufficient condition for the global asymptotic stability of the DFE is R(0) ≤ 1, where R(0) is the basic reproduction number [O. Diekmann, J.A.P. Heesterbeek, Mathematical Epidemiology of Infectious Diseases: Model Building, Analysis and Interpretation, Wiley, New York, 2000]. To illustrate our results, we apply our techniques to examples taken from the literature. In these examples we improve the results already obtained for the GAS of the DFE. We show that our algorithm is relevant for high dimensional epidemiological models.
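
    The related, standard next-generation-matrix threshold R(0) is computed as the spectral radius of F V^{-1}, with F and V linearized at the DFE. A sketch with an illustrative SEIR block (the paper's T(0) is a different, algorithmically derived condition; the parameter values below are made up):

```python
import numpy as np

def basic_reproduction_number(F, V):
    """R0 as the spectral radius of the next-generation matrix F V^{-1}.

    F: new-infection rates, V: transition rates between infected
    compartments, both evaluated at the disease-free equilibrium (DFE).
    """
    K = F @ np.linalg.inv(V)               # next-generation matrix
    return max(abs(np.linalg.eigvals(K)))

# SEIR example: exposed/infectious block at the DFE (illustrative values)
beta, sigma, gamma = 0.3, 0.2, 0.1
F = np.array([[0.0, beta], [0.0, 0.0]])
V = np.array([[sigma, 0.0], [-sigma, gamma]])
R0 = basic_reproduction_number(F, V)       # beta/gamma here
```

    For this two-compartment block the spectral radius reduces analytically to beta/gamma, so the DFE threshold is crossed exactly when the transmission rate exceeds the recovery rate.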

  13. Computer technique for simulating the combustion of cellulose and other fuels (United States)

    Andrew M. Stein; Brian W. Bauske


    A computer method has been developed for simulating the combustion of wood and other cellulosic fuels. The products of combustion are used as input for a convection model that simulates real fires. The method allows the chemical process to proceed to equilibrium and then examines the effects of mass addition and repartitioning on the fluid mechanics of the convection...

  14. Multidetector computed tomography triphasic evaluation of the liver before transplantation: importance of equilibrium phase washout and morphology for characterizing hypervascular lesions. (United States)

    Liu, Yueyi I; Kamaya, Aya; Jeffrey, R Brooke; Shin, Lewis K


    We aim to identify the sensitivity and positive predictive value (PPV) of arterial phase imaging in detecting hepatocellular carcinoma (HCC) and determine the added value of portal venous and equilibrium phase imaging and lesion morphology characterization. We reviewed all patients who underwent liver transplantation at our institution that had a triphasic multidetector computed tomography examination within 6 months of transplantation. Forty-seven hypervascular lesions were identified in 24 patients. Imaging findings were correlated with explant pathologic correlation. Hypervascularity in the arterial phase resulted in sensitivity of 87.5% and PPV of 29.8%. The presence of washout in the equilibrium phase increased the PPV to 92.9% with a slight decrease in sensitivity (81.3%). The negative predictive value of hypervascular lesions without washout in the equilibrium phase was 97.1%. There was significant correlation between larger lesions and HCC and between round lesions and HCC. The presence of washout in the equilibrium phase is a better indicator of malignancy.
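
    Sensitivity and PPV follow directly from per-lesion counts. A sketch with hypothetical counts chosen only to be consistent with the reported 87.5% sensitivity and 29.8% PPV (they are not the study's tabulated data):

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity and positive predictive value from lesion counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# hypothetical counts: 14 HCC detected, 2 missed, 33 hypervascular non-HCC
sens, ppv = sensitivity_ppv(14, 33, 2)
```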


    Directory of Open Access Journals (Sweden)

    Silvely S. Néia

    The Vehicle Routing Problem (VRP) is a classical problem, and when the number of customers is very large, the task of finding the optimal solution can be extremely complex. It is still necessary to find an effective way to evaluate the quality of solutions when there is no known optimal solution. This work presents a suggestion to analyze the quality of vehicle routes based only on their geometric properties. The proposed descriptors aim to be invariant with respect to the number of customers, the number of vehicles and the size of the covered area. Applying the methodology proposed in this work, it is possible to obtain the route and then to evaluate the quality of solutions obtained using computer vision. Despite considering problems with different configurations for the number of customers, vehicles and service area, the results obtained with the experiments show that the proposal is useful for classifying the routes into good or bad classes. A visual analysis was performed using the Parallel Coordinates and Viz3D techniques and then a classification was performed by a Backpropagation Neural Network, which indicated an accuracy rate of 99.87%.

  16. Cardiac Computed Tomography Radiomics: A Comprehensive Review on Radiomic Techniques. (United States)

    Kolossváry, Márton; Kellermayer, Miklós; Merkely, Béla; Maurovich-Horvat, Pál


    Radiologic images are vast three-dimensional data sets in which each voxel of the underlying volume represents distinct physical measurements of a tissue-dependent characteristic. Advances in technology allow radiologists to image pathologies with unforeseen detail, thereby further increasing the amount of information to be processed. Even though the imaging modalities have advanced greatly, our interpretation of the images has remained essentially unchanged for decades. We have arrived in the era of precision medicine where even slight differences in disease manifestation are seen as potential target points for new intervention strategies. There is a pressing need to improve and expand the interpretation of radiologic images if we wish to keep up with the progress in other diagnostic areas. Radiomics is the process of extracting numerous quantitative features from a given region of interest to create large data sets in which each abnormality is described by hundreds of parameters. From these parameters datamining is used to explore and establish new, meaningful correlations between the variables and the clinical data. Predictive models can be built on the basis of the results, which may broaden our knowledge of diseases and assist clinical decision making. Radiomics is a complex subject that involves the interaction of different disciplines; our objective is to explain commonly used radiomic techniques and review current applications in cardiac computed tomography imaging.
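
    The feature-extraction step that radiomics automates can be as simple as a co-occurrence statistic. A sketch of one classic texture feature, gray-level co-occurrence matrix (GLCM) contrast for horizontally adjacent pixels (illustrative of the kind of descriptor meant, not a specific feature from this review):

```python
import numpy as np

def glcm_contrast(q, levels):
    """GLCM contrast for horizontal neighbors; one example of the hundreds
    of quantitative descriptors a radiomics pipeline extracts per region.

    q: 2D array of quantized gray levels in [0, levels).
    """
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                         # joint probability of level pairs
    lv = np.arange(levels)
    return float(((lv[:, None] - lv[None, :]) ** 2 * glcm).sum())

flat = np.zeros((4, 4), dtype=int)             # homogeneous region
checker = np.indices((4, 4)).sum(axis=0) % 2   # maximally textured region
```

    A homogeneous region scores zero contrast while a checkerboard scores the maximum, which is the sense in which such features quantify texture.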


    Directory of Open Access Journals (Sweden)

    Dalibor Petković


    The paper investigates the accuracy of an adaptive neuro-fuzzy computing technique for precipitation estimation. The monthly precipitation data from 29 synoptic stations in Serbia during 1946-2012 are used as case studies. Even though a number of mathematical functions have been proposed for modeling precipitation estimation, these models still suffer from disadvantages such as being very demanding in terms of calculation time. An artificial neural network (ANN) can be used as an alternative to the analytical approach, since it offers advantages such as requiring no knowledge of internal system parameters, compact solution of multi-variable problems and fast calculation. Since precipitation estimation is a crucial problem, this paper presents a process constructed to simulate precipitation with an adaptive neuro-fuzzy inference system (ANFIS) method. ANFIS is a specific type of the ANN family and shows very good learning and prediction capabilities, which makes it an efficient tool for dealing with uncertainties encountered in any system such as precipitation. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system (FIS). This intelligent algorithm is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.

  18. A computer program for two-dimensional and axisymmetric nonreacting perfect gas and equilibrium chemically reacting laminar, transitional and/or turbulent boundary layer flows (United States)

    Miner, E. W.; Anderson, E. C.; Lewis, C. H.


    A computer program is described in detail for laminar, transitional, and/or turbulent boundary-layer flows of non-reacting (perfect gas) and reacting gas mixtures in chemical equilibrium. An implicit finite difference scheme was developed for both two dimensional and axisymmetric flows over bodies, and in rocket nozzles and hypervelocity wind tunnel nozzles. The program, program subroutines, variables, and input and output data are described. Also included is the output from a sample calculation of fully developed turbulent, perfect gas flow over a flat plate. Input data coding forms and a FORTRAN source listing of the program are included. A method is discussed for obtaining thermodynamic and transport property data which are required to perform boundary-layer calculations for reacting gases in chemical equilibrium.

  19. Equilibrium and Termination

    Directory of Open Access Journals (Sweden)

    Nicolas Oury


    We present a reduction of the termination problem for a Turing machine (in the simplified form of the Post correspondence problem) to the problem of determining whether a continuous-time Markov chain presented as a set of Kappa graph-rewriting rules has an equilibrium. It follows that the problem of whether a computable CTMC is dissipative (i.e., does not have an equilibrium) is undecidable.

  20. Hybrid computer technique yields random signal probability distributions (United States)

    Cameron, W. D.


    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
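
    A digital counterpart of this analysis (a sketch of the idea, not the hybrid system itself) histograms the raw samples of a random signal for the instantaneous-amplitude distribution, and its local maxima for the peak-amplitude distribution:

```python
import numpy as np

def amplitude_histograms(x, bins=40):
    """Empirical probability distributions of instantaneous and peak
    amplitudes of a sampled random signal."""
    inst, _ = np.histogram(x, bins=bins, density=True)
    interior = x[1:-1]
    peaks = interior[(interior > x[:-2]) & (interior > x[2:])]  # local maxima
    peak, _ = np.histogram(peaks, bins=bins, density=True)
    return inst, peak

rng = np.random.default_rng(0)
sig = rng.standard_normal(10000)
inst_pdf, peak_pdf = amplitude_histograms(sig)
```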

  1. Comparison of T1 mapping techniques for ECV quantification. Histological validation and reproducibility of ShMOLLI versus multibreath-hold T1 quantification equilibrium contrast CMR. (United States)

    Fontana, Marianna; White, Steve K; Banypersad, Sanjay M; Sado, Daniel M; Maestrini, Viviana; Flett, Andrew S; Piechnik, Stefan K; Neubauer, Stefan; Roberts, Neil; Moon, James C


    Myocardial extracellular volume (ECV) is elevated in fibrosis or infiltration and can be quantified by measuring the haematocrit with pre and post contrast T1 at sufficient contrast equilibrium. Equilibrium CMR (EQ-CMR), using a bolus-infusion protocol, has been shown to provide robust measurements of ECV using a multibreath-hold T1 pulse sequence. Newer, faster sequences for T1 mapping promise whole heart coverage and improved clinical utility, but have not been validated. Multibreath-hold T1 quantification with heart rate correction and single breath-hold T1 mapping using Shortened Modified Look-Locker Inversion recovery (ShMOLLI) were used in equilibrium contrast CMR to generate ECV values and compared in 3 ways. Firstly, both techniques were compared in a spectrum of disease with variable ECV expansion (n=100, 50 healthy volunteers, 12 patients with hypertrophic cardiomyopathy, 18 with severe aortic stenosis, 20 with amyloid). Secondly, both techniques were correlated to human histological collagen volume fraction (CVF%, n=18, severe aortic stenosis biopsies). Thirdly, an assessment of test:retest reproducibility of the 2 CMR techniques was performed 1 week apart in individuals with widely different ECVs (n=10 healthy volunteers, n=7 amyloid patients). More patients were able to perform ShMOLLI than the multibreath-hold technique (6% unable to breath-hold). ECV calculated by multibreath-hold T1 and ShMOLLI showed strong correlation (r(2)=0.892), little bias (bias -2.2%, 95%CI -8.9% to 4.6%) and good agreement (ICC 0.922, range 0.802 to 0.961). ECV correlated with histological CVF% by multibreath-hold ECV (r(2)=0.589) but better by ShMOLLI ECV (r(2)=0.685). Inter-study reproducibility demonstrated that ShMOLLI ECV trended towards greater reproducibility than the multibreath-hold ECV, although this did not reach statistical significance (95%CI -4.9% to 5.4% versus 95%CI -6.4% to 7.3% respectively, p=0.21). ECV quantification by single breath-hold ShMOLLI T1
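
    The underlying ECV calculation combines the haematocrit with the pre- to post-contrast change in relaxation rate (R1 = 1/T1) of myocardium relative to blood; a sketch with illustrative T1 values in ms (not values from the study):

```python
def ecv_fraction(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, haematocrit):
    """ECV = (1 - Hct) * (delta-R1 of myocardium / delta-R1 of blood),
    the standard equilibrium-contrast ECV formula, with R1 = 1/T1."""
    dr1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    dr1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - haematocrit) * dr1_myo / dr1_blood

# illustrative pre/post T1 values (ms) and haematocrit
ecv = ecv_fraction(950.0, 500.0, 1500.0, 300.0, 0.42)
```

    With these example inputs the myocardial ECV comes out near the normal range of roughly 20-30%; elevated values indicate fibrosis or infiltration.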

  2. Comparison of T1 mapping techniques for ECV quantification. Histological validation and reproducibility of ShMOLLI versus multibreath-hold T1 quantification equilibrium contrast CMR (United States)


    Background Myocardial extracellular volume (ECV) is elevated in fibrosis or infiltration and can be quantified by measuring the haematocrit with pre and post contrast T1 at sufficient contrast equilibrium. Equilibrium CMR (EQ-CMR), using a bolus-infusion protocol, has been shown to provide robust measurements of ECV using a multibreath-hold T1 pulse sequence. Newer, faster sequences for T1 mapping promise whole heart coverage and improved clinical utility, but have not been validated. Methods Multibreath-hold T1 quantification with heart rate correction and single breath-hold T1 mapping using Shortened Modified Look-Locker Inversion recovery (ShMOLLI) were used in equilibrium contrast CMR to generate ECV values and compared in 3 ways. Firstly, both techniques were compared in a spectrum of disease with variable ECV expansion (n=100, 50 healthy volunteers, 12 patients with hypertrophic cardiomyopathy, 18 with severe aortic stenosis, 20 with amyloid). Secondly, both techniques were correlated to human histological collagen volume fraction (CVF%, n=18, severe aortic stenosis biopsies). Thirdly, an assessment of test:retest reproducibility of the 2 CMR techniques was performed 1 week apart in individuals with widely different ECVs (n=10 healthy volunteers, n=7 amyloid patients). Results More patients were able to perform ShMOLLI than the multibreath-hold technique (6% unable to breath-hold). ECV calculated by multibreath-hold T1 and ShMOLLI showed strong correlation (r2=0.892), little bias (bias -2.2%, 95%CI -8.9% to 4.6%) and good agreement (ICC 0.922, range 0.802 to 0.961). ECV correlated with histological CVF% by multibreath-hold ECV (r2=0.589) but better by ShMOLLI ECV (r2=0.685). Inter-study reproducibility demonstrated that ShMOLLI ECV trended towards greater reproducibility than the multibreath-hold ECV, although this did not reach statistical significance (95%CI -4.9% to 5.4% versus 95%CI -6.4% to 7.3% respectively, p=0.21). 
Conclusions ECV quantification by

  3. A review of computer-aided design/computer-aided manufacture techniques for removable denture fabrication. (United States)

    Bilgin, Mehmet Selim; Baytaroğlu, Ebru Nur; Erdem, Ali; Dilber, Erhan


    The aim of this review was to investigate usage of computer-aided design/computer-aided manufacture (CAD/CAM) technologies, such as milling and rapid prototyping (RP), for removable denture fabrication. An electronic search was conducted in the PubMed/MEDLINE, ScienceDirect, Google Scholar, and Web of Science databases. Databases were searched from 1987 to 2014. The search was performed using a variety of keywords including CAD/CAM, complete/partial dentures, RP, rapid manufacturing, digitally designed, milled, computerized, and machined. The identified developments (in chronological order), techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication are summarized. The initial keyword search identified 78 publications. The abstracts of these 78 articles were screened for relevance, and 52 publications were selected for detailed reading. The full texts of these articles were obtained and examined in detail. In total, 40 articles that discussed the techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication were incorporated in this review, and 16 of these papers are summarized in the table. Following review of all relevant publications, it can be concluded that current innovations and technological developments in CAD/CAM and RP allow the digital planning and manufacturing of removable dentures from start to finish. According to the literature, CAD/CAM techniques and supportive maxillomandibular relationship transfer devices are developing rapidly. In the near future, fabricating removable dentures may become primarily a matter of medical informatics rather than of technical staff and manual procedures. However, the methods still have several limitations.

  4. Toward a Practical Technique to Halt Multiple Virus Outbreaks on Computer Networks


    Hole, Kjell Jørgen


    The author analyzes a technique to prevent multiple simultaneous virus epidemics on any vulnerable computer network with inhomogeneous topology. The technique immunizes a small fraction of the computers and utilizes diverse software platforms to halt the virus outbreaks. The halting technique is of practical interest since a network's detailed topology need not be known.
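
    One generic immunization strategy that, like the technique described, avoids requiring knowledge of the detailed topology is acquaintance immunization: immunize a random neighbor of randomly chosen nodes, which tends to select hubs. This is a sketch of that general idea, not the author's exact scheme (which additionally diversifies software platforms):

```python
import random

def acquaintance_immunization(adjacency, fraction=0.05, seed=0):
    """Pick random nodes and immunize one random neighbor of each until the
    target fraction of the network is immune. Hubs are reached with high
    probability on inhomogeneous networks without any global topology map."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    target = max(1, int(fraction * len(nodes)))
    immune = set()
    while len(immune) < target:
        n = rng.choice(nodes)
        if adjacency[n]:
            immune.add(rng.choice(sorted(adjacency[n])))
    return immune

# toy star network: node 0 is the hub (illustrative)
network = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
immune = acquaintance_immunization(network, fraction=0.2)
```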

  5. Computer vision techniques for rotorcraft low altitude flight (United States)

    Sridhar, Banavar


    Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles, presents challenging problems. Research is described which applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for the obstacle avoidance in an automatic guidance system and as advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  6. [Development of computer aided forming techniques in manufacturing scaffolds for bone tissue engineering]. (United States)

    Wei, Xuelei; Dong, Fuhui


    To review recent advance in the research and application of computer aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over last decade have focused on computer aided forming techniques for bone scaffold construction using various scaffold materials, which is based on computer aided design (CAD) and bone scaffold rapid prototyping (RP). CAD include medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for the CAD. RP techniques include fused deposition modeling, three dimensional printing, selected laser sintering, three dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With rapid development of molding and forming techniques, computer aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.

  7. The Inverted-Triangle Technique Of Converting The Computer ...

    African Journals Online (AJOL)

    Number system is the method of writing numerals to represent values. It is an integral part of computer science. It is therefore, necessary to provide computer scientists with a good understanding of number system concepts. Several methods exist in converting from one number system to another. In this paper, we introduce ...
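
    For instance, the familiar repeated-division method converts a decimal integer to any base (the paper's inverted-triangle technique is an alternative presentation of such conversions; only the generic method is shown here):

```python
def to_base(n, base):
    """Digits of a non-negative integer n in the given base, most-significant
    first, via repeated division by the base."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, base)   # r is the next least-significant digit
        digits.append(r)
    return digits[::-1]
```

    For example, 13 in binary is 1101, and 255 in hexadecimal is FF (digits 15, 15).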

  8. The equilibrium size distribution of rouleaux.


    Perelson, A S; Wiegel, F.W.


    Rouleaux are formed by the aggregation of red blood cells in the presence of macromolecules that bridge the membranes of adherent erythrocytes. We compute the size and degree of branching of rouleaux for macroscopic systems in thermal equilibrium in the absence of fluid flow. Using techniques from statistical mechanics, analytical expressions are derived for (a) the average number of rouleaux consisting of n cells and having m branch points; (b) the average number of cells per rouleau; (c) th...

  9. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard


    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  10. Generalized Boltzmann solution for non-equilibrium flows and the computation of flowfields of binary gas mixture

    Directory of Open Access Journals (Sweden)

    Baoguo Wang


    Hypersonic flows about space vehicles produce flowfields in thermodynamic non-equilibrium with local Knudsen numbers Kn which may lie in all three regimes: continuum, transition and rarefied. Continuum flows can be modeled accurately by solving the Navier–Stokes (NS) equations; however, the flows in the transition and rarefied regimes require a kinetic approach such as the direct simulation Monte Carlo (DSMC) method or the solution of the Boltzmann equation. The Boltzmann equation and the general solution approach, using the splitting method, will be introduced in this paper. Details of the method used for solving both the classical Boltzmann equation (CBE) and the generalized Boltzmann equation (GBE) are also provided. The gas mixture discussed in this paper may consist of both monoatomic and diatomic gases. In particular, the method is applied to simulate two of the three primary constituents of air (N2, O2, and Ar) in a binary mixture at 1:1 density ratio at Mach 2 and 5, with gases in translational, rotational and vibrational non-equilibrium.
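
    The splitting idea can be illustrated on a much simpler model kinetic equation: advance free transport and collisions in separate sub-steps. A minimal 1D BGK-relaxation sketch (illustrative only; the paper's CBE/GBE solver is far more general, and all grid values below are made up):

```python
import numpy as np

def bgk_splitting_step(f, v, dx, dt, tau):
    """One time-splitting step for a 1D model kinetic (BGK) equation:
    free transport (first-order upwind, periodic in x) followed by
    relaxation toward a local Maxwellian with relaxation time tau."""
    ft = np.empty_like(f)
    for j, vj in enumerate(v):                 # transport sub-step
        c = vj * dt / dx
        if vj >= 0:
            ft[:, j] = f[:, j] - c * (f[:, j] - np.roll(f[:, j], 1))
        else:
            ft[:, j] = f[:, j] - c * (np.roll(f[:, j], -1) - f[:, j])
    dv = v[1] - v[0]                           # collision sub-step
    rho = ft.sum(axis=1) * dv
    u = (ft * v).sum(axis=1) * dv / rho
    T = (ft * (v - u[:, None]) ** 2).sum(axis=1) * dv / rho
    M = rho[:, None] * np.exp(-(v - u[:, None]) ** 2 / (2 * T[:, None])) \
        / np.sqrt(2 * np.pi * T[:, None])
    return ft + dt / tau * (M - ft)

# perturbed-equilibrium initial state on a periodic domain
nx, dx, dt, tau = 16, 0.1, 0.01, 0.5
v = np.linspace(-6.0, 6.0, 64)
rho0 = 1.0 + 0.1 * np.sin(2 * np.pi * np.arange(nx) / nx)
f0 = rho0[:, None] * np.exp(-v ** 2 / 2) / np.sqrt(2 * np.pi)
f1 = bgk_splitting_step(f0, v, dx, dt, tau)
```

    The scheme conserves mass (transport exactly, relaxation to discretization accuracy) and keeps the distribution non-negative under the CFL condition, which is what makes splitting attractive for kinetic solvers.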

  11. Seismic activity prediction using computational intelligence techniques in northern Pakistan (United States)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat


    Earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon the past earthquakes. Predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using selected seismic parameters. These models include feed-forward neural network, recurrent neural network, random forest, multi layer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated and McNemar's statistical test is applied to observe the statistical significance of computational methodologies. Feed-forward neural network shows statistically significant predictions along with accuracy of 75% and positive predictive value of 78% in context of northern Pakistan.
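
    McNemar's test compares two paired classifiers through their disagreement counts alone; a sketch with hypothetical counts (not the study's data):

```python
def mcnemar(b, c):
    """McNemar chi-square statistic with continuity correction.

    b: cases model A classified correctly and model B did not;
    c: the reverse. Large values indicate a significant difference
    between the two paired classifiers.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# hypothetical disagreements between two earthquake-prediction models
stat = mcnemar(15, 5)   # compare against 3.84 (chi-square, 1 df, p = 0.05)
```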

  13. Efficient computation of past global ocean circulation patterns using continuation in paleobathymetry

    NARCIS (Netherlands)

    Mulder, T. E.; Baatsen, M. L. J.; Wubs, F. W.; Dijkstra, H. A.


    In the field of paleoceanographic modeling, the different positioning of Earth's continental configurations is often a major challenge for obtaining equilibrium ocean flow solutions. In this paper, we introduce numerical parameter continuation techniques to compute equilibrium solutions of ocean
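
    The continuation idea can be sketched on a scalar toy problem: follow a solution branch of G(x, p) = 0 by stepping the parameter p and restarting Newton's method from the previous solution (the paper continues in paleobathymetry within a full ocean model; everything below is illustrative):

```python
def natural_continuation(G, x0, params, newton_steps=30, tol=1e-12):
    """Natural-parameter continuation: for each parameter value, solve
    G(x, p) = 0 by Newton's method seeded with the previous solution."""
    branch = []
    x = x0
    for p in params:
        for _ in range(newton_steps):
            g = G(x, p)
            if abs(g) < tol:
                break
            h = 1e-7
            dg = (G(x + h, p) - g) / h   # finite-difference derivative
            x -= g / dg
        branch.append(x)
    return branch

# toy branch: x**2 = p, following x = sqrt(p) as p steps from 1 to 4
branch = natural_continuation(lambda x, p: x ** 2 - p, 1.0,
                              [1.0 + 0.5 * k for k in range(7)])
```

    Because each converged state seeds the next solve, the method reaches equilibria at distant parameter values that Newton's method would miss from a cold start.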

  14. Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces. (United States)

    Spezia, Riccardo; Martínez-Nuñez, Emilio; Vazquez, Saulo; Hase, William L


    In this Introduction, we present the basic problems of non-statistical and non-equilibrium phenomena related to the papers collected in this themed issue. Over the past few years, significant advances in both computing power and the development of theories have allowed the study of larger systems, increasing the length of simulations and improving the quality of potential energy surfaces. In particular, the possibility of using quantum chemistry to calculate energies and forces 'on the fly' has paved the way to directly studying chemical reactions. This has provided a valuable tool to explore molecular mechanisms at given temperatures and energies and to see whether these reactive trajectories follow statistical laws and/or minimum energy pathways. This themed issue collects different aspects of the problem and gives an overview of recent works and developments in different contexts, from the gas phase to the condensed phase to excited states. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).

  15. An Interaction of Economy and Environment in Dynamic Computable General Equilibrium Modelling with a Focus on Climate Change Issues in Korea : A Proto-type Model

    Energy Technology Data Exchange (ETDEWEB)

    Joh, Seung Hun; Dellink, Rob; Nam, Yunmi; Kim, Yong Gun; Song, Yang Hoon [Korea Environment Institute, Seoul (Korea)]


    At the beginning of the 21st century, climate change is one of the hottest issues in both the international and domestic environmental arenas. During the COP6 meeting held in The Hague, over 10,000 people gathered from around the world. This report is part of a series of policy studies on climate change in the context of Korea. It addresses the interactions of economy and environment in a perfect-foresight dynamic computable general equilibrium model, with a focus on greenhouse gas mitigation strategy in Korea. The primary goal of this study is to evaluate greenhouse gas mitigation portfolios that vary in timing and magnitude, with a particular focus on developing a methodology to integrate bottom-up information on technical measures to reduce pollution into a top-down multi-sectoral computable general equilibrium framework. As a non-Annex I country, Korea has been under strong pressure to declare a GHG reduction commitment. Of particular concern are the economic consequences GHG mitigation would impose on society. Various economic assessments have addressed the issue so far, including analyses of cost, ancillary benefits, and emission trading. In this vein, this study on GHG mitigation commitment is a timely contribution to the climate change policy field. The empirical results, available next year, are expected to be in high demand. 62 refs., 13 figs., 9 tabs.

  16. Can markets compute equilibria?

    CERN Document Server

    Monroe , Hunter K


    Recent turmoil in financial and commodities markets has renewed questions regarding how well markets discover equilibrium prices, particularly when those markets are highly complex. A relatively new critique questions whether markets can realistically find equilibrium prices if computers cannot. For instance, in a simple exchange economy with Leontief preferences, the time required to compute equilibrium prices using the fastest known techniques is an exponential function of the number of goods. Furthermore, no efficient technique for this problem exists if a famous mathematical conjecture is
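    To make the price-discovery question concrete, here is a toy tâtonnement (iterative price adjustment) for a two-good, two-agent exchange economy with Cobb-Douglas preferences, a case where equilibrium is tractable, in contrast to the hard Leontief case the abstract discusses. The endowments and preference shares are made-up illustrations.

```python
def excess_demand_good1(p1, p2=1.0):
    """Aggregate excess demand for good 1 in a toy two-agent economy.
    Each agent has Cobb-Douglas utility x1^a * x2^(1-a); agent data are
    illustrative assumptions. Good 2 is the numeraire (p2 = 1)."""
    agents = [  # (alpha, endowment of good 1, endowment of good 2)
        (0.3, 1.0, 0.0),
        (0.7, 0.0, 1.0),
    ]
    demand = 0.0
    for alpha, w1, w2 in agents:
        income = p1 * w1 + p2 * w2
        demand += alpha * income / p1   # Cobb-Douglas demand for good 1
    return demand - 1.0                 # total endowment of good 1 is 1.0

# Tatonnement: nudge the price in the direction of excess demand.
p1 = 0.5
for _ in range(2000):
    p1 += 0.1 * excess_demand_good1(p1)
```

    For this economy the process converges to the market-clearing price; the abstract's point is that no such efficient procedure is known for Leontief preferences, where equilibrium computation is believed to be intractable.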

  17. Computational techniques in tribology and material science at the atomic level (United States)

    Ferrante, J.; Bozzolo, G. H.


    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical, and their limitations, are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, applications of the methods to the calculation of adhesion and friction are presented.

  18. Applications of NLP Techniques to Computer-Assisted Authoring of Test Items for Elementary Chinese (United States)

    Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun


    The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion about the applications of NLP techniques for computer assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…

  19. A New Technique to Compute Long-Range Wakefields in Accelerating Structures

    CERN Document Server

    Raguin, J Y; Wuensch, Walter


    A new technique is proposed to compute the coupling impedances and the long-range wakefields based on a scattering-matrix formalism which relies heavily upon post-processed data from the commercial finite-element code HFSS. To illustrate the speed of this technique, the procedures to compute the long-range wakefields of conventional constant-impedance structures and of structures damped with waveguides are presented. The efficiency and accuracy of the technique are achieved because the characteristics of periodic structures can be computed using single-cell data. Damping and synchronism effects are determined from such a computation.

  20. Tensor network techniques for the computation of dynamical observables in one-dimensional quantum spin systems (United States)

    Müller-Hermes, Alexander; Cirac, J. Ignacio; Bañuls, Mari Carmen


    We analyze the recently developed folding algorithm (Bañuls et al 2009 Phys. Rev. Lett. 102 240603) for simulating the dynamics of infinite quantum spin chains and we relate its performance to the kind of entanglement produced under the evolution of product states. We benchmark the accomplishments of this technique with respect to alternative strategies using Ising Hamiltonians with transverse and parallel fields, as well as XY models. Also, we evaluate its capability of finding ground and thermal equilibrium states.
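    For chains small enough, the models used as benchmarks above can be diagonalized exactly, which is how tensor-network results are typically validated. The sketch below builds the dense Hamiltonian of a four-spin open transverse-field Ising chain and extracts its ground state by shifted power iteration; the system size and couplings are arbitrary illustrations, not the paper's setup.

```python
import math
import random

def tfim_hamiltonian(n=4, J=1.0, h=1.0):
    """Dense matrix of the open transverse-field Ising chain
    H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i, dimension 2^n."""
    dim = 1 << n
    H = [[0.0] * dim for _ in range(dim)]
    for s in range(dim):
        z = [1.0 if (s >> i) & 1 == 0 else -1.0 for i in range(n)]
        H[s][s] = -J * sum(z[i] * z[i + 1] for i in range(n - 1))
        for i in range(n):                 # X_i flips spin i
            H[s ^ (1 << i)][s] += -h
    return H

def ground_state(H, iters=2000):
    """Lowest eigenpair via power iteration on c*I - H, where c is a
    Gershgorin bound, so the smallest eigenvalue of H becomes dominant."""
    dim = len(H)
    c = 1.0 + max(sum(abs(x) for x in row) for row in H)
    random.seed(0)
    v = [random.random() for _ in range(dim)]  # positive start vector
    for _ in range(iters):
        w = [c * v[r] - sum(H[r][k] * v[k] for k in range(dim))
             for r in range(dim)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector gives the ground energy.
    e = sum(v[r] * sum(H[r][k] * v[k] for k in range(dim)) for r in range(dim))
    return e, v

H = tfim_hamiltonian()
e0, vec = ground_state(H)
```

    Exact results of this kind provide the reference values against which approximate tensor-network evolutions are benchmarked; the dense-matrix approach, of course, only scales to a handful of spins.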

  1. a survey of computed tomography imaging techniques and patient

    African Journals Online (AJOL)


    Oct 10, 2010 ... parameters for head, chest, abdomen and pelvis adult examinations at each facility. Results: The radiation exposure from Computed ... aging and maintenance, not excluding the human factor. These factors have presented a ... development phase in Kenya. This study therefore forms the basis towards CT ...

  2. A survey of GPU-based medical image computing techniques (United States)

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming


    Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets processed in practical clinical applications. With the rapidly improving performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  3. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications


    Sadik Kamel Gharghan; Rosdiadee Nordin; Mahamod Ismail


    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the...
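    A minimal range-based localization sketch in the spirit of the RSSI approach described above: distances are recovered from a log-distance path-loss model and the position is solved by linearized trilateration from three anchors. The path-loss parameters, anchor coordinates, and noise-free readings are all assumptions for illustration, not the paper's calibration.

```python
import math

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    """Invert the log-distance path-loss model
    rssi = rssi0 - 10*n*log10(d); rssi0 and n are assumed values."""
    return 10 ** ((rssi0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Position from three anchor circles: subtracting the first circle
    equation from the other two yields a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 20.0)]  # illustrative anchor nodes
true_pos = (12.0, 7.0)
# Simulate noise-free RSSI readings at each anchor, then invert them.
rssi = [-40.0 - 20 * math.log10(math.dist(true_pos, a)) for a in anchors]
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssi])
```

    With noisy RSSI, the recovered ranges are inexact, which is precisely where trained estimators such as ANFIS and ANN improve on this closed-form inversion.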

  4. Local Stellarator Equilibrium Model (United States)

    Hudson, Stuart R.; Hegna, Chris C.; Lewandowski, Jerome W.


    Extensive calculations of ballooning and drift wave spectra in asymmetric toroidal configurations (e.g. stellarators), needed to appreciate the role of magnetic geometry and profile variations, are usually prohibitive, as the evaluation of the magneto-hydrodynamic (MHD) equilibrium is in itself a non-trivial problem. Although simple analytical MHD model equilibria do exist for tokamak configurations, their stellarator counterparts are usually crude or very approximate. In order to make more extensive stability calculations (of both ideal ballooning and drift-type modes) possible, a technique for generating three-dimensional magneto-static equilibria, localized to a magnetic surface, has been developed. The technique allows one to easily manipulate various 3-D shaping and profile effects on a magnetic surface, avoiding the need to recompute an entire three-dimensional solution of the equilibrium. The model equilibrium has been implemented into existing ideal MHD ballooning and drift wave numerical codes. Marginal ballooning stability diagrams and drift wave calculations will be reported.

  5. Computer Tomography: A Novel Diagnostic Technique used in Horses

    African Journals Online (AJOL)

    CT scan was used in the diagnosis of two conditions of the head and limbs, namely alveolar periostitis and Navicular disease. The advantages of the technique are evident in the clarity with which the lesions are seen, as well as the precise identification of the affected tooth or bone. The Kenya Veterinarian Vol. 21 2001: pp.

  6. Multiplexing technique for computer communications via satellite channels (United States)

    Binder, R.


    Multiplexing scheme combines technique of dynamic allocation with conventional time-division multiplexing. Scheme is designed to expedite short-duration interactive or priority traffic and to delay large data transfers; as result, each node has effective capacity of almost total channel capacity when other nodes have light traffic loads.

  7. Optimizing Nuclear Reactor Operation Using Soft Computing Techniques

    NARCIS (Netherlands)

    Entzinger, J.O.; Ruan, D.; Kahraman, Cengiz


    The strict safety regulations for nuclear reactor control make it difficult to implement new control techniques such as fuzzy logic control (FLC). FLC, however, can provide very desirable advantages over classical control, like robustness, adaptation and the capability to include human experience into

  8. Blending Two Major Techniques in Order to Compute [Pi] (United States)

    Guasti, M. Fernandez


    Three major techniques are employed to calculate [pi]. Namely, (i) the perimeter of polygons inscribed or circumscribed in a circle, (ii) calculus based methods using integral representations of inverse trigonometric functions, and (iii) modular identities derived from the transformation theory of elliptic integrals. This note presents a…

  9. Efficiently Computing Latency Distributions by combined Performance Evaluation Techniques

    NARCIS (Netherlands)

    van den Berg, Freek; Haverkort, Boudewijn R.H.M.; Hooman, Jozef; Knottenbelt, William; Wolter, Katinka; Busic, Ana; Gribaudo, Marco; Reinecke, Philipp


    Service-oriented systems are designed for interconnecting with other systems. The provided services face timing constraints, the so-called latencies. We present a high-level performance evaluation technique that can be used by a system designer to obtain distributions of these latencies. This

  10. A Simple Generation Technique of Complex Geotechnical Computational Model

    Directory of Open Access Journals (Sweden)

    Hang Lin


    Full Text Available Given that FLAC3D (a geotechnical calculation software is difficult to use for building large, complex, and three-dimensional mining models, the current study proposes a fast and a convenient modeling technique that combines the unique advantages of FLAC3D in numerical calculation and those of SURPAC (a mine design software in three-dimensional modeling, and the interface program was compiled. First, the relationship between the FLAC3D and the SURPAC unit data was examined, and the transformation technique between the data was given. Then, the interface program that transforms the mine design model to calculate the model was compiled using FORTRAN, and the specific steps of the program implementation were described using a large iron and copper mine modeling example. The results show that the proposed transformation technique and its corresponding interface program transformed the SURPAC model into the FLAC3D model, which expedited FLAC3D modeling, verified the validity and feasibility of the transformation technique, and expanded the application spaces of FLAC3D and SURPAC.

  11. Novel Techniques for Secure Use of Public Cloud Computing Resources (United States)


    Infrastructure-as-a-Service (IaaS): the capability provided to the consumer to provision processing, storage, networks, and other fundamental computing resources. ... public CSP Amazon Web Services (AWS). So far, the focus appears to be on certifying Infrastructure-as-a-Service (IaaS) providers, as all nine

  12. Computational techniques for the assessment of fracture repair. (United States)

    Anderson, Donald D; Thomas, Thaddeus P; Campos Marin, Ana; Elkins, Jacob M; Lack, William D; Lacroix, Damien


    The combination of high-resolution three-dimensional medical imaging, increased computing power, and modern computational methods provide unprecedented capabilities for assessing the repair and healing of fractured bone. Fracture healing is a natural process that restores the mechanical integrity of bone and is greatly influenced by the prevailing mechanical environment. Mechanobiological theories have been proposed to provide greater insight into the relationships between mechanics (stress and strain) and biology. Computational approaches for modelling these relationships have evolved from simple tools to analyze fracture healing at a single point in time to current models that capture complex biological events such as angiogenesis, stochasticity in cellular activities, and cell-phenotype specific activities. The predictive capacity of these models has been established using corroborating physical experiments. For clinical application, mechanobiological models accounting for patient-to-patient variability hold the potential to predict fracture healing and thereby help clinicians to customize treatment. Advanced imaging tools permit patient-specific geometries to be used in such models. Refining the models to study the strain fields within a fracture gap and adapting the models for case-specific simulation may provide more accurate examination of the relationship between strain and fracture healing in actual patients. Medical imaging systems have significantly advanced the capability for less invasive visualization of injured musculoskeletal tissues, but all too often the consideration of these rich datasets has stopped at the level of subjective observation. Computational image analysis methods have not yet been applied to study fracture healing, but two comparable challenges which have been addressed in this general area are the evaluation of fracture severity and of fracture-associated soft tissue injury. 
CT-based methodologies developed to assess and quantify

  13. Maximin equilibrium

    NARCIS (Netherlands)

    Ismail, M.S.


    We introduce a new concept which extends von Neumann and Morgenstern's maximin strategy solution by incorporating `individual rationality' of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that

  14. Applications of computational intelligence techniques for solving the revived optimal power flow problem

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R. [Electrical Engineering Department, College of Technological Studies, Shuwaikh (Kuwait); El-Hawary, M.E. [Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS B3J 2X4 (Canada)


    Computational intelligence tools are attracting added attention in different research areas, and research in power systems is no different. This paper provides an overview of major computational issues with regard to the optimal power flow (OPF). Then, it offers a brief summary of major computational intelligence tools. A detailed coverage of most OPF-related research work that makes use of modern computational intelligence techniques is presented next. (author)

  15. Rugoscopy: Human identification by computer-assisted photographic superimposition technique


    Mohammed, Rezwana Begum; Patil, Rajendra G.; Pammi, V. R.; Sandya, M. Pavana; Kalyan, Siva V.; Anitha, A.


    Background: Human identification has been studied since fourteenth century and it has gradually advanced for forensic purposes. Traditional methods such as dental, fingerprint, and DNA comparisons are probably the most common techniques used in this context, allowing fast and secure identification processes. But, in circumstances where identification of an individual by fingerprint or dental record comparison is difficult, palatal rugae may be considered as an alternative source of material. ...

  16. Securing the Cloud Cloud Computer Security Techniques and Tactics

    CERN Document Server

    Winkler, Vic (JR)


    As companies turn to cloud computing technology to streamline and save money, security is a fundamental concern. Loss of certain control and lack of trust make this transition difficult unless you know how to handle it. Securing the Cloud discusses making the move to the cloud while securing your piece of it! The cloud offers flexibility, adaptability, scalability, and, in the case of security, resilience. This book details the strengths and weaknesses of securing your company's information with different cloud approaches. Attacks can focus on your infrastructure, communications network, data, o

  17. Retrospective indexing (RI) - A computer-aided indexing technique (United States)

    Buchan, Ronald L.


    An account is given of a method for database updating designated 'computer-aided indexing' (CAI), which has been very efficiently implemented at NASA's Scientific and Technical Information Facility by means of retrospective indexing. Novel terms added to the NASA Thesaurus will therefore proceed directly into both the NASA-RECON aerospace information system and its portion of the ESA Information Retrieval Service, giving users full access to material thus indexed. If a given term appears in the title of a record, it is given special weight. An illustrative graphic representation of the CAI search strategy is presented.

  18. Techniques for animation of CFD results. [computational fluid dynamics (United States)

    Horowitz, Jay; Hanson, Jeffery C.


    Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.

  19. Computer-assisted virtual autopsy using surgical navigation techniques. (United States)

    Ebert, Lars Christian; Ruder, Thomas D; Martinez, Rosa Maria; Flach, Patricia M; Schweitzer, Wolf; Thali, Michael J; Ampanozi, Garyfalia


    OBJECTIVE: Virtual autopsy methods, such as postmortem CT and MRI, are increasingly being used in forensic medicine. Forensic investigators with little to no training in diagnostic radiology and medical laypeople such as state's attorneys often find it difficult to understand the anatomic orientation of axial postmortem CT images. We present a computer-assisted system that permits postmortem CT datasets to be quickly and intuitively resliced in real time at the body to narrow the gap between radiologic imaging and autopsy. Our system is a potentially valuable tool for planning autopsies, showing findings to medical laypeople, and teaching CT anatomy, thus further closing the gap between radiology and forensic pathology.

  20. Computing distance-based topological descriptors of complex chemical networks: New theoretical techniques (United States)

    Hayat, Sakander


    Structure-based topological descriptors/indices of complex chemical networks enable prediction of physico-chemical properties and the bioactivities of these compounds through QSAR/QSPR methods. In this paper, we have developed a rigorous computational and theoretical technique to compute various distance-based topological indices of complex chemical networks. A fullerene is called the IPR (Isolated-Pentagon-Rule) fullerene, if every pentagon in it is surrounded by hexagons only. To ensure the applicability of our technique, we compute certain distance-based indices of an infinite family of IPR fullerenes. Our results show that the proposed technique is more diverse and bears less algorithmic and combinatorial complexity.
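    As a concrete instance of a distance-based descriptor, the Wiener index (the sum of shortest-path distances over all vertex pairs) of an unweighted molecular graph can be computed by repeated breadth-first search. The 4-cycle below is a small stand-in graph, not one of the paper's IPR fullerenes.

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs of an
    unweighted graph, via one BFS per vertex. adj maps vertex -> neighbors."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# A 4-cycle as a toy molecular graph; fullerene graphs work the same way.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

    For large, highly symmetric families such as fullerenes, closed-form results like those derived in the paper avoid the O(V·E) cost of this brute-force computation.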

  1. A comparison between the determination of free Pb(II) by two techniques: absence of gradients and Nernstian equilibrium stripping and resin titration. (United States)

    Alberti, Giancarla; Biesuz, Raffaela; Huidobro, César; Companys, Encarnació; Puy, Jaume; Galceran, Josep


    Absence of gradients and Nernstian equilibrium stripping (AGNES) is an emerging electroanalytical technique designed to measure free metal ion concentration. The practical implementation of AGNES requires a critical selection of the deposition time, which can be drastically reduced if the contribution of the complexes is properly taken into account. The resin titration (RT) is a competition method based on the sorption of metal ions on a complexing resin. The competitor here considered is the resin Chelex 100 whose sorbing properties towards Pb(II) are well known. The RT is a consolidated technique especially suitable to perform an intercomparison with AGNES, due to its independent physicochemical nature. Two different ligands for Pb(II) complexation have been analyzed here: nitrilotriacetic acid (NTA) and pyridinedicarboxylic acid (PDCA). The complex PbNTA is practically inert in the diffusion layer, so, for ordinary deposition potentials, its contribution is almost negligible; however, at potentials more negative than -0.8 V vs. Ag/AgCl the complex dissociates on the electrodic surface giving rise to a second wave in techniques such as normal pulse polarography. The complex Pb-PDCA is partially labile, so that its contribution can be estimated from an expression of the lability degree of the complex. These new strategies allow us to reduce the deposition time. The free Pb(II) concentrations obtained by AGNES and by RT are in full agreement for both systems here considered. The main advantage of the use of AGNES in these systems lies in the reduction of the time of the experiment, while RT can be applied to non-amalgamating elements and offers the possibility of simultaneous determinations.
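    The quantity both techniques target, the free metal ion concentration, follows from simple mass balances once a complex's stability constant is known. The sketch below solves for free Pb(II) in a 1:1 metal-ligand system by bisection; the stability constant and total concentrations are illustrative assumptions, not AGNES or resin-titration calibration data.

```python
def free_metal(pb_total, l_total, logK):
    """Free [Pb2+] for a 1:1 Pb-L complex (Pb + L <-> PbL, K = [PbL]/[Pb][L])
    from the metal and ligand mass balances, solved by bisection."""
    K = 10 ** logK

    def residual(p):
        l_free = l_total / (1 + K * p)        # ligand mass balance
        return p + K * p * l_free - pb_total  # metal mass balance

    lo, hi = 0.0, pb_total                    # residual changes sign here
    for _ in range(200):
        mid = (lo + hi) / 2
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Illustrative system: excess ligand, strong complexation.
pb_free = free_metal(pb_total=1e-6, l_total=1e-5, logK=8.0)
```

    With a strong ligand in excess, nearly all the metal is bound and the free concentration falls orders of magnitude below the total, which is why direct free-ion techniques such as AGNES are needed rather than total-metal measurements.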

  2. Computer tomography as a diagnostic technique in psychiatry

    Energy Technology Data Exchange (ETDEWEB)

    Strobl, G.; Reisner, T.; Zeiler, K. (Vienna Univ. (Austria). Psychiatrische Klinik; Vienna Univ. (Austria). Neurologische Klinik)


    CT findings in 516 hospitalized psychiatric patients are presented. The patients were classified into 9 groups according to a modified ICD classification, and the type and incidence of pathological findings - almost exclusively degenerative processes of the brain - were registered. Diffuse cerebral atrophies are most frequent in the groups alcoholism and alcohol psychoses (44.0%) and psychoses and mental disturbances accompanying physical diseases. In schizophrenics (almost exclusively residual and defect states) and in patients with affective psychosis, diffuse cerebral atrophies are much less frequent (11.3% and 9.2%) than stated in earlier publications. Neurosis, changes in personality, or abnormal behaviour are hardly ever accompanied by cerebral atrophy. Problems encountered in the attempt to establish objective criteria for a diagnosis of cerebral atrophy on the basis of CT pictures are discussed. Computed tomography does not permit conclusions on the etiology of diffuse atrophic processes.


    Directory of Open Access Journals (Sweden)

    A. P. Tarasov


    Full Text Available Aim: To assess geometric parameters of the human head based on X-ray computed tomography for construction of the first Russian optical cerebral oximeter. Materials and methods: Based on data obtained by multidetector computed tomography, we retrospectively assessed the thickness of the frontal bone squama and adjacent soft tissues, and calculated their sum, in 100 patients above 50 years of age (50 male and 50 female, mean age 64 ± 8 years). The supraorbital edge of the orbit and the midline were chosen as reference points. Results: The mean frontal squama thickness was 6.28 mm (± 1.58) on the right side and 6.38 mm (± 1.62) on the left side. The mean thickness of the soft tissues covering the bone at this level was 4.39 mm (± 1.21) on the right side and 4.41 mm (± 1.22) on the left side. The mean total thickness of the frontal squama and soft tissue was 11.76 mm (± 2.25) on the right side and 11.89 mm (± 2.31) on the left side. Conclusion: For reliable reproducibility of cerebral oximetry, the geometric characteristics of the area where the sensor will be placed should be taken into account, with the supraorbital edge and the midline as reference points. Minimal sums of the mean values and their standard deviations for the frontal bone and soft tissue thickness were measured at the intersection points of 3 cm lines perpendicular to these reference points.

  4. Rugoscopy: Human identification by computer-assisted photographic superimposition technique. (United States)

    Mohammed, Rezwana Begum; Patil, Rajendra G; Pammi, V R; Sandya, M Pavana; Kalyan, Siva V; Anitha, A


    Human identification has been studied since the fourteenth century and has gradually advanced for forensic purposes. Traditional methods such as dental, fingerprint, and DNA comparisons are probably the most common techniques used in this context, allowing fast and secure identification processes. But, in circumstances where identification of an individual by fingerprint or dental record comparison is difficult, palatal rugae may be considered as an alternative source of material. The present study was done to evaluate the individualistic nature and use of palatal rugae patterns for personal identification, and also to test the efficiency of computerized software for forensic identification by photographic superimposition of palatal photographs obtained from casts. Two sets of alginate impressions were made from the upper arches of 100 individuals (50 males and 50 females) with a one-month interval in between, and the casts were poured. All the teeth except the incisors were removed to ensure that only the palate could be used in the identification process. In one set of the casts, the palatal rugae were highlighted with a graphite pencil. All the 200 casts were randomly numbered, and then they were photographed with a 10.1 Mega Pixel Kodak digital camera using a standardized method. Using computerized software, the digital photographs of the models without highlighted palatal rugae were overlapped over the transparent images of the casts with highlighted palatal rugae, in order to identify the pairs by the superimposition technique. The incisors were retained and used as landmarks to determine the magnification required to bring the two sets of photographs to the same size, in order to achieve perfect superimposition of the images. The overlapping of the digital photographs of highlighted palatal rugae over the normal set of models without highlighted palatal rugae resulted in 100% positive identification.
This study showed that utilization of palatal photographs

  5. High-throughput computational and experimental techniques in structural genomics. (United States)

    Chance, Mark R; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa


    Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on most all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell.

  6. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    National Research Council Canada - National Science Library

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod


    ...) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes...

  7. Computer-assisted Navigation in Bone Tumor Surgery: Seamless Workflow Model and Evolution of Technique

    National Research Council Canada - National Science Library

    So, Timothy Y. C; Lam, Ying-Lee; Mak, Ka-Lok


    .... Registration techniques vary, although most existing systems use some form of surface matching.We developed and evaluated a workflow model of computer-assisted bone tumor surgery and evaluated (1...

  8. Assessment of health and economic effects by PM2.5 pollution in Beijing: a combined exposure-response and computable general equilibrium analysis. (United States)

    Wang, Guizhi; Gu, SaiJu; Chen, Jibo; Wu, Xianhua; Yu, Jun


    Assessment of the health and economic impacts of PM2.5 pollution is of great importance for urban air pollution prevention and control. In this study, we evaluate the damage of PM2.5 pollution using Beijing as an example. First, we use exposure-response functions to estimate the adverse health effects due to PM2.5 pollution. Then, the corresponding labour loss and excess medical expenditure are computed as two conducting variables. Finally, different from the conventional valuation methods, this paper introduces the two conducting variables into the computable general equilibrium (CGE) model to assess the impacts on sectors and the whole economic system caused by PM2.5 pollution. The results show that substantial health effects from PM2.5 pollution occurred among Beijing residents in 2013, including 20,043 premature deaths and about one million other related medical cases. Correspondingly, using the 2010 social accounting data, the Beijing gross domestic product loss due to the health impact of PM2.5 pollution is estimated as 1286.97 (95% CI: 488.58-1936.33) million RMB. This demonstrates that PM2.5 pollution not only has adverse health effects, but also causes substantial economic losses.
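
    The first step of this assessment chain, an exposure-response function turning a PM2.5 concentration into attributable cases, can be sketched with a standard log-linear relative-risk model. The coefficients and population figures below are illustrative stand-ins, not values from the study, and the study's CGE step is omitted.

```python
import math

def excess_cases(pop, base_rate, beta, c, c0):
    """Excess health cases attributable to PM2.5 exposure using a
    log-linear exposure-response function (a common functional form;
    the paper's exact form and coefficients are not reproduced here).

    pop       -- exposed population
    base_rate -- baseline incidence rate of the health endpoint
    beta      -- exposure-response coefficient (per ug/m3)
    c, c0     -- observed and threshold PM2.5 concentrations (ug/m3)
    """
    rr = math.exp(beta * max(c - c0, 0.0))   # relative risk at concentration c
    af = (rr - 1.0) / rr                     # attributable fraction
    return pop * base_rate * af

# Illustrative numbers only (not from the study):
cases = excess_cases(pop=2e7, base_rate=0.005, beta=0.00038, c=90.0, c0=10.0)
```

    In the study, case counts like this are then converted into labour loss and medical expenditure before entering the CGE model.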

  9. Computer program of data reduction procedures for facilities using CO2-N2-O2-Ar equilibrium real-gas mixtures (United States)

    Miller, C. G., III


    Data reduction procedures for determining free-stream and post-normal-shock flow conditions are presented. These procedures are applicable to flows of CO2, N2, O2, Ar, or mixtures of these gases and include the effects of dissociation and ionization. The assumption of thermochemical equilibrium free-stream and post-normal-shock flow is made. Although derived primarily to meet the immediate needs of an expansion tube of a hot gas radiation research facility, these procedures are applicable to any supersonic or hypersonic test facility using these gases or mixtures thereof. The data reduction procedures are based on combinations of three of the following flow parameters measured in the immediate vicinity of the test section: (1) stagnation pressure behind normal shock, (2) free-stream static pressure, (3) stagnation-point heat-transfer rate, (4) free-stream velocity, and (5) free-stream density. Thus, these procedures do not depend explicitly upon measured or calculated upstream flow parameters. The procedures are incorporated into a single computer program written in FORTRAN IV language. A listing of this computer program is presented, along with a description of the inputs required and a sample of the data printout.
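
    As a simplified illustration of backing out free-stream conditions from a pair of measured pressures, the sketch below inverts the Rayleigh pitot formula (post-normal-shock stagnation pressure over free-stream static pressure) for a calorically perfect gas by bisection. This is an ideal-gas stand-in: the actual program handles equilibrium real-gas CO2-N2-O2-Ar mixtures with dissociation and ionization, which this sketch does not.

```python
def rayleigh_pitot_ratio(m, gamma=1.4):
    """p02/p1 across a normal shock for a calorically perfect gas
    (ideal-gas simplification of the report's real-gas procedures)."""
    a = ((gamma + 1.0) / 2.0 * m * m) ** (gamma / (gamma - 1.0))
    b = ((gamma + 1.0) / (2.0 * gamma * m * m - (gamma - 1.0))) ** (1.0 / (gamma - 1.0))
    return a * b

def mach_from_pitot(ratio, gamma=1.4, lo=1.0, hi=50.0, tol=1e-10):
    """Invert the Rayleigh pitot formula for the supersonic free-stream
    Mach number by bisection (the ratio is monotone increasing in M)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rayleigh_pitot_ratio(mid, gamma) < ratio:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: a Mach-2 stream gives p02/p1 of about 5.64,
# and inversion recovers M = 2 from that ratio.
r = rayleigh_pitot_ratio(2.0)
m = mach_from_pitot(r)
```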

  10. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres (United States)

    Min, M.


    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
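
    The sampling idea can be sketched as follows, as a simplification of the paper's recipe: a Voigt profile is the convolution of a Gaussian and a Lorentzian, so summing a Gaussian and a Cauchy deviate draws a sample exactly from the line shape, and weighting each sample by strength/n preserves every line's integrated opacity however few samples it receives.

```python
import math, random

def sample_opacity(lines, grid_min, grid_max, nbins, samples_per_strength, seed=42):
    """Monte Carlo sampling of Voigt line profiles (a sketch of the idea,
    not the paper's exact recipe). Each line is (nu0, strength, sigma_g,
    gamma_l): center, integrated strength, Gaussian sigma, Lorentzian
    half-width. Stronger lines receive more samples, but each sample
    carries weight strength/n, so integrated opacity is preserved."""
    rng = random.Random(seed)
    dbin = (grid_max - grid_min) / nbins
    kappa = [0.0] * nbins                      # opacity per unit wavenumber
    for nu0, strength, sigma_g, gamma_l in lines:
        n = max(1, int(samples_per_strength * strength))
        w = strength / n                       # per-sample weight
        for _ in range(n):
            # Gaussian (thermal) + Cauchy (pressure) displacement = Voigt deviate
            nu = nu0 + rng.gauss(0.0, sigma_g) \
                     + gamma_l * math.tan(math.pi * (rng.random() - 0.5))
            i = math.floor((nu - grid_min) / dbin)
            if 0 <= i < nbins:
                kappa[i] += w / dbin
    return kappa
```

    Only samples falling inside the wavenumber grid are binned, so the far Cauchy tails beyond the grid edges account for the small fraction of opacity not captured.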

  11. Integration of Three New Teaching Techniques in an Introductory Computer Course. (United States)

    Dunn, Walter L.

    Three new teaching techniques, using established principles of learning, were combined to teach an introductory digital computer course to college students. The techniques were: 1) programed instruction; 2) Fields-type teaching tests, "a discrimination method to teach concepts by modifying the examination procedure to emphasize similarities…

  12. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 1: Computational technique (United States)

    Marconi, F.; Salas, M.; Yaeger, L.


    A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  13. The Application of Special Computing Techniques to Speed-Up Image Feature Extraction and Processing Techniques. (United States)


    [Abstract not available: the retrieved record contains only fragmentary OCR text. The recoverable content concerns multi-processor architectures for speeding up image feature extraction: structures classified at the task, step, and data-set-element levels by their interconnection subsystem (its topology and operations), the need to resolve conflict situations, and a host computer such as a PDP 11/70.]

  14. [Clinical analysis of 12 cases of orthognathic surgery with digital computer-assisted technique]. (United States)

    Tan, Xin-ying; Hu, Min; Liu, Chang-kui; Liu, Hua-wei; Liu, San-xia; Tao, Ye


    This study investigated the effect of the digital computer-assisted technique in orthognathic surgery. Twelve patients with jaw malformation were treated in our department from January 2008 to December 2011. With the help of CT and three-dimensional reconstruction techniques, the 12 patients underwent surgical treatment and the results were evaluated after surgery. The digital computer-assisted technique could clearly show the status of the jaw deformity and assist virtual surgery. After surgery, all patients were satisfied with the results. Digital orthognathic surgery can improve the predictability of the surgical procedure, facilitate communication with patients, shorten operative time, and reduce patients' pain.

  15. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin


    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method based on the backward ray-tracing technique is presented. Rays are launched and traced in parallel under different illumination conditions and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
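
    For context, the traditional point-cloud CGH that serves as the paper's baseline can be sketched as a superposition of spherical waves on the hologram plane. This is not the paper's ray-tracing method, and the grid size, pitch, and wavelength below are illustrative.

```python
import cmath, math

def point_cloud_cgh(points, nx, ny, pitch, wavelength):
    """Minimal point-cloud CGH: sum a spherical wave from each object
    point at every hologram pixel. points = [(px, py, pz, amplitude)],
    with pz the distance from the hologram plane. O(pixels * points),
    which is exactly the cost the ray-tracing approach attacks."""
    k = 2.0 * math.pi / wavelength
    field = [[0j] * nx for _ in range(ny)]
    for iy in range(ny):
        y = (iy - ny / 2) * pitch
        for ix in range(nx):
            x = (ix - nx / 2) * pitch
            for px, py, pz, amp in points:
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                field[iy][ix] += amp * cmath.exp(1j * k * r) / r
    return field
```

    The complex field would then be encoded (e.g. as a phase-only hologram) for display; that encoding step is omitted here.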

  16. Equilibrium thermodynamics

    CERN Document Server

    Oliveira, Mário J


    This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions.  These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This textbo...

  17. Equilibrium thermodynamics

    CERN Document Server

    de Oliveira, Mário J


    This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions. These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This new edit...

  18. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer. (United States)

    Hu, Yujing; Gao, Yang; An, Bo


    An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
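
    The transfer condition can be sketched for a two-player one-shot game: reuse a stored equilibrium whenever no agent can gain more than eps by deviating unilaterally, and recompute otherwise. The function names and the bimatrix representation below are illustrative, not the paper's formulation.

```python
def deviation_gain(payoff, sigma_self, sigma_other):
    """Maximum gain a player obtains by deviating unilaterally from the
    joint mixed strategy. payoff[i][j] is this player's payoff for own
    action i against opponent action j (pass the column player's matrix
    transposed so its own actions index the rows)."""
    # expected payoff of each pure action against the opponent's mixture
    action_values = [sum(p * q for p, q in zip(row, sigma_other)) for row in payoff]
    current = sum(s * v for s, v in zip(sigma_self, action_values))
    return max(action_values) - current

def transfer_equilibrium(pay_row, pay_col_T, eq, eps):
    """Equilibrium-transfer test (sketch of the paper's transfer
    condition): reuse the previously computed equilibrium
    eq = (sigma_row, sigma_col) in a new one-shot game if neither agent
    can gain more than eps by deviating; otherwise a fresh (expensive)
    equilibrium computation is needed."""
    s_row, s_col = eq
    g_row = deviation_gain(pay_row, s_row, s_col)
    g_col = deviation_gain(pay_col_T, s_col, s_row)
    return g_row <= eps and g_col <= eps
```

    In matching pennies the uniform mixed equilibrium transfers to the identical game, but fails the test once a payoff entry is perturbed enough to create a profitable deviation.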

  19. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) (United States)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro


    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  20. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics. (United States)

    Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V


    The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here and discussed and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.

  1. Computation of quasi-periodic tori and heteroclinic connections in astrodynamics using collocation techniques (United States)

    Olikara, Zubin P.

    Many astrodynamical systems exhibit both ordered and chaotic motion. The invariant manifold structure organizes these behaviors and is a valuable tool for the design of spacecraft trajectories. The study of a system's dynamics often begins with the computation of its invariant tori (equilibrium points, periodic orbits, quasi-periodic orbits) and associated stable and unstable manifolds. Periodic orbits, in particular, have been used effectively for the design of low-energy transfers in the circular restricted 3-body problem (CR3BP). Quasi-periodic orbits offer similar benefits and are often more prevalent in the phase space, but additional complexities are involved in their computation. The foundation of this work is the development of a numerical method for computing two-dimensional quasi-periodic tori. The approach is applicable to a general class of Hamiltonian systems. Using a Fourier discretization and Gauss-Legendre collocation, a continuous representation of the torus is obtained. Included in the scheme is the computation of the torus's stable and unstable manifolds. These manifolds can then be used for the design of natural transfers. Two methods are presented for locating and continuing families of heteroclinic connections between quasi-periodic orbits in the CR3BP. A collocation-based approach for transitioning trajectories to a higher-fidelity ephemeris model is also included.

  2. State-of-the-art soft computing techniques in image steganography domain (United States)

    Hussain, Hanizan Shaker; Din, Roshidi; Samad, Hafiza Abdul; Yaacub, Mohd Hanafizah; Murad, Roslinda; Rukhiyah, A.; Sabdri, Noor Maizatulshima


    This paper reviews major works on soft computing (SC) techniques in image steganography and watermarking over the last ten years, focusing on three main SC techniques: neural networks, genetic algorithms, and fuzzy logic. The findings suggest that all these works applied SC techniques during the pre-processing, embedding or extraction stages, or in more than one of these stages. The presence of SC techniques, with their diverse approaches and strengths, can therefore help researchers in future work to attain excellent quality of image information hiding that comprises both imperceptibility and robustness.

  3. Macroeconomic impact of a mild influenza pandemic and associated policies in Thailand, South Africa and Uganda: a computable general equilibrium analysis. (United States)

    Smith, Richard D; Keogh-Brown, Marcus R


    Previous research has demonstrated the value of macroeconomic analysis of the impact of influenza pandemics. However, previous modelling applications focus on high-income countries and there is a lack of evidence concerning the potential impact of an influenza pandemic on lower- and middle-income countries. To estimate the macroeconomic impact of pandemic influenza in Thailand, South Africa and Uganda with particular reference to pandemic (H1N1) 2009. A single-country whole-economy computable general equilibrium (CGE) model was set up for each of the three countries in question and used to estimate the economic impact of declines in labour attributable to morbidity, mortality and school closure. Overall GDP impacts were less than 1% of GDP for all countries and scenarios. Uganda's losses were proportionally larger than those of Thailand and South Africa. Labour-intensive sectors suffer the largest losses. The economic cost of unavoidable absence in the event of an influenza pandemic could be proportionally larger for low-income countries. The cost of mild pandemics, such as pandemic (H1N1) 2009, appears to be small, but could increase for more severe pandemics and/or pandemics with greater behavioural change and avoidable absence. © 2013 John Wiley & Sons Ltd.

  4. Non-equilibrium phase transitions

    CERN Document Server

    Henkel, Malte; Lübeck, Sven


    This book describes two main classes of non-equilibrium phase-transitions: (a) statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase-transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference.
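
    Directed percolation, the book's main paradigm of absorbing phase transitions, can be illustrated with a minimal Monte Carlo sketch of (1+1)-dimensional bond DP; the lattice choice and parameters below are illustrative.

```python
import random

def dp_survival(p, width, steps, trials, seed=1):
    """Crude survival-probability estimate for 1+1-dimensional bond
    directed percolation: starting from a single active seed, each active
    site tries to activate its two forward neighbours, each through a bond
    open with probability p. Once no site is active, the absorbing state
    is reached and activity can never return. For this lattice the
    critical point is near p_c ~ 0.6447."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        active = {width // 2}
        for _ in range(steps):
            nxt = set()
            for i in active:
                if rng.random() < p:
                    nxt.add((i - 1) % width)
                if rng.random() < p:
                    nxt.add((i + 1) % width)
            active = nxt
            if not active:        # absorbing state: simulation can stop
                break
        if active:
            survived += 1
    return survived / trials
```

    Scanning p across p_c shows the survival probability crossing from zero to finite values, the order-parameter behaviour of the absorbing transition.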

  5. A Directional Stroke Recognition Technique for Mobile Interaction in a Pervasive Computing World


    Kostakos, Vassilis; O'Neill, Eamonn


    This paper presents a common gestural interface to mobile and pervasive computing devices. We report our development of a novel technique for recognizing input strokes on a range of mobile and pervasive devices, ranging from small devices with low processing capabilities and limited input area to computers with wall-sized displays and an input area as large as can be accommodated by motion-sensing technologies such as cameras. Recent work has included implementing and testing our stroke recog...

  6. Evolution of Computer Virus Concealment and Anti-Virus Techniques: A Short Survey


    Rad, Babak Bashari; Masrom, Maslin; Ibrahim, Suhaimi


    This paper presents a general overview on evolution of concealment methods in computer viruses and defensive techniques employed by anti-virus products. In order to stay far from the anti-virus scanners, computer viruses gradually improve their codes to make them invisible. On the other hand, anti-virus technologies continually follow the virus tricks and methodologies to overcome their threats. In this process, anti-virus experts design and develop new methodologies to make them stronger, mo...

  7. Computer Aided Measurement Laser (CAML): technique to quantify post-mastectomy lymphoedema (United States)

    Trombetta, Chiara; Abundo, Paolo; Felici, Antonella; Ljoka, Concetta; Di Cori, Sandro; Rosato, Nicola; Foti, Calogero


    Lymphoedema can be a side effect of cancer treatment. Eventhough several methods for assessing lymphoedema are used in clinical practice, an objective quantification of lymphoedema has been problematic. The aim of the study was to determine the objectivity, reliability and repeatability of the computer aided measurement laser (CAML) technique. CAML technique is based on computer aided design (CAD) methods and requires an infrared laser scanner. Measurements are scanned and the information describing size and shape of the limb allows to design the model by using the CAD software. The objectivity and repeatability was established in the beginning using a phantom. Consequently a group of subjects presenting post-breast cancer lymphoedema was evaluated using as a control the contralateral limb. Results confirmed that in clinical settings CAML technique is easy to perform, rapid and provides meaningful data for assessing lymphoedema. Future research will include a comparison of upper limb CAML technique between healthy subjects and patients with known lymphoedema.

  8. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013) (United States)

    Wang, Jianxiong


    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the workshop's activities. Further information on ACAT 2013 can be found at the workshop website. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  9. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques. (United States)

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M


    The different discrete transform techniques such as discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and MFCC techniques. Linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using MFCC technique for AD recognition has a great improvement for the system performance with small number of significant extracted features, as compared with the CAD system based on DCT, DST, DWT, and the hybrid combination methods of the different transform techniques. © The Author(s) 2015.
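
    A minimal sketch of the kind of DCT-based feature extraction the CAD system builds on, assuming a plain 1-D orthonormal DCT-II and a simple largest-magnitude coefficient selection; the paper's pipeline, including MFCCs and the SVM classifier, is considerably more involved.

```python
import math

def dct2(signal):
    """Orthonormal DCT-II of a 1-D sequence (one of the transforms the
    CAD system uses for feature extraction; written out directly rather
    than via a library for clarity)."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def top_k_features(signal, k):
    """Keep the k largest-magnitude DCT coefficients as a compact feature
    vector (a common selection heuristic, not necessarily the paper's)."""
    coeffs = dct2(signal)
    return sorted(coeffs, key=abs, reverse=True)[:k]
```

    Because the transform is orthonormal it preserves signal energy, so the largest-magnitude coefficients capture most of the signal with few features, which is exactly the dimensionality-reduction property these CAD systems exploit.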

  10. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N


    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers related with computing in physics research together, from both physics and computer science sides, and bring them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating the advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  11. Problems in equilibrium theory

    CERN Document Server

    Aliprantis, Charalambos D


    In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.
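
    A representative computation of the kind such problems require, finding the Walrasian equilibrium price of a small exchange economy, can be sketched as follows; the two-agent Cobb-Douglas economy below is a standard textbook example, not a problem taken from the book.

```python
def cd_excess_demand(p1, agents):
    """Excess demand for good 1 in a 2-good exchange economy with
    Cobb-Douglas consumers (good 2 is the numeraire, p2 = 1).
    Each agent is (alpha, e1, e2): expenditure share on good 1 and
    endowments of the two goods; demand for good 1 is alpha*wealth/p1."""
    z = 0.0
    for alpha, e1, e2 in agents:
        wealth = p1 * e1 + e2
        z += alpha * wealth / p1 - e1
    return z

def walras_price(agents, lo=1e-9, hi=1e9, iters=200):
    """Equilibrium relative price of good 1 by bisection on excess demand,
    which is monotone decreasing in p1 for Cobb-Douglas preferences."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if cd_excess_demand(mid, agents) > 0:
            lo = mid       # good 1 over-demanded: raise its price
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    By Walras' law, clearing the market for good 1 clears the market for the numeraire as well, so a single root-find pins down the whole equilibrium.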

  12. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014) (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.


    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year's motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program

  13. Prediction of scour caused by 2D horizontal jets using soft computing techniques

    Directory of Open Access Journals (Sweden)

    Masoud Karbasi


    This paper presents the application of five soft-computing techniques, artificial neural networks, support vector regression, gene expression programming, group method of data handling (GMDH) neural networks and adaptive-network-based fuzzy inference systems, to predict the maximum scour hole depth downstream of a sluice gate. The input parameters affecting the scour depth are the sediment size and its gradation, apron length, sluice gate opening, jet Froude number and the tail water depth. Six non-dimensional parameters were derived to define a functional relationship between the input and output variables. Published data from experimental studies were used. The results of the soft-computing techniques were compared with empirical and regression-based equations and were found to be superior. Comparison of the soft-computing techniques showed that the accuracy of the ANN model is higher than that of the other models (RMSE = 0.869). A new GEP-based equation was proposed.

  14. Evaluating a Computer-Assisted Pronunciation Training (CAPT) Technique for Efficient Classroom Instruction (United States)

    Luo, Beate


    This study investigates a computer-assisted pronunciation training (CAPT) technique that combines oral reading with peer review to improve pronunciation of Taiwanese English major students. In addition to traditional in-class instruction, students were given a short passage every week along with a recording of the respective text, read by a native…

  15. Top-down/bottom-up description of electricity sector for Switzerland using the GEM-E3 computable general equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Krakowski, R. A


    Participation of the Paul Scherrer Institute (PSI) in the advancement and extension of the multi-region Computable General Equilibrium (CGE) model GEM-E3 (CES/KUL, 2002) focused primarily on two top-level facets: a) extension of the model database and model calibration, particularly as related to the second component of this study, which is b) advancement of the dynamics of innovation and investment, primarily through the incorporation of Exogenous Technical Learning (ETL) into the Bottom-Up (BU, technology-based) part of the dynamic upgrade; this latter activity also included the completion of the dynamic coupling of the BU description of the electricity sector with the 'Top-Down' (TD, econometric) description of the economy inherent to the GEM-E3 CGE model. The results of this two-component study are described in two parts that have been combined in this single summary report. Part I describes the methodology and gives illustrative results from the BU-TD integration, as well as describing the approach to, and giving preliminary results from, incorporating an ETL description into the BU component of the overall model. Part II reports on the calibration component of the task in terms of: a) formulating a BU technology database for Switzerland based on previous work; b) incorporating that database into the GEM-E3 model; and c) calibrating the BU database against the TD database embodied in the (Swiss) Social Accounting Matrix (SAM). The BU-TD coupling along with the ETL incorporation described in Part I represent the major effort embodied in this investigation, but this effort could not be completed without the calibration preamble reported herein as Part II. A brief summary of the scope of each of these key study components is given. (author)

  16. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing. (United States)

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav


    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters, and additional computational challenges in data mining and sequence analysis. Together these represent a significant burden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of resources and computation on a pay-as-you-go basis (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be made available publicly, being then termed a public cloud, with the resources provided in a virtual mode to clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud via the internet, corresponding to the computational and data storage needs of the user. The task is then performed, the results are transmitted to the user, and the environment is finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection, are discussed with reference to traditional workflows.

  17. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Directory of Open Access Journals (Sweden)

    Shafi'i Muhammad Abdulhamid

    Full Text Available Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  18. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm. (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid


    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
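    Of the baselines named above, MinMin is simple enough to sketch: repeatedly pick the unscheduled task whose earliest possible completion time is smallest and bind it to that machine; the makespan is the latest machine finish time. The task/machine timings below are made-up illustration data, not from the study:

```python
def min_min_schedule(exec_times):
    """MinMin heuristic. exec_times[t][m] = running time of task t on
    machine m. Returns (assignment dict task -> machine, makespan)."""
    n_machines = len(exec_times[0])
    ready = [0.0] * n_machines          # time at which each machine frees up
    unscheduled = set(range(len(exec_times)))
    assignment = {}
    while unscheduled:
        # pick the (task, machine) pair with the smallest completion time
        finish, t, m = min(
            (ready[m] + exec_times[t][m], t, m)
            for t in unscheduled for m in range(n_machines)
        )
        assignment[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return assignment, max(ready)
```

GBLCA and the other metaheuristics search over such assignments globally instead of committing greedily, which is how they obtain the shorter makespans reported.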

  19. Practical Applications of Evolutionary Computation to Financial Engineering Robust Techniques for Forecasting, Trading and Hedging

    CERN Document Server

    Iba, Hitoshi


    “Practical Applications of Evolutionary Computation to Financial Engineering” presents state-of-the-art techniques in financial engineering using recent results in machine learning and evolutionary computation. This book bridges the gap between academics in computer science and traders, and explains the basic ideas of the proposed systems and the financial problems in ways that can be understood by readers without previous knowledge of either field. To cement the ideas discussed in the book, software packages are offered that implement the systems described within. The book is structured so that each chapter can be read independently of the others. Chapters 1 and 2 describe evolutionary computation. The third chapter is an introduction to financial engineering problems for readers who are unfamiliar with this area. The following chapters each deal, in turn, with a different problem in the financial engineering field, describing each problem in detail and focusing on solutions based on evolutio...

  20. Patents of bio-active compounds based on computer-aided drug discovery techniques. (United States)

    Prado-Prado, Francisco; Garcia-Mera, Xerardo; Rodriguez-Borges, Jose Enrique; Concu, Riccardo; Perez-Montoto, Lazaro Guillermo; Gonzalez-Diaz, Humberto; Duardo-Sanchez, Aliuska


    In recent times, there has been an increased use of Computer-Aided Drug Discovery (CADD) techniques in medicinal chemistry as auxiliary tools in drug discovery. While the ultimate goal of medicinal chemistry research is the discovery of new drug candidates, a secondary yet important outcome is the creation of new computational tools. This process is often accompanied by a lack of understanding of the legal aspects related to software and model use, that is, the copyright protection of new medicinal chemistry software and software-mediated discovered products. At the center of this picture, which lies at the frontier of law, chemistry, and the biosciences, we find computational modeling-based drug discovery patents. This article aims to review prominent cases of patents of bio-active organic compounds that also involve/protect computational techniques. We put special emphasis on patents based on Quantitative Structure-Activity Relationships (QSAR) models, but we include other techniques too. An overview of relevant international issues in drug patenting is also presented.

  1. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Hsuan-Ming Huang


    Full Text Available Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
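    The core operation inside soft-threshold filtering is the standard shrinkage operator, the proximal map of the l1 norm that underlies CS-style total-difference minimization. A minimal sketch of that operator alone (the full TDM-STF iteration around it is not reproduced here):

```python
def soft_threshold(x, t):
    """Shrink a single coefficient toward zero by threshold t >= 0:
    values with |x| <= t are zeroed, larger ones are reduced by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def stf_filter(values, t):
    """Apply the shrinkage operator elementwise, as STF does to the
    difference (gradient) coefficients of the image estimate."""
    return [soft_threshold(v, t) for v in values]
```

In the reconstruction loop, this filtering step is interleaved with the OSTR data-fidelity updates, and FISTA-style momentum is layered on top to speed convergence.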

  2. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim


    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predicting capability and have found application in geotechnical engineering. The aim of this research is to utilize a soft computing technique and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test have been determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, an ANN model with all input parameters reveals better outcomes than the other ANN models.
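    The three goodness-of-fit measures used to rank the models (R2, RE% and MSE) have standard definitions that can be sketched directly; the example values below are hypothetical, not the study's data:

```python
def mse(actual, predicted):
    """Mean square error between observed and predicted CBR values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def relative_error_pct(actual, predicted):
    """Mean absolute relative error, expressed in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

Ranking candidate ANN and MLR models by these metrics on held-out samples is what supports the paper's conclusion that the all-input ANN model performs best.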

  3. Sputter behaviour of vanadium silicide studied by a computer simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, B.; Zehe, A.; Samoylov, V.N.


    A detailed study of the sputter yield of VSi2 has been carried out by a computer simulation technique. Krypton and neon ions in the energy range of 200 - 5000 eV are used, and the angle of incidence on the (0001) surface of VSi2 is taken to be 90°. The dynamical atom block considered in the computer program consists of 397 atoms situated in 5 equally spaced layers. Both amorphous and crystalline targets are discussed, which are expected to yield different outcomes for the sputter yield as well as for preferential and directional sputtering. Theoretical results are in good accord with experimental findings.

  4. New phenomena in non-equilibrium quantum physics (United States)

    Kitagawa, Takuya

    From its beginning in the early 20th century, quantum theory has become progressively more important, especially due to its contributions to the development of technologies. Quantum mechanics is crucial for current technology such as semiconductors, and also holds promise for future technologies such as superconductors and quantum computing. Despite the success of quantum theory, its applications have been mostly limited to equilibrium or static systems, due to (1) a lack of experimental controllability of non-equilibrium quantum systems and (2) a lack of theoretical frameworks to understand non-equilibrium dynamics. Consequently, physicists had discovered few interesting phenomena in non-equilibrium quantum systems from both theoretical and experimental points of view, and non-equilibrium quantum physics attracted little attention. The situation has recently changed due to the rapid development of experimental techniques in condensed matter as well as cold atom systems, which now enable better control of non-equilibrium quantum systems. Motivated by this experimental progress, we constructed theoretical frameworks to study three different non-equilibrium regimes: transient dynamics, steady states and periodic driving. These frameworks provide new perspectives on dynamical quantum processes, and help to discover new phenomena in these systems. In this thesis, we describe these frameworks through explicit examples and demonstrate their versatility. Some of these theoretical proposals have been realized in experiments, confirming the applicability of the theories to realistic experimental situations. These studies have led not only to an improved fundamental understanding of non-equilibrium processes in quantum systems, but also to entirely new avenues for developing quantum technologies.

  5. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name


    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  6. Soft Computing Techniques for Mutual Coupling Reduction in Metamaterial Antenna Array

    Directory of Open Access Journals (Sweden)

    Balamati Choudhury


    Full Text Available Application of soft computing techniques for various metamaterial designs and optimizations is an emerging field in the microwave regime. In this paper, a global optimization technique, namely particle swarm optimization (PSO), is used for the design and optimization of a square split ring resonator (SSRR) having a resonant frequency of 2.4 GHz. The PSO optimizer yields the structural parameters, which are further simulated and validated against the optimized values. This optimized structure results in mutual coupling reduction in a microstrip antenna array designed for wireless applications.
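    The PSO loop itself is generic: particles move under inertia plus attraction to their personal best and the global best. A minimal sketch; in the paper the cost function would score SSRR geometries against the 2.4 GHz target via an electromagnetic model, whereas the demo below uses a toy sphere function, and all parameter values are illustrative defaults:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer for a cost function f over
    [lo, hi]^dim. Returns (best position, best cost)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For the SSRR design, each particle position would encode the resonator's structural parameters (ring size, gap, trace width), and the returned global best is the geometry handed to the full-wave simulator for validation.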

  7. A study of electricity planning in Thailand: An integrated top-down and bottom-up Computable General Equilibrium (CGE) modeling analysis (United States)

    Srisamran, Supree

    This dissertation examines the potential impacts of three electricity policies on the economy of Thailand in terms of macroeconomic performance, income distribution, and unemployment rate. The three considered policies feature responses to potential disruption of imported natural gas used in electricity generation, alternative combinations (portfolios) of fuel feedstock for electricity generation, and increases in investment and local electricity consumption. The evaluation employs a Computable General Equilibrium (CGE) approach, extended with an electricity generation and transmission module, to simulate the counterfactual scenario for each policy. The dissertation consists of five chapters. Chapter one begins with a discussion of Thailand's economic condition, followed by a discussion of the current state of electricity generation and consumption and current issues in power generation. The security of imported natural gas in power generation is then briefly discussed: disruptions of imported natural gas have repeatedly caused trouble for the country, yet the economic consequences of these disruptions have not been evaluated. The current portfolio of power generation and the concerns it raises are then presented; it is heavily reliant upon natural gas and so needs to be diversified. Lastly, the anticipated increase in investment and electricity consumption as a consequence of regional integration is discussed. Chapter two introduces the CGE model, its background and limitations. Chapter three reviews relevant literature on the CGE method and its application to electricity policies. In addition, the submodule characterizing the network of electricity generation and distribution and the method of its integration with the CGE model are explained. Chapter four presents the findings of the policy simulations. The first simulation illustrates the consequences of responses to disruptions in natural gas imports.

  8. Application of Computer Techniques in Correcting Mild Zygomatic Asymmetry With Unilateral Reduction Malarplasty. (United States)

    Zou, Chong; Niu, Feng; Liu, Jian-Feng; Yu, Bing; Chen, Ying; Wang, Meng; Gui, Lai


    Zygomatic asymmetry is common in the population and often requires surgical correction for aesthetic reasons. Previously, surgeons performed the surgery based on their personal experience and visual evaluation. The purpose of this study was to apply computer techniques in patients with mild zygomatic asymmetry treated with unilateral reduction malarplasty, in order to improve surgical accuracy and reduce operative risks. The authors used computer techniques to plan osteotomies, to produce a surgical template, and to evaluate the surgical outcome. Postoperative follow-up demonstrated that zygomatic asymmetry was corrected in all patients without complications. The proposed methodology was considered helpful in improving surgical accuracy and efficiency in the treatment of zygomatic asymmetry, while greatly minimizing operative risk.

  9. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi


    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned to better reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data are used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation m...

  10. Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques

    Directory of Open Access Journals (Sweden)

    O. P. Bharti


    Full Text Available This manuscript illustrates the controller design for a doubly fed induction generator (DFIG) based variable speed wind turbine using a bio-inspired scheme. The methodology exploits two proficient swarm-intelligence-based evolutionary soft computational procedures: the particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller designs for the DFIG-based WECS using PSO and BFO are described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active-reactive power, and DC-link voltage are slightly improved with the evolutionary soft computational procedures. Lastly, the obtained output is compared with a standard technique for performance improvement of the DFIG-based wind energy conversion system.

  11. Auditors’ Usage of Computer Assisted Audit Tools and Techniques: Empirical Evidence from Nigeria


    Appah Ebimobowei; G.N. Ogbonna; Zuokemefa P. Enebraye


    This study examines the use of computer-assisted audit tools and techniques in audit practice in the Niger Delta of Nigeria. To achieve this objective, data were collected from primary and secondary sources. The secondary sources were scholarly books and journals, while the primary source involved a well-structured questionnaire of three sections totalling thirty-seven items, with an average reliability of 0.838. The data collected from the questionnaire were analyzed using relevant descriptive statist...
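    A reliability figure like the 0.838 reported for the questionnaire sections is typically a Cronbach's alpha coefficient; the computation itself is short. A sketch under that assumption, with a made-up response matrix for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one questionnaire section.
    items[i][r] = score of respondent r on item i (same respondents for
    every item). Uses population variances throughout."""
    k = len(items)                      # number of items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(it) for it in items)
    # total score of each respondent across all items
    totals = [sum(it[r] for it in items) for r in range(len(items[0]))]
    return k / (k - 1) * (1.0 - item_var / variance(totals))
```

Alpha approaches 1 when items vary together (internally consistent) and falls toward 0 when they vary independently.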

  12. Computer-assisted total knee arthroplasty using mini midvastus or medial parapatellar approach technique


    Feczko, Peter; Engelmann, Lutz; Arts, Jacobus J.; Campbell, David


    Background Despite the growing evidence in the literature, there is still a lack of consensus regarding the use of minimally invasive surgical technique (MIS) in total knee arthroplasty (TKA). Methods A prospective, randomized, international multicentre trial including 69 patients was performed to compare computer-assisted TKA (CAS-TKA) using either the mini-midvastus (MIS group) or the standard medial parapatellar approach (conventional group). Patients from 3 centers (Maastricht, Zwickau, Adelaide) ...



    Dr. A. Amsavalli; D. Vigneshwaran; M. Lavanya; S. Vijayaraj


    This paper analyses the performance of soft computing techniques in solving single- and multi-area economic dispatch problems. The paper also includes inter-area flow constraints on the power system network, which are normally ignored in most economic load dispatch problems. Economic load dispatch results for two-area, three-area and four-area systems are presented in the paper, and they demonstrate the importance of multiple-area representation of a system in economic load dispatch. Such repr...
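    The single-area problem that these soft computing methods generalize has a classical textbook baseline: with quadratic cost units and no limits or losses, all units run at equal incremental cost, and the system lambda can be found by bisection. A sketch of that baseline (the unit cost coefficients and demand below are invented for illustration; the paper's multi-area, flow-constrained problem is harder):

```python
def economic_dispatch(units, demand, tol=1e-9):
    """Equal-incremental-cost (lambda-iteration) dispatch for units with
    quadratic costs C_i(P) = a_i + b_i * P + c_i * P**2, ignoring
    generator limits, inter-area flows and losses.
    units: list of (a, b, c). Returns the per-unit outputs."""
    lo, hi = 0.0, 1e4                    # bracket for the system lambda
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        # at incremental cost lam, unit i produces P_i = (lam - b_i) / (2 c_i)
        total = sum((lam - b) / (2.0 * c) for _, b, c in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2.0
    return [(lam - b) / (2.0 * c) for _, b, c in units]
```

Adding inter-area transfer limits breaks the simple equal-lambda condition, which is why the multi-area formulations in the paper turn to soft computing methods.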

  14. Synopsis of Soft Computing Techniques used in Quadrotor UAV Modelling and Control

    Directory of Open Access Journals (Sweden)

    Attila Nemes


    Full Text Available The aim of this article is to give an introduction to quadrotor systems with an overview of soft computing techniques used in quadrotor unmanned aerial vehicle (UAV) control, modelling, object following and collision avoidance. The quadrotor system basics, its structure and dynamic model definitions are recapitulated. Further on, a synopsis is given of previously proposed methods, with results evaluated and conclusions drawn by the authors of the referenced publications. The result of this article is a summary of multiple papers on fuzzy logic techniques used in position and altitude control systems for UAVs. An overview of fuzzy-system-based visual servoing for object tracking and collision avoidance is also given, together with a briefing on a study of the efficiency of quadrotor UAV control techniques. The conclusion is that though soft computing methods are widely used with good results, there is still room for much research on finding more efficient soft computing tools for simple modelling, robust dynamic control and fast collision avoidance in quadrotor UAV control.

  15. Application of soft computing techniques in coastal study – A review

    Directory of Open Access Journals (Sweden)

    G.S. Dwarakish


    Full Text Available The coastal zone is the triple interface of air, water and land, and it is so dynamic in nature that it requires expeditious management for its protection. Impulsive change in the shoreline and submergence of low-lying areas due to sea level rise are serious issues that need to be addressed. The Indian coastline of about 7516 km is under threat due to global warming and related human interventions. Remote sensing data products provide a synoptic and repetitive view of the earth in various spatial, spectral, temporal and radiometric resolutions. Hence, they can be used for monitoring coastal areas on a temporal scale. Critical erosion hotspots have to be given proper protection measures to avoid further damage. Satellite images serve to delineate the shoreline and extract the hotspots in order to plan mitigation works. Coastal inundation maps can be created using remote sensing and geospatial technologies by assuming different sea level rises; those maps can serve as a base for planning management activities. Fuzzy Logic, Artificial Neural Networks, Genetic Algorithms and Support Vector Machines are upcoming soft computing algorithms that find application in classification, regression, pattern recognition, etc., across multi-disciplinary sciences. They can be used to classify remote sensing images, which in turn can be used for studying coastal vulnerability. The present paper reviews the work carried out on coastal study using conventional remote sensing techniques and the pertinence of soft computing techniques for the same.

  16. Computation of electrostatic fields in anisotropic human tissues using the Finite Integration Technique (FIT)

    Directory of Open Access Journals (Sweden)

    V. C. Motresc


    Full Text Available The exposure of the human body to electromagnetic fields has in recent years become a matter of great interest for scientists working in the areas of biology and biomedicine. Due to the difficulty of performing measurements, accurate models of the human body, in the form of a computer data set, are used for computing the fields inside the body with numerical methods such as the one used for our calculations, namely the Finite Integration Technique (FIT). A fact that has to be taken into account when computing electromagnetic fields in the human body is that some tissue classes, i.e. cardiac and skeletal muscles, have higher electrical conductivity and permittivity along fibers than across them. This property leads to diagonal conductivity and permittivity tensors only when they are expressed in a local coordinate system, while in a global coordinate system they become full tensors. The Finite Integration Technique (FIT) in its classical form can handle diagonally anisotropic materials quite effectively, but it needed an extension for handling fully anisotropic materials. New electric voltages were placed on the grid, and a new method for averaging conductivity and permittivity on the grid was found. In this paper, we present results from electrostatic computations performed with the extended version of FIT for fully anisotropic materials.
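    The local-to-global transformation described above is a plain similarity transform: sigma_global = R * diag(sigma_local) * R^T, which is generally a full (but symmetric) 3x3 tensor whenever the fiber axes are not aligned with the global axes. A minimal sketch (the conductivity values and rotations are illustrative, not tissue data):

```python
def rotate_tensor(sigma_local, R):
    """Express a diagonal local conductivity tensor in global coordinates.
    sigma_local = (along-fiber, cross-fiber, cross-fiber) conductivities;
    R = 3x3 rotation whose columns are the local fiber axes in the global
    frame. Returns the full 3x3 tensor R * diag(sigma_local) * R^T."""
    g = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            # (R D R^T)_ij = sum_k R_ik * sigma_k * R_jk
            g[i][j] = sum(R[i][k] * sigma_local[k] * R[j][k] for k in range(3))
    return g
```

For a fiber direction rotated 45 degrees from the global x-axis, the off-diagonal entries become nonzero, which is exactly the fully anisotropic case the extended FIT had to accommodate.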

  17. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques (United States)

    Wu, Andy


    Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Also, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper, the system error model of a fictitious linear frequency synthesizer is developed, and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from known system transfer functions and known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-offs and troubleshooting.
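    The PSD-to-Allan-variance step rests on the standard relation sigma_y^2(tau) = 2 * integral of S_y(f) * sin^4(pi f tau) / (pi f tau)^2 over f from 0 to infinity, for a one-sided fractional-frequency PSD S_y(f). A sketch using a plain trapezoidal sum up to a cutoff (cutoff and step are illustrative choices, and the transfer-function shaping of the synthesizer PSD is not reproduced here):

```python
import math

def allan_variance_from_psd(S_y, tau, f_max=500.0, df=0.01):
    """Approximate sigma_y^2(tau) from a one-sided fractional-frequency
    PSD S_y(f) via the sin^4 transfer-function integral, truncated at
    f_max and evaluated with trapezoidal steps of width df."""
    total = 0.0
    prev = 0.0                       # integrand tends to 0 as f -> 0
    f = df
    while f <= f_max:
        u = math.pi * f * tau
        cur = S_y(f) * math.sin(u) ** 4 / u ** 2
        total += 0.5 * (prev + cur) * df
        prev = cur
        f += df
    return 2.0 * total
```

As a sanity check, white frequency noise S_y(f) = h0 gives the textbook result sigma_y^2(tau) = h0 / (2 tau), which the numerical integral reproduces closely.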

  18. A technique for quantifying wrist motion using four-dimensional computed tomography: approach and validation. (United States)

    Zhao, Kristin; Breighner, Ryan; Holmes, David; Leng, Shuai; McCollough, Cynthia; An, Kai-Nan


    Accurate quantification of subtle wrist motion changes resulting from ligament injuries is crucial for diagnosis and for prescription of the most effective interventions to prevent progression to osteoarthritis. Current imaging techniques are unable to detect injuries reliably and are static in nature, thereby capturing bone position information rather than the motion that is indicative of ligament injury. A recently developed technique, 4D (three dimensions + time) computed tomography (CT), enables three-dimensional volume sequences to be obtained during wrist motion. The next step in successful clinical implementation of the tool is quantification and validation of imaging biomarkers obtained from the four-dimensional computed tomography (4DCT) image sequences. Measures of bone motion and joint proximity are obtained by segmenting bone volumes in each frame of the dynamic sequence, registering their positions relative to a known static posture, and generating surface polygonal meshes from which minimum distance (proximity) measures can be quantified. Method accuracy was assessed during in vitro simulated wrist movement by comparing a fiducial bead-based determination of bone orientation to a bone-based approach. The reported errors for the 4DCT technique were 0.00-0.68 deg in rotation and 0.02-0.30 mm in translation. Results are on the order of the reported accuracy of other image-based kinematic techniques.
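    The proximity measure at the end of that pipeline reduces to a nearest-point distance between two bone surfaces. A deliberately naive brute-force sketch over mesh vertices (real implementations use spatial data structures and point-to-triangle distances; the coordinates below are invented):

```python
def min_proximity(points_a, points_b):
    """Minimum Euclidean distance between two bones, each represented by
    a list of (x, y, z) surface-mesh vertices. O(len(a) * len(b))."""
    best = float("inf")
    for ax, ay, az in points_a:
        for bx, by, bz in points_b:
            d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            if d2 < best:
                best = d2
    return best ** 0.5
```

Tracking this number frame-by-frame through the 4DCT sequence is what turns the static meshes into a dynamic joint-proximity biomarker.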

  19. Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique

    Directory of Open Access Journals (Sweden)

    Nur Azzammudin Rahmat


    Economic dispatch is a crucial process conducted by utilities to determine the amount of power to be generated and distributed to consumers. During the process, utilities also consider pollutant emissions as a consequence of fossil-fuel consumption. Fossil fuels include petroleum, coal, and natural gas; each has its unique chemical composition of pollutants, i.e. sulphur oxides (SOx), nitrogen oxides (NOx), and carbon oxides (COx). This paper presents a multi-pollutant emission dispatch problem solved using a computational intelligence technique. In this study, a novel emission dispatch formulation is used to determine the pollutant levels. It utilizes a previously developed optimization technique termed differential evolution immunized ant colony optimization (DEIANT) for the emission dispatch problem. The optimization results indicated a high COx level regardless of the type of fossil fuel consumed.
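As a rough sketch of the kind of combined economic/emission dispatch problem described above (not the DEIANT algorithm itself, which is not reproduced here), a weighted-sum formulation can be solved with a generic constrained optimizer; all generator coefficients, the demand, and the emission-to-cost scaling are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-generator data: quadratic fuel-cost and emission curves.
a = np.array([0.008, 0.009, 0.007])     # $/MW^2
b = np.array([7.0, 6.3, 6.8])           # $/MW
e = np.array([0.0012, 0.0010, 0.0015])  # ton/MW^2, lumped SOx/NOx/COx proxy
demand = 300.0                          # MW to be dispatched

def objective(P, w=0.5):
    cost = np.sum(a * P ** 2 + b * P)
    emission = np.sum(e * P ** 2)
    return float(w * cost + (1 - w) * 1000.0 * emission)  # scale tons to $

cons = {"type": "eq", "fun": lambda P: float(np.sum(P) - demand)}
bounds = [(10.0, 200.0)] * 3            # generator limits in MW
res = minimize(objective, x0=[100.0] * 3, bounds=bounds, constraints=cons)
```

Metaheuristics such as DEIANT attack the same objective but are better suited to non-convex cost curves (valve-point effects, prohibited zones) where gradient-based solvers can stall.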

  20. A data mining technique for discovering distinct patterns of hand signs: implications in user training and computer interface design. (United States)

    Ye, Nong; Li, Xiangyang; Farley, Toni


    Hand signs are an important way to enter information into computers for certain tasks; computers receive sensor data of hand signs for recognition. When using hand signs as computer inputs, we need to (1) train computer users in the sign language so that their hand signs can be easily recognized by computers, and (2) design the computer interface to avoid the use of confusable signs, improving user input performance and user satisfaction. For user training and computer interface design, it is important to know which signs can be easily recognized by computers and which signs computers cannot distinguish. This paper presents a data mining technique to discover distinct patterns of hand signs from sensor data. Based on these patterns, we derive groups of signs that are indistinguishable to computers. Such information can in turn assist in user training and computer interface design.

  1. Maximum a posteriori estimation for SPECT using regularization techniques on massively parallel computers. (United States)

    Butler, C S; Miller, M I


    Single photon emission computed tomography (SPECT) reconstructions performed using maximum a posteriori (penalized likelihood) estimation with the expectation maximization algorithm are discussed. Because of the large number of computations, the algorithms were run on a massively parallel single-instruction multiple-data computer. Computation times for 200 iterations, using I.J. Good and R.A. Gaskins's (1971) roughness as a rotationally invariant roughness penalty, are shown to be on the order of 5 min for a 64x64 image with 96 view angles on an AMT-DAP 4096-processor machine and 1 min on a MasPar 4096-processor machine. Computer simulations performed using parameters for the Siemens gamma camera and clinical brain scan parameters are presented to compare two regularization techniques, regularization by kernel sieves and penalized likelihood with Good's rotationally invariant roughness measure, with filtered backprojection. Twenty-five independent sets of data were reconstructed for the pie and Hoffman brain phantoms. The average variance and average deviation are examined in various areas of the brain phantom. It is shown that, while the geometry of the area examined greatly affects the observed results, in all cases the reconstructions using Good's roughness give superior variance and bias results compared with the two alternative methods.
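The core of the reconstruction described above is the multiplicative EM (MLEM) update; the paper adds a roughness penalty on top of it. A minimal unpenalized sketch on a toy system matrix (dimensions and data are invented, and noise-free projections are used so convergence is easy to check):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 16)) + 0.1      # toy system matrix: 40 detector bins x 16 pixels
x_true = rng.random(16) + 0.5       # toy activity image
y = A @ x_true                      # noise-free projection data for the sketch

x = np.ones(16)                     # flat initial estimate (must be positive)
sens = A.sum(axis=0)                # sensitivity image (column sums)
for _ in range(500):
    # classic MLEM multiplicative update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / (A @ x))) / sens
resid = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

A MAP variant such as Good's-roughness penalized likelihood modifies the denominator with the penalty gradient (e.g. Green's one-step-late scheme); the embarrassingly parallel per-pixel structure of the update is what maps it onto SIMD machines.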

  2. Now And Next Generation Sequencing Techniques: Future of Sequence Analysis using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Radhe Shyam Thakur


    Advancements in sequencing techniques have resulted in huge volumes of sequence data being produced at an ever faster rate, and maintaining the resulting databases is becoming cumbersome for data centers. Data mining and sequence analysis approaches need to scan the databases several times to reach any useful conclusion. To cope with this burden on computing resources and to reach effective conclusions quickly, the virtualization of resources and computation on a pay-as-you-go basis was introduced, termed cloud computing. A data center's hardware and software are collectively known as a cloud, which, when available publicly, is termed a public cloud. The data center's resources are provided in virtual form to clients via service providers such as Amazon, Google, and Joyent, which charge on a pay-as-you-go basis. The workload is shifted to the provider, which maintains the required hardware and software upgrades. Essentially, a virtual environment is created according to the user's needs by requesting it from the data center via the Internet, the task is performed, and the environment is deleted after the task is complete. In this discussion, we focus on the basics of cloud computing, its prerequisites, and the overall working of clouds. Furthermore, we briefly discuss the applications of cloud computing in biological systems, especially in comparative genomics, genome informatics, and SNP detection, with reference to traditional workflows.

  3. Acceleration of FDTD mode solver by high-performance computing techniques. (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping


    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when the conventional eigenvalue mode solvers are no longer applicable due to memory limitation.
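The per-cell locality that makes FDTD map so well onto GPUs can be seen even in a minimal 1-D sketch: every field update reads only a cell and its neighbour, so all cells can be updated concurrently. This illustrates the FDTD update structure in general, not the paper's 2-D compact mode solver:

```python
import numpy as np

# Minimal 1-D vacuum FDTD leapfrog (Yee) update with a soft Gaussian source.
nz, nt, S = 400, 300, 0.5           # grid cells, time steps, Courant number
Ex = np.zeros(nz)                   # electric field at integer grid points
Hy = np.zeros(nz)                   # magnetic field at half grid points
for n in range(nt):
    Hy[:-1] += S * (Ex[1:] - Ex[:-1])    # each cell touches only its neighbour,
    Ex[1:]  += S * (Hy[1:] - Hy[:-1])    # so both sweeps vectorize/parallelize
    Ex[nz // 2] += np.exp(-((n - 40) / 12.0) ** 2)
```

In a CUDA implementation each of these vectorized sweeps becomes one kernel launch with one thread per cell; mode extraction (e.g. via the matrix pencil method) then post-processes the recorded time series.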

  4. Impuestos al capital y al trabajo en Colombia: un análisis mediante equilibrio general computable Effect of Taxes on Capital and Labor in Colombia: A Computable General Equilibrium Analysis

    Directory of Open Access Journals (Sweden)

    Jesús Botero Garcia


    Using a computable general equilibrium model calibrated for Colombia, we analyze the impact of various economic policies that affect the relative prices of production factors. We conclude that investment incentives, which can be interpreted as actions that decrease the cost of capital, nevertheless encourage capital accumulation and thereby increase labour productivity, generating net positive effects on employment. The elimination of payroll taxes, for its part, reduces the cost of labour, but its overall effect on employment is partially offset by the fiscal measures needed to generate alternative revenue to maintain the benefits associated with those contributions. We suggest that the ideal scheme would combine investment incentives focused on employment-intensive sectors with adequate social protection networks to address the problems associated with poverty.

  5. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)


    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient was down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries were used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, down-scaling using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
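The two simplest down-scaling strategies compared above can be sketched as follows: 'winner-takes-all' assigns each coarse voxel the dominant tissue label of its fine-grid block, while 'volumetric averaging' averages the dielectric values over the block. Array shapes and the factor-of-2 demo in the test are illustrative:

```python
import numpy as np

def winner_takes_all(labels, f):
    # each coarse voxel gets the most frequent fine-grid tissue label
    sx, sy, sz = (s // f for s in labels.shape)
    out = np.empty((sx, sy, sz), dtype=labels.dtype)
    for i in range(sx):
        for j in range(sy):
            for k in range(sz):
                block = labels[i*f:(i+1)*f, j*f:(j+1)*f, k*f:(k+1)*f]
                vals, counts = np.unique(block, return_counts=True)
                out[i, j, k] = vals[np.argmax(counts)]
    return out

def volumetric_average(eps, f):
    # each coarse voxel gets the mean dielectric value of its fine-grid block
    sx, sy, sz = (s // f for s in eps.shape)
    return eps.reshape(sx, f, sy, f, sz, f).mean(axis=(1, 3, 5))
```

The anisotropic variant found best in the study additionally averages conductivity differently along and across the dominant field direction, which neither of these simple kernels captures.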

  6. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability? (United States)

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K


    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine whether computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients (ICCs) for intra- and interobserver reliability were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or >10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver ICCs were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for the digitized manual technique.
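The Shrout-Fleiss intraclass correlation reported above can be computed from a two-way ANOVA decomposition of the target-by-rater score matrix. A minimal sketch of the single-measures ICC(2,1) form (the abstract does not specify which Shrout-Fleiss variant the study used, so this particular form is an assumption):

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    Y is an (n targets) x (k raters) score matrix."""
    n, k = Y.shape
    grand = Y.mean()
    row_m = Y.mean(axis=1)                      # per-target means
    col_m = Y.mean(axis=0)                      # per-rater means
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((row_m - grand) ** 2).sum()  # between-targets
    ss_cols = n * ((col_m - grand) ** 2).sum()  # between-raters
    ss_err = ss_total - ss_rows - ss_cols       # residual (interaction + error)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect rater agreement yields 1.0, while a systematic per-rater offset (absolute-agreement violation) pulls the coefficient below 1 even with zero residual error.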

  7. Monte Carlo simulation in proton computed tomography: a study of image reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Inocente, Guilherme Franco; Stenico, Gabriela V.; Hormaza, Joel Mesa [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Inst. de Biociencias. Dept. de Fisica e Biofisica


    Radiation is one of the most widely used methods of cancer treatment, and in this context proton-beam therapy has emerged as an alternative to conventional radiotherapy. Proton therapy offers advantages to the treated patient compared with more conventional methods: the dose deposited along the beam path, especially in healthy tissues neighboring the tumor, is smaller, and the accuracy of treatment is much better. To carry out the treatment, the patient undergoes planning based on images used for visualization and localization of the target volume. The main method for obtaining these images is X-ray computed tomography (XCT). For treatment with a proton beam, this imaging technique can generate some uncertainties. The purpose of this project is to study the feasibility of reconstructing images generated from irradiation with proton beams, thereby reducing some of these inaccuracies, since planning would use the same type of radiation as the treatment; it would also drastically reduce localization errors, since planning could be done at the same place, just before the patient is treated. This study aims to obtain a relationship between the intrinsic properties of photon and proton interactions with matter. For this we use computational simulation based on the Monte Carlo method, with the codes SRIM 2008 and MCNPX v.2.5.0, to reconstruct images using the technique used in conventional computed tomography. (author)

  8. Comparison of the pain levels of computer-controlled and conventional anesthesia techniques in prosthodontic treatment

    Directory of Open Access Journals (Sweden)

    Murat Yenisey


    OBJECTIVE: The objective of this study was to compare pain levels on opposite sides of the maxilla at needle insertion, during delivery of local anesthetic solution, and at tooth preparation, for both the conventional technique and the anterior middle superior alveolar (AMSA) technique with the Wand computer-controlled local anesthesia system. MATERIAL AND METHODS: Pain scores of 16 patients were evaluated with a 5-point verbal rating scale (VRS) and the data were analyzed nonparametrically. Pain differences at needle insertion, during delivery of local anesthetic, and at tooth preparation, for the conventional versus the Wand technique, were analyzed using the Mann-Whitney U test (p=0.01). RESULTS: The Wand technique produced lower pain levels than conventional injection at needle insertion and during delivery of local anesthetic (p<0.01), whereas no significant difference was found at tooth preparation (p>0.05). CONCLUSIONS: The AMSA technique using the Wand is recommended for prosthodontic treatment because it reduces pain during needle insertion and during delivery of local anesthetic. However, the two techniques produce the same pain levels at tooth preparation.
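The nonparametric comparison described above amounts to a Mann-Whitney U test on ordinal VRS scores. A sketch with invented pain-score data (these are not the study's data, just two clearly separated 0-4 score samples of the same size as the study's patient count):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 5-point VRS pain scores (0 = no pain .. 4 = severe) for the
# two injection techniques, 16 patients each.
wand = np.array([0, 1, 0, 1, 1, 0, 2, 1, 0, 1, 1, 0, 1, 2, 0, 1])
conventional = np.array([2, 3, 1, 2, 3, 2, 2, 1, 3, 2, 2, 3, 1, 2, 2, 3])

stat, p = mannwhitneyu(wand, conventional, alternative="two-sided")
```

The U test is appropriate here because VRS scores are ordinal with many ties, which rules out parametric t-tests.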

  9. Application of Soft Computing Techniques to Experimental Space Plasma Turbulence Observations - Genetic Algorithms (United States)

    Bates, I.; Lawton, A.; Breikin, T.; Dunlop, M.

    Space Systems Group, University of Sheffield, U.K.; Automatic Control and Systems Engineering, University of Sheffield, U.K.; Imperial College, London, U.K.

    A Genetic Algorithm (GA) approach is presented to the problem of modelling a turbulent space plasma system in the form of Generalised Frequency Response Functions (GFRFs), using in-situ multi-satellite magnetic field measurements of the plasma turbulence. Soft computing techniques have been used in industry for many years for nonlinear system identification. These techniques approach the problem of understanding a system, e.g. a chemical plant or a jet engine, by selecting a model structure and fitting the parameters of the chosen model using measured inputs and outputs of the system; the fitted model can then be used to determine physical characteristics of the system. GAs are one such technique, providing essentially a series of candidate solutions that evolve so as to improve the model. Experimental space plasma turbulence studies have benefited from these system identification techniques: multi-point satellite observations provide input and output measurements of the turbulent plasma system. In previous work it was found natural to fit parameters to GFRFs, which derive from Volterra series and lead to quantitative measurements of linear wave-field growth and higher-order wave-wave interactions. In that previous work the parameters were fitted using a Least Squares (LS) approach. Results using GAs are compared to results obtained from the LS approach.

  10. Computed tomography-based virtual fracture reduction techniques in bimandibular fractures. (United States)

    Voss, Jan Oliver; Varjas, Viktor; Raguse, Jan-Dirk; Thieme, Nadine; Richards, R Geoff; Kamer, Lukas


    Computer-assisted preoperative planning (CAPP) usually relies on computed tomography (CT) or cone-beam CT (CBCT) and has become an established technique in craniomaxillofacial surgery. The purpose of this study was to implement CT-based virtual fracture reduction as a key planning feature in patients with bimandibular fractures. Nine routine preoperative CT scans of patients with bilateral mandibular fractures were acquired and post-processed using a mean model of the mandible and Amira software extended by custom-made scripting and programming modules. A computerized technique was developed that allowed three-dimensional modeling, separation of the mandible from the cranium, distinction of the fracture fragments, and virtual fracture reduction. User interaction was required to label the mandibular fragments with landmarks. Virtual fracture reduction was achieved by optionally using the landmarks or the contralateral unaffected side as anatomical references. We developed an effective technique for virtual fracture reduction of the mandible using a standard CT protocol. It offers expanded planning options for osteosynthesis construction or the manufacturing of personalized rapid prototyping guides in fracture reduction procedures. CAPP is justified in complex mandibular fractures and may be adopted in addition to routine preoperative CT assessment. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  11. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    Directory of Open Access Journals (Sweden)

    Sadik Kamel Gharghan


    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance-estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.

  12. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications. (United States)

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod


    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance-estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
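The classical range-based baseline that such soft-computing models refine is the log-distance path-loss inversion of RSSI. A sketch with assumed reference values (`rssi_d0`, `d0`, and the path-loss exponent `n` are illustrative defaults, not the paper's calibration):

```python
def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.0):
    """Invert the log-distance path-loss model:
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    rssi_d0: RSSI measured at reference distance d0 (dBm);
    n: path-loss exponent (~2 in free space, higher indoors)."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))
```

ANN/ANFIS estimators replace this closed form with a learned mapping, which lets them absorb the multipath and body-shadowing effects that make a single exponent `n` a poor fit inside a velodrome.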

  13. A New Computational Technique for the Generation of Optimised Aircraft Trajectories (United States)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto


    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented, which allows the efficient solution of problems in which two or more performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
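The ɛ-constraint idea, minimizing one objective while bounding the other and then sweeping the bound to trace the Pareto front, can be sketched on a toy bi-objective problem. This illustrates the plain method only, not the adaptive bisection variant or the pseudospectral transcription:

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem on [0, 2]: f1 = x^2 and f2 = (x - 2)^2 conflict,
# so every x in [0, 2] is Pareto-optimal.
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 2.0) ** 2)

pareto = []
for eps in np.linspace(0.0, 4.0, 9):      # sweep the bound on f2
    res = minimize(f1, x0=[1.0], bounds=[(0.0, 2.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))   # one Pareto point per eps
```

The adaptive bisection variant chooses the next ɛ from the gaps in the front found so far, concentrating solver calls where the trade-off curve bends most.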

  14. Data mining technique for a secure electronic payment transaction using MJk-RSA in mobile computing (United States)

    G. V., Ramesh Babu; Narayana, G.; Sulaiman, A.; Padmavathamma, M.


    With the evolution of Electronic Learning (E-Learning), one can easily obtain desired information on a computer or mobile system connected to the Internet. Currently, E-Learning materials are easily accessible on desktop computer systems, but in the future most of this information will also be available on small digital devices such as mobile phones and PDAs. Most E-Learning materials are paid content, and the customer has to pay the entire amount through a credit/debit card system. It is therefore very important to study the security of credit/debit card numbers. The present paper is an attempt in this direction: a security technique is presented to secure the credit/debit card numbers supplied over the Internet to access E-Learning materials or to make any kind of purchase through the Internet. A well-known method, the Data Cube Technique, is used to design the security model of the credit/debit card system. The major objective of this paper is to design a practical electronic payment protocol that provides the safest and most secure mode of transaction. This technique may reduce fraudulent transactions, which exceed 20% at the global level.

  15. Binding equilibrium and kinetics of membrane-anchored receptors and ligands in cell adhesion: Insights from computational model systems and theory (United States)

    Weikl, Thomas R.; Hu, Jinglei; Xu, Guang-Kui; Lipowsky, Reinhard


    The adhesion of cell membranes is mediated by the binding of membrane-anchored receptor and ligand proteins. In this article, we review recent results from simulations and theory that lead to novel insights on how the binding equilibrium and kinetics of these proteins are affected by the membranes and by the membrane anchoring and molecular properties of the proteins. Simulations and theory both indicate that the binding equilibrium constant K2D and the on- and off-rate constants of anchored receptors and ligands in their 2-dimensional (2D) membrane environment strongly depend on the membrane roughness arising from thermally excited shape fluctuations on nanoscales. Recent theory corroborated by simulations provides a general relation between K2D and the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in 3 dimensions (3D). PMID:27294442

  16. Binding equilibrium and kinetics of membrane-anchored receptors and ligands in cell adhesion: Insights from computational model systems and theory. (United States)

    Weikl, Thomas R; Hu, Jinglei; Xu, Guang-Kui; Lipowsky, Reinhard


    The adhesion of cell membranes is mediated by the binding of membrane-anchored receptor and ligand proteins. In this article, we review recent results from simulations and theory that lead to novel insights on how the binding equilibrium and kinetics of these proteins are affected by the membranes and by the membrane anchoring and molecular properties of the proteins. Simulations and theory both indicate that the binding equilibrium constant K2D and the on- and off-rate constants of anchored receptors and ligands in their 2-dimensional (2D) membrane environment strongly depend on the membrane roughness arising from thermally excited shape fluctuations on nanoscales. Recent theory corroborated by simulations provides a general relation between K2D and the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in 3 dimensions (3D).

  17. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya


    Ubiquitous sensor network deployments, such as those found in smart cities and ambient intelligence applications, impose constantly increasing computational demands for processing data and offering services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures; however, supercomputing facilities are the ones with the higher economic and environmental impact, owing to their very high power consumption, and this problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
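A minimal greedy sketch of the workload-assignment idea: each task goes to the feasible node with the lowest incremental energy cost, so small tasks drain to efficient idle WSN nodes while heavy tasks stay in the data center. All task loads, node capacities, and per-unit power figures are invented for illustration; the paper's actual policy is heterogeneity- and application-aware, not this simple greedy rule:

```python
# (task name, computational load in arbitrary units)
tasks = [("sensor-fusion", 2.0), ("logging", 0.5), ("video-analytics", 40.0)]

# capacity, marginal power draw per load unit, and current load per node
nodes = {"datacenter": {"cap": 100.0, "watts_per_unit": 5.0, "load": 0.0},
         "wsn-node-1": {"cap": 4.0,   "watts_per_unit": 1.2, "load": 0.0},
         "wsn-node-2": {"cap": 4.0,   "watts_per_unit": 1.1, "load": 0.0}}

assignment = {}
for name, load in tasks:
    # nodes with enough spare capacity for this task
    feasible = {k: v for k, v in nodes.items() if v["cap"] - v["load"] >= load}
    # pick the one with the smallest incremental energy cost
    best = min(feasible, key=lambda k: feasible[k]["watts_per_unit"] * load)
    nodes[best]["load"] += load
    assignment[name] = best
```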

  18. From Experimental Approaches to Computational Techniques: A Review on the Prediction of Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Fiona Browne


    A crucial step towards understanding the properties of cellular systems in organisms is to map their network of protein-protein interactions (PPIs) on a proteome-wide scale as completely and accurately as possible. Uncovering the diverse functions of proteins and their interactions within the cell may improve our understanding of disease and provide a basis for the development of novel therapeutic approaches. The development of large-scale high-throughput experiments has produced a large volume of data that has aided in the uncovering of PPIs. However, these data are often erroneous and limited in interactome coverage. Therefore, additional experimental and computational methods are required to accelerate the discovery of PPIs. This paper provides a review of the prediction of PPIs, addressing key prediction principles and highlighting the common experimental and computational techniques currently employed to infer PPI networks, along with relevant studies in the area.

  19. Applying computer techniques in maxillofacial reconstruction using a fibula flap: a messenger and an evaluation method. (United States)

    Liu, Xiao-jing; Gui, Lai; Mao, Chi; Peng, Xin; Yu, Guang-yan


    While the application of computer-assisted maxillofacial surgery is becoming increasingly popular, the translation from virtual models and surgical plans to actual bedside maneuvers, and the evaluation of the repeatability of virtual planning, remain major challenges. The objective of this study was to test the technique of using a resin template as a messenger in maxillofacial reconstruction involving a fibula flap. Another aim was to find a quantitative and objective method to evaluate the repeatability of preoperative planning. Seven patients who underwent maxillary or mandibular reconstruction were included in this study. The mean age was 25 years, and the mean follow-up period was 18.7 months. Virtual planning was carried out before surgery. A resin template was made according to the virtual design of the bone graft through a rapid prototyping technique, and served as a guide when surgeons shaped the fibula flap during surgery. The repeatability of the virtual plan was evaluated based on the matching percentage between the actual postoperative model and the computer-generated outcome. All patients demonstrated satisfactory clinical outcomes. The mean repeatability was 87.5% within 1 mm and 96.5% within 2 mm for the isolated bone graft, and 71.4% within 1 mm and 89.9% within 2 mm for the reconstructed mandible or maxilla. These results demonstrated that a resin template based on a virtual plan and rapid prototyping is a reliable messenger for translating computer modeling into bedside surgical procedures. The repeatability of a virtual plan can be easily and quantitatively evaluated through our three-dimensional differential analysis method.
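The repeatability measure described above, the percentage of the postoperative model lying within 1 mm or 2 mm of the virtual plan, can be sketched on point clouds with a nearest-neighbour query; the grid data in the demo are invented, and real use would sample the surface meshes:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_percentage(model_pts, target_pts, tol):
    # share of model surface points within `tol` of the target surface
    d, _ = cKDTree(target_pts).query(model_pts)
    return 100.0 * float(np.mean(d <= tol))

# Demo: a 5x5x5 unit-spaced point grid shifted by 0.25 along z, so every
# point of the "postoperative model" sits exactly 0.25 from the "plan".
g = np.arange(5.0)
target = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T
model = target + np.array([0.0, 0.0, 0.25])
```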

  20. Computer Program for Calculation of Complex Chemical Equilibrium Compositions and Applications. Part 2: Users Manual and Program Description (United States)

    McBride, Bonnie J.; Gordon, Sanford


    This users manual is the second part of a two-part report describing the NASA Lewis CEA (Chemical Equilibrium with Applications) program. The program obtains chemical equilibrium compositions of complex mixtures with applications to several types of problems. The topics presented in this manual are: (1) details for preparing input data sets; (2) a description of output tables for various types of problems; (3) the overall modular organization of the program with information on how to make modifications; (4) a description of the function of each subroutine; (5) error messages and their significance; and (6) a number of examples that illustrate various types of problems handled by CEA and that cover many of the options available in both input and output. Seven appendixes give information on the thermodynamic and thermal transport data used in CEA; some information on common variables used in or generated by the equilibrium module; and output tables for 14 example problems. The CEA program was written in ANSI standard FORTRAN 77. CEA should work on any system with sufficient storage. There are about 6300 lines in the source code, which uses about 225 kilobytes of memory. The compiled program takes about 975 kilobytes.
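
As a toy illustration of the kind of problem CEA solves, the equilibrium composition of a single gas-phase reaction can be obtained by solving the law of mass action numerically. The reaction, the Kp value, and the bisection solver below are illustrative assumptions, not CEA's Gibbs-energy-minimization algorithm:

```python
def dissociation_extent(Kp, P, tol=1e-12):
    """Degree of dissociation alpha for N2O4 <-> 2 NO2 at total pressure P,
    from Kp = 4*alpha**2*P / (1 - alpha**2), solved by bisection on [0, 1).
    The left-hand side is monotonically increasing in alpha."""
    def f(a):
        return 4 * a * a * P / (1 - a * a) - Kp

    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# illustrative Kp near room temperature; alpha comes out around 0.19
alpha = dissociation_extent(Kp=0.148, P=1.0)
```

CEA itself handles arbitrary multi-species mixtures by minimizing Gibbs energy rather than solving one reaction at a time.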

  1. Computational time reduction for sequential batch solutions in GNSS precise point positioning technique (United States)

    Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando


    Precise point positioning (PPP) is a well established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver, or for a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those at previous GNSS epochs), as in convergence studies, computing a batch solution becomes very time consuming owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution for the current epoch. Filter implementations, however, require extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process reduced computation time by a mean of 45%.
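
The paper's specific method is not reproduced here, but the general idea of obtaining per-epoch batch solutions cheaply can be sketched by accumulating the normal equations epoch by epoch and solving only the small parameter-sized system each time, rather than re-forming the full design matrix. The function names and the toy line-fitting data are illustrative:

```python
def solve(N, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(N)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def sequential_batch(epochs):
    """Batch least-squares solution after every epoch, obtained by
    accumulating the normal equations (A^T A and A^T y) instead of
    re-forming and re-inverting the full design matrix each time."""
    n = len(epochs[0][0][0])            # number of parameters
    N = [[0.0] * n for _ in range(n)]   # accumulated A^T A
    b = [0.0] * n                       # accumulated A^T y
    solutions = []
    for A, y in epochs:
        for row, obs in zip(A, y):
            for i in range(n):
                b[i] += row[i] * obs
                for j in range(n):
                    N[i][j] += row[i] * row[j]
        solutions.append(solve(N, b))
    return solutions

# two epochs of observations of the line y = 2x + 1 (parameters: slope, offset)
epochs = [([[0.0, 1.0], [1.0, 1.0]], [1.0, 3.0]),
          ([[2.0, 1.0], [3.0, 1.0]], [5.0, 7.0])]
sols = sequential_batch(epochs)
```

Each per-epoch solve costs only O(n³) in the parameter count, independent of how many observations have accumulated.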

  2. Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique (United States)

    Li, Steven X. (Inventor)


    An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each return-to-zero signal includes one unique pattern from the plurality of unique patterns, distinguishing each of the transmitted signals from the others.
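
A common way to generate families of distinguishable pseudo-noise patterns is a linear-feedback shift register; different phases of a maximal-length LFSR sequence are easy to separate by correlation. This sketch is a generic PN generator, not the patented pattern generator described above:

```python
def lfsr_sequence(taps, seed, length):
    """Pseudo-noise bits from a Fibonacci LFSR.
    `taps` are 1-based feedback positions, `seed` a non-zero start state.
    With a primitive feedback polynomial the period is 2**n - 1."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:                 # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift the feedback bit in
    return out

# 4-bit LFSR with taps (4, 3): maximal-length sequence of period 15;
# shifted phases of it could label individual laser diodes
pn = lfsr_sequence(taps=(4, 3), seed=(1, 0, 0, 0), length=15)
```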

  3. A singularity extraction technique for computation of antenna aperture fields from singular plane wave spectra

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Breinbjerg, Olav; Frandsen, Aksel


    An effective technique for extracting the singularity of plane wave spectra in the computation of antenna aperture fields is proposed. The singular spectrum is first factorized into a product of a finite function and a singular function. The finite function is inverse Fourier transformed...... numerically using the Inverse Fast Fourier Transform, while the singular function is inverse Fourier transformed analytically, using the Weyl-identity, and the two resulting spatial functions are then convolved to produce the antenna aperture field. This article formulates the theory of the singularity...

  4. A technique for integrating remote minicomputers into a general computer's file system

    CERN Document Server

    Russell, R D


    This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).

  5. Shielding calculations using computer techniques; Calculo de blindajes mediante tecnicas de computacion

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Portilla, M. I.; Marquez, J.


    Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protective shields. Although analytical formulas exist to characterize these shields for certain configurations, the design may involve very intensive numerical calculations, so the most efficient way to design the shields is by means of computer programs that calculate doses and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.
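
For the simple configurations where analytical formulas do exist, the core of a shielding calculation is exponential attenuation of the primary beam, optionally scaled by a buildup factor for scattered radiation. A minimal sketch; the attenuation coefficient below is an assumed round number, not validated data:

```python
import math

def shielded_dose_rate(d0, mu_cm, x_cm, buildup=1.0):
    """Narrow-beam exponential attenuation: dose rate behind a slab of
    thickness x_cm (cm) with linear attenuation coefficient mu_cm (1/cm).
    `buildup` crudely accounts for scattered radiation (broad-beam case)."""
    return d0 * buildup * math.exp(-mu_cm * x_cm)

def halving_thickness(mu_cm):
    """Half-value layer: thickness that halves the narrow-beam dose rate."""
    return math.log(2) / mu_cm

# illustrative only: high-energy gammas in lead, mu assumed ~0.66 1/cm
rate = shielded_dose_rate(d0=100.0, mu_cm=0.66, x_cm=5.0)
```

Real shielding codes add geometry, energy-dependent cross sections, and scatter transport; this formula is only the narrow-beam starting point.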

  6. A few modeling and rendering techniques for computer graphics and their implementation on ultra hardware (United States)

    Bidasaria, Hari


    The Ultra Network is very high speed graphics hardware recently installed at NASA Langley Research Center. The Ultra Network, interfaced to Voyager through its HSX channel, is capable of transmitting up to 800 million bits of information per second. It can display fifteen to twenty frames per second of precomputed images of size 1024 x 2368 with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphic branch. Changes were made to make the ray tracer compatible with Voyager.

  7. CASAD -- Computer-Aided Sonography of Abdominal Diseases - the concept of joint technique impact

    Directory of Open Access Journals (Sweden)

    T. Deserno


    The ultrasound image is the primary input information for every ultrasonic examination. Since both knowledge-based decision support and content-based image retrieval techniques have their own restrictions when used in ultrasound image analysis, the combination of these techniques looks promising for covering the restrictions of one with the advantages of the other. In this work we have focused on implementing the proposed combination in the frame of the CASAD (Computer-Aided Sonography of Abdominal Diseases) system, to supply the ultrasound examiner with a diagnostic-assistant tool based on a data warehouse of standard referenced images. This warehouse serves to manifest the diagnosis, when the echographist specifies the pathology and then looks through corresponding images to verify his opinion, and to suggest a second opinion by automatic analysis of the annotations of relevant images retrieved from the repository using content-based image retrieval.

  8. A technique for transferring a patient's smile line to a cone beam computed tomography (CBCT) image. (United States)

    Bidra, Avinash S


    Fixed implant-supported prosthodontic treatment for patients requiring a gingival prosthesis often demands that bone and implant levels be apical to the patient's maximum smile line. This is to avoid the display of the prosthesis-tissue junction (the junction between the gingival prosthesis and natural soft tissues) and prevent esthetic failures. Recording a patient's lip position during maximum smile is invaluable for the treatment planning process. This article presents a simple technique for clinically recording and transferring the patient's maximum smile line to cone beam computed tomography (CBCT) images for analysis. The technique can help clinicians accurately determine the need for and amount of bone reduction required with respect to the maximum smile line and place implants in optimal positions. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  9. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop


    This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. Input-output prediction models based on the nonlinear Hammerstein-Wiener (NHW) model and on delay inputs for the adaptive neurofuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least square support vector machine techniques for the same case study. In situ data collection and statistical performance measures are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model is proven to perform better than the other models used to predict the pullout capacity of ground anchors.

  10. Computed tomography assessment of the efficiency of different techniques for removal of root canal filling material

    Energy Technology Data Exchange (ETDEWEB)

    Dall' agnol, Cristina; Barletta, Fernando Branco [Lutheran University of Brazil, Canoas, RS (Brazil). Dental School. Dept. of Dentistry and Endodontics]. E-mail:; Hartmann, Mateus Silveira Martins [Uninga Dental School, Passo Fundo, RS (Brazil). Postgraduate Program in Dentistry


    This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both time points, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software. Based on the initial and residual filling material volumes of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and the chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the mean percentage of removed filling material. The analysis of the association between the percentage of filling material removed (high or low) and the proposed techniques by the chi-square test showed a statistically significant difference (p=0.015), as most cases in group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)
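
The per-specimen quantity derived from the CT volumes is simple arithmetic; a sketch of the percentage-removed calculation and the high/low classification used in the chi-square analysis (the volume values are invented for illustration):

```python
def percent_removed(v_initial, v_residual):
    """Percentage of root-canal filling material removed, from CT-derived
    volumes measured before and after the removal procedure."""
    return 100.0 * (v_initial - v_residual) / v_initial

def removal_class(v_initial, v_residual, cutoff=50.0):
    """Dichotomize removal as 'high' (>= cutoff %) or 'low' (< cutoff %)."""
    return "high" if percent_removed(v_initial, v_residual) >= cutoff else "low"

# e.g. 12.4 mm^3 of filling initially, 3.1 mm^3 left after instrumentation
pct = percent_removed(12.4, 3.1)          # 75% removed -> "high"
```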

  11. Video indexing using a high-performance and low-computation color-based opportunistic technique (United States)

    Ahmed, Mohamed; Karmouch, Ahmed


    Video information, image processing, and computer vision techniques are developing rapidly because of the availability of acquisition, processing, and editing tools that use current hardware and software systems. However, problems still remain in conveying this video data to the end users. Limiting factors are the resource capabilities in distributed architectures and the features of the users' terminals. The efficient use of image processing, video indexing, and analysis techniques can provide users with solutions or alternatives. We see the video stream as a sequence of correlated images containing in its structure temporal events such as camera editing effects, and we present a new algorithm for achieving video segmentation, indexing, and key framing tasks. The algorithm is based on color histograms and uses a binary penetration technique. Although much has been done in this area, most work does not adequately consider the optimization of timing performance and processing storage. This is especially the case if the techniques are designed for use in run-time distributed environments. Our main contribution is to blend high performance and storage criteria with the need to achieve effective results. The algorithm exploits the temporal heuristic characteristics of the visual information within a video stream. It takes into consideration the issues of detecting false cuts and missing true cuts due to the movement of the camera, the optical flow of large objects, or both. We provide a discussion, together with results from experiments and from the implementation of our application, to show the merits of the new algorithm compared to existing ones.
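
The study's binary penetration technique is not reproduced here, but the underlying cut-detection principle, thresholding a distance between color histograms of consecutive frames, can be sketched as follows (grayscale frames and the threshold value are illustrative):

```python
def histogram(frame, bins=8, max_val=256):
    """Coarse intensity histogram of a frame given as a flat list of pixels."""
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    return h

def l1_distance(h1, h2):
    """Sum of absolute bin differences between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(frames, threshold):
    """Indices where consecutive frames differ enough to declare a shot cut."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if l1_distance(hists[i - 1], hists[i]) > threshold]

# three dark frames followed by three bright ones: one cut, at index 3
frames = [[10] * 100] * 3 + [[200] * 100] * 3
cuts = detect_cuts(frames, threshold=50)
```

Refinements like the paper's address gradual transitions and camera/object motion, which a plain frame-to-frame threshold handles poorly.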

  12. Base Catalytic Approach: A Promising Technique for the Activation of Biochar for Equilibrium Sorption Studies of Copper, Cu(II) Ions in Single Solute System. (United States)

    Hamid, Sharifah Bee Abdul; Chowdhury, Zaira Zaman; Zain, Sharifuddin Mohammad


    This study examines the feasibility of catalytically pretreated biochar derived from the dried exocarp or fruit peel of mangostene with Group I alkali metal hydroxide (KOH). The pretreated char was activated in the presence of carbon dioxide gas flow at high temperature to upgrade its physiochemical properties for the removal of copper, Cu(II) cations in single solute system. The effects of three independent variables (temperature, agitation time, and concentration) on sorption performance were investigated. Reaction kinetics parameters were determined by using linear regression analysis of the pseudo-first-order, pseudo-second-order, Elovich, and intra-particle diffusion models. The regression coefficient, R², values were best for the pseudo-second-order kinetic model for all the concentration ranges under investigation. This implied that Cu(II) cations were adsorbed mainly by chemical interactions with the surface active sites of the activated biochar. Langmuir, Freundlich and Temkin isotherm models were used to interpret the equilibrium data at different temperatures. Thermodynamic studies revealed that the sorption process was spontaneous and endothermic. The surface area of the activated sample was 367.10 m²/g, whereas before base activation, it was only 1.22 m²/g. The results elucidated that the base pretreatment was efficient enough to yield porous carbon with an enlarged surface area, which can successfully eliminate Cu(II) cations from waste water.
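
As an illustration of the linearized pseudo-second-order analysis mentioned above, t/qt is regressed against t, and qe and k2 are recovered from the slope and intercept. The synthetic data below are generated from assumed parameters, not the paper's measurements:

```python
def pseudo_second_order_fit(t, qt):
    """Linear regression of t/qt against t for the pseudo-second-order model
    t/qt = 1/(k2*qe**2) + t/qe.  Returns (qe, k2, r_squared)."""
    y = [ti / qi for ti, qi in zip(t, qt)]
    n = len(t)
    mx = sum(t) / n
    my = sum(y) / n
    sxx = sum((ti - mx) ** 2 for ti in t)
    sxy = sum((ti - mx) * (yi - my) for ti, yi in zip(t, y))
    slope = sxy / sxx                    # slope = 1/qe
    intercept = my - slope * mx          # intercept = 1/(k2*qe**2)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept
    ss_res = sum((yi - (intercept + slope * ti)) ** 2 for ti, yi in zip(t, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return qe, k2, r2

# synthetic uptake curve generated from assumed qe = 5, k2 = 0.1:
# qt(t) = k2*qe**2*t / (1 + k2*qe*t)
t = [1.0, 2.0, 5.0, 10.0, 20.0]
qt = [2.5 * ti / (1 + 0.5 * ti) for ti in t]
qe, k2, r2 = pseudo_second_order_fit(t, qt)
```

A high R² from this fit is the basis for the paper's conclusion that chemisorption dominates.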

  13. Base Catalytic Approach: A Promising Technique for the Activation of Biochar for Equilibrium Sorption Studies of Copper, Cu(II) Ions in Single Solute System

    Directory of Open Access Journals (Sweden)

    Sharifah Bee Abdul Hamid


    This study examines the feasibility of catalytically pretreated biochar derived from the dried exocarp or fruit peel of mangostene with Group I alkali metal hydroxide (KOH). The pretreated char was activated in the presence of carbon dioxide gas flow at high temperature to upgrade its physiochemical properties for the removal of copper, Cu(II) cations in single solute system. The effects of three independent variables (temperature, agitation time, and concentration) on sorption performance were investigated. Reaction kinetics parameters were determined by using linear regression analysis of the pseudo-first-order, pseudo-second-order, Elovich, and intra-particle diffusion models. The regression coefficient, R², values were best for the pseudo-second-order kinetic model for all the concentration ranges under investigation. This implied that Cu(II) cations were adsorbed mainly by chemical interactions with the surface active sites of the activated biochar. Langmuir, Freundlich and Temkin isotherm models were used to interpret the equilibrium data at different temperatures. Thermodynamic studies revealed that the sorption process was spontaneous and endothermic. The surface area of the activated sample was 367.10 m²/g, whereas before base activation, it was only 1.22 m²/g. The results elucidated that the base pretreatment was efficient enough to yield porous carbon with an enlarged surface area, which can successfully eliminate Cu(II) cations from waste water.

  14. Base Catalytic Approach: A Promising Technique for the Activation of Biochar for Equilibrium Sorption Studies of Copper, Cu(II) Ions in Single Solute System (United States)

    Hamid, Sharifah Bee Abdul; Chowdhury, Zaira Zaman; Zain, Sharifuddin Mohammad


    This study examines the feasibility of catalytically pretreated biochar derived from the dried exocarp or fruit peel of mangostene with Group I alkali metal hydroxide (KOH). The pretreated char was activated in the presence of carbon dioxide gas flow at high temperature to upgrade its physiochemical properties for the removal of copper, Cu(II) cations in single solute system. The effects of three independent variables (temperature, agitation time, and concentration) on sorption performance were investigated. Reaction kinetics parameters were determined by using linear regression analysis of the pseudo-first-order, pseudo-second-order, Elovich, and intra-particle diffusion models. The regression coefficient, R², values were best for the pseudo-second-order kinetic model for all the concentration ranges under investigation. This implied that Cu(II) cations were adsorbed mainly by chemical interactions with the surface active sites of the activated biochar. Langmuir, Freundlich and Temkin isotherm models were used to interpret the equilibrium data at different temperatures. Thermodynamic studies revealed that the sorption process was spontaneous and endothermic. The surface area of the activated sample was 367.10 m²/g, whereas before base activation, it was only 1.22 m²/g. The results elucidated that the base pretreatment was efficient enough to yield porous carbon with an enlarged surface area, which can successfully eliminate Cu(II) cations from waste water. PMID:28788595

  15. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams (United States)

    Ghiorso, M. S.


    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  16. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y


    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  17. Binary systems from quantum cluster equilibrium theory. (United States)

    Brüssel, Marc; Perlt, Eva; Lehmann, Sebastian B C; von Domaros, Michael; Kirchner, Barbara


    An extension of the quantum cluster equilibrium theory to treat binary mixtures is introduced in this work. The necessary equations are derived and a possible implementation is presented. In addition an alternative sampling procedure using widely available experimental data for the quantum cluster equilibrium approach is suggested and tested. An illustrative example, namely, the binary mixture of water and dimethyl sulfoxide, is given to demonstrate the new approach. A basic cluster set is introduced containing the relevant cluster motifs. The populations computed by the quantum cluster equilibrium approach are compared to the experimental data. Furthermore, the excess Gibbs free energy is computed and compared to experiments as well.
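
At its core, a quantum-cluster-equilibrium population follows Boltzmann weighting of cluster free energies; the full QCE formalism also includes volume, degeneracy, and intercluster terms omitted here. A deliberately simplified sketch with invented free-energy values:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)

def cluster_populations(delta_g, T):
    """Boltzmann-weighted populations of clusters from their relative free
    energies delta_g (kJ/mol) at temperature T (K).  A crude one-component
    sketch of how QCE-style populations follow from cluster energetics."""
    weights = [math.exp(-g / (R * T)) for g in delta_g]
    z = sum(weights)                 # partition-function-like normalizer
    return [w / z for w in weights]

# e.g. three cluster motifs (monomer, dimer, larger ring) with assumed
# relative free energies in kJ/mol; more negative -> more populated
pops = cluster_populations([0.0, -5.0, -8.0], T=298.15)
```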

  18. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.


    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In...

  19. Computational intelligence techniques for identifying the pectoral muscle region in mammograms (United States)

    Rickard, H. Erin; Villao, Ruben G.; Elmaghraby, Adel S.


    Segmentation of the pectoral muscle is an imperative task in mammographic image analysis. The pectoral edge is specifically examined by radiologists for abnormal axillary lymph nodes, serves as one of the axes in 3-dimensional reconstructions, and is one of the fundamental landmarks in mammogram registration and comparison. However, this region interferes with intensity-based image processing methods and may bias cancer detection algorithms. The purpose of this study was to develop and evaluate computational intelligence techniques for identifying the pectoral muscle region in medio-lateral oblique (MLO) view mammograms. After removal of the background region, the mammograms were segmented using a K-clustered self-organizing map (SOM). Morphological operations were then applied to obtain an initial estimate of the pectoral muscle region. Shape-based analysis determined which of the K estimates to use in the final segmentation. The algorithm has been applied to 250 MLO-view Lumisys mammograms from the Digital Database for Screening Mammography (DDSM). Upon examination, it was discovered that three of the original mammograms did not contain the pectoral muscle and one contained a clear defect. Of the 246 remaining, 95.94% were considered to have successfully identified the pectoral muscle region. The results provide a compelling argument for the effectiveness of computational intelligence techniques for identifying the pectoral muscle region in MLO-view mammograms.

  20. Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation (United States)

    Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.


    Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, and thus paving the way for improved selective target localization using DBS. PMID:27445709

  1. Microbial taxonomy in the era of OMICS: application of DNA sequences, computational tools and techniques. (United States)

    Mahato, Nitish Kumar; Gupta, Vipin; Singh, Priya; Kumari, Rashmi; Verma, Helianthous; Tripathi, Charu; Rani, Pooja; Sharma, Anukriti; Singhvi, Nirjara; Sood, Utkarsh; Hira, Princy; Kohli, Puneet; Nayyar, Namita; Puri, Akshita; Bajaj, Abhay; Kumar, Roshan; Negi, Vivek; Talwar, Chandni; Khurana, Himani; Nagar, Shekhar; Sharma, Monika; Mishra, Harshita; Singh, Amit Kumar; Dhingra, Gauri; Negi, Ram Krishan; Shakarad, Mallikarjun; Singh, Yogendra; Lal, Rup


    The current prokaryotic taxonomy classifies phenotypically and genotypically diverse microorganisms using a polyphasic approach. With advances in the next-generation sequencing technologies and computational tools for analysis of genomes, the traditional polyphasic method is complemented with genomic data to delineate and classify bacterial genera and species as an alternative to cumbersome and error-prone laboratory tests. This review discusses the applications of sequence-based tools and techniques for bacterial classification and provides a scheme for more robust and reproducible bacterial classification based on genomic data. The present review highlights promising tools and techniques such as ortho-Average Nucleotide Identity, Genome to Genome Distance Calculator and Multi Locus Sequence Analysis, which can be validly employed for characterizing novel microorganisms and assessing phylogenetic relationships. In addition, the review discusses the possibility of employing metagenomic data to assess the phylogenetic associations of uncultured microorganisms. Through this article, we present a review of genomic approaches that can be included in the scheme of taxonomy of bacteria and archaea based on computational and in silico advances to boost the credibility of taxonomic classification in this genomic era.
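
Average Nucleotide Identity, one of the measures highlighted above, is in practice computed over fragmented whole-genome alignments (as in ortho-ANI tools); the toy function below only scores identity over an existing pairwise alignment, to illustrate the underlying quantity:

```python
def average_nucleotide_identity(seq_a, seq_b):
    """Percent identity over aligned (non-gap) positions -- a toy stand-in
    for the fragment-based ANI used in genome-based species delineation
    (the commonly cited species boundary is roughly 95-96% ANI)."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

ani = average_nucleotide_identity("ACGTACGT", "ACGTACGA")   # 7 of 8 -> 87.5
```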

  2. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal


    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  3. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques. (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E


    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational-fluid-dynamics-only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  4. 13th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (United States)

    Speer, T.; Boudjema, F.; Lauret, J.; Naumann, A.; Teodorescu, L.; Uwer, P.

    "Beyond the Cutting edge in Computing" Fundamental research is dealing, by definition, with the two extremes: the extremely small and the extremely large. The LHC and Astroparticle physics experiments will soon offer new glimpses beyond the current frontiers. And the computing infrastructure to support such physics research needs to look beyond the cutting edge. Once more it seems that we are on the edge of a computing revolution. But perhaps what we are seeing now is a even more epochal change where not only the pace of the revolution is changing, but also its very nature. Change is not any more an "event" meant to open new possibilities that have to be understood first and exploited then to prepare the ground for a new leap. Change is becoming the very essence of the computing reality, sustained by a continuous flow of technical and paradigmatic innovation. The hardware is definitely moving toward more massive parallelism, in a breathtaking synthesis of all the past techniques of concurrent computation. New many-core machines offer opportunities for all sorts of Single/Multiple Instructions, Single/Multiple Data and Vector computations that in the past required specialised hardware. At the same time, all levels of virtualisation imagined till now seem to be possible via Clouds, and possibly many more. Information Technology has been the working backbone of the Global Village, and now, in more than one sense, it is becoming itself the Global Village. Between these two, the gap between the need for adapting applications to exploit the new hardware possibilities and the push toward virtualisation of resources is widening, creating more challenges as technical and intellectual progress continues. ACAT 2010 proposes to explore and confront the different boundaries of the evolution of computing, and its possible consequences on our scientific activity. What do these new technologies entail for physics research? 
How will physics research benefit from this revolution in

  5. An enhanced technique for mobile cloudlet offloading with reduced computation using compression in the cloud (United States)

    Moro, A. C.; Nadesh, R. K.


    The cloud computing paradigm has transformed the way we do business in today’s world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest growing factors in this is mobile cloud computing. With the option of offloading now available, mobile users can offload entire applications onto cloudlets. Given the problems of availability and the limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we examine a fast algorithm that decides, based on an offloading probability, whether the mobile user should use a cloudlet or rely on local memory. We have partially implemented the algorithm, which decides whether the task can be carried out locally or given to a cloudlet. Because the complete computation is a burden on the mobile device, in this paper we offload it to a cloud. Further, we apply a file compression technique before sending the file to the cloud, to reduce the load even more.
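The offloading decision described above can be sketched as a simple probabilistic rule. The function name, the threshold rule, and the parameters below are illustrative assumptions for the sketch, not the paper's actual algorithm:

```python
import random

def should_offload(p_offload, cloudlet_available, rng=random.random):
    """Decide whether a task goes to a cloudlet or stays in local memory.

    p_offload: offloading probability, assumed to be estimated elsewhere
    from cloudlet availability and storage; the rule itself is hypothetical.
    """
    if not cloudlet_available:
        return False          # no cloudlet in range: compute locally
    return rng() < p_offload  # offload with probability p_offload

# A task with p_offload = 1.0 is always offloaded when a cloudlet is in range.
print(should_offload(1.0, True))   # True
print(should_offload(0.5, False))  # False
```

In a real system the probability would be updated continuously from observed cloudlet load and connectivity; here it is a fixed input.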

  6. Three-dimensional imaging of trabecular bone using the computer numerically controlled milling technique. (United States)

    Beck, J D; Canfield, B L; Haddock, S M; Chen, T J; Kothari, M; Keaveny, T M


    Although various techniques exist for high-resolution, three-dimensional imaging of trabecular bone, a common limitation is that resolution depends on specimen size. Most techniques also have limited availability due to their expense and complexity. We therefore developed a simple, accurate technique whose resolution is independent of specimen size. Thin layers are serially removed from an embedded bone specimen using a computer numerically controlled (CNC) milling machine, and each exposed cross section is imaged using a low-magnification digital camera. Precise positioning of the specimen under the camera is achieved using the programmable feature of the CNC milling machine. Large specimens are imaged without loss of resolution by moving the specimen under the camera such that an array of fields of view spans the full cross section. The images from each field of view are easily assembled and registered in postprocessing. High-contrast sections are achieved by staining the bone black with silver nitrate and embedding it in whitened methylmethacrylate. Due to the high contrast and high resolution of the images, thresholding at a single value yielded excellent predictions of morphological parameters such as bone volume fraction (mean +/- SD percent error = 0.70 +/- 4.28%). The main limitations of this fully automated "CNC milling technique" are that the specimen is destroyed and the process is relatively slow. However, because of its accuracy, independence of image resolution from specimen size, and ease of implementation, this new technique is an excellent method for ex situ imaging of trabecular architecture, particularly when high resolution is required.
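As an illustration of the single-value thresholding step, a minimal bone-volume-fraction computation might look like the following. The brighter-is-bone convention and the sample voxel values are assumptions made for the sketch (the stained specimens described above are in fact dark bone on a white background, so a real pipeline would invert the comparison):

```python
import numpy as np

def bone_volume_fraction(image, threshold):
    """Bone volume fraction (BV/TV) from a stack of grayscale cross sections.

    Voxels at or above `threshold` count as bone; this single global
    threshold mirrors the thresholding step described in the abstract.
    """
    bone = image >= threshold
    return bone.sum() / bone.size

# 2 bone voxels out of 8 -> BV/TV = 0.25
stack = np.array([[[10, 200], [30, 240]], [[20, 40], [50, 60]]])
print(bone_volume_fraction(stack, 128))  # 0.25
```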

  7. Validation of a novel technique for creating simulated radiographs using computed tomography datasets. (United States)

    Mendoza, Patricia; d'Anjou, Marc-André; Carmel, Eric N; Fournier, Eric; Mai, Wilfried; Alexander, Kate; Winter, Matthew D; Zwingenberger, Allison L; Thrall, Donald E; Theoret, Christine


    Understanding radiographic anatomy and the effects of varying patient and radiographic tube positioning on image quality can be a challenge for students. The purposes of this study were to develop and validate a novel technique for creating simulated radiographs using computed tomography (CT) datasets. A DICOM viewer (ORS Visual) plug-in was developed with the ability to move and deform cuboidal volumetric CT datasets, and to produce images simulating the effects of tube-patient-detector distance and angulation. Computed tomographic datasets were acquired from two dogs, one cat, and one horse. Simulated radiographs of different body parts (n = 9) were produced using different angles to mimic conventional projections, before actual digital radiographs were obtained using the same projections. These studies (n = 18) were then submitted to 10 board-certified radiologists who were asked to score visualization of anatomical landmarks, depiction of patient positioning, realism of distortion/magnification, and image quality. No significant differences between simulated and actual radiographs were found for anatomic structure visualization and patient positioning in the majority of body parts. For the assessment of radiographic realism, no significant differences were found between simulated and digital radiographs for canine pelvis, equine tarsus, and feline abdomen body parts. Overall, image quality and contrast resolution of simulated radiographs were considered satisfactory. Findings from the current study indicated that radiographs simulated using this new technique are comparable to actual digital radiographs. Further studies are needed to apply this technique in developing interactive tools for teaching radiographic anatomy and the effects of varying patient and tube positioning. © 2013 American College of Veterinary Radiology.

  8. Nano-computed tomography. Technique and applications; Nanocomputertomografie. Technik und Applikationen

    Energy Technology Data Exchange (ETDEWEB)

    Kampschulte, M.; Sender, J.; Litzlbauer, H.D.; Althoehn, U.; Schwab, J.D.; Alejandre-Lafont, E.; Martels, G.; Krombach, G.A. [University Hospital Giessen (Germany). Dept. of Diagnostic and Interventional Radiology; Langheinirch, A.C. [BG Trauma Hospital Frankfurt/Main (Germany). Dept. of Diagnostic and Interventional Radiology


    Nano-computed tomography (nano-CT) is an emerging, high-resolution cross-sectional imaging technique and represents a technical advancement of the established micro-CT technology. Based on the application of a transmission target X-ray tube, the focal spot size can be decreased to diameters of less than 400 nanometers (nm). Together with specific detectors and examination protocols, a superior spatial resolution of up to 400 nm (10 % MTF) can be achieved, thereby exceeding the resolution capacity of typical micro-CT systems. The technical concept of nano-CT imaging as well as the basics of specimen preparation are demonstrated with examples. Characteristics of atherosclerotic plaques (intraplaque hemorrhage and calcifications) in a murine model of atherosclerosis (ApoE{sub (-/-)}/LDLR{sub (-/-)} double knockout mouse) are demonstrated in the context of superior spatial resolution in comparison to micro-CT. Furthermore, this article presents the application of nano-CT for imaging cerebral microcirculation (murine), lung structures (porcine), and trabecular microstructure (ovine) in contrast to micro-CT imaging. This review shows the potential of nano-CT as a radiological method in biomedical basic research and discusses the application of experimental, high resolution CT techniques in consideration of other high resolution cross-sectional imaging techniques.

  9. Signal Amplification Technique (SAT): an approach for improving resolution and reducing image noise in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Phelps, M.E.; Huang, S.C.; Hoffman, E.J.; Plummer, D.; Carson, R.


    Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies an increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not noise. Thus, this approach is referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible system design for a positron CT system with SAT is described.
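The frequency amplification the abstract refers to is the ramp filter of filtered back-projection, which weights each spatial frequency by its magnitude and so boosts noise and signal alike. A minimal sketch of its frequency response with an adjustable cutoff (names and normalization are assumptions, not from the paper) shows why lowering the cutoff trades resolution for noise:

```python
import numpy as np

def ramp_filter(n, cutoff=1.0):
    """Frequency response of the FBP ramp filter with a hard cutoff.

    Amplification grows linearly with spatial frequency, acting on signal
    and noise alike; lowering `cutoff` (a fraction of Nyquist) trades
    resolution for noise, which is the trade-off SAT sidesteps by using
    smaller detectors instead of a higher cutoff.
    """
    f = np.fft.fftfreq(n)            # frequencies in cycles/sample
    h = np.abs(f)                    # ramp: |f|
    h[np.abs(f) > 0.5 * cutoff] = 0  # hard cutoff (Nyquist is 0.5)
    return h

h = ramp_filter(8, cutoff=1.0)
print(h.max())  # the largest amplification occurs at the Nyquist frequency
```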

  10. PyDII: A python framework for computing equilibrium intrinsic point defect concentrations and extrinsic solute site preferences in intermetallic compounds (United States)

    Ding, Hong; Medasani, Bharat; Chen, Wei; Persson, Kristin A.; Haranczyk, Maciej; Asta, Mark


    Point defects play an important role in determining the structural stability and mechanical behavior of intermetallic compounds. To help quantitatively understand the point defect properties in these compounds, we developed PyDII, a Python program that performs thermodynamic calculations of equilibrium intrinsic point defect concentrations and extrinsic solute site preferences in intermetallics. The algorithm implemented in PyDII is built upon a dilute-solution thermodynamic formalism with a set of defect excitation energies calculated from first-principles density-functional theory methods. The analysis module in PyDII enables automated calculations of equilibrium intrinsic antisite and vacancy concentrations as a function of composition and temperature (over ranges where the dilute solution formalism is accurate) and the point defect concentration changes arising from addition of an extrinsic substitutional solute species. To demonstrate the applications of PyDII, we provide examples for intrinsic point defect concentrations in NiAl and Al3V and site preferences for Ti, Mo and Fe solutes in NiAl.
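The leading-order term of the dilute-solution formalism is the familiar Boltzmann expression for an equilibrium defect site fraction. The single-defect sketch below is only illustrative; PyDII solves coupled equations for all antisite and vacancy species simultaneously:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def defect_concentration(e_form, temperature):
    """Equilibrium point-defect site fraction in the dilute limit.

    c = exp(-E_f / kT), with E_f a formation energy in eV; this is the
    leading term only, ignoring the coupling between defect species
    that PyDII treats self-consistently.
    """
    return math.exp(-e_form / (K_B * temperature))

# A 1 eV vacancy formation energy at 1000 K:
print(defect_concentration(1.0, 1000.0))  # ~9.1e-06
```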

  11. Sensitivity analysis and stability charts for uniform slopes computed via the MLD methods in the frame of the limit equilibrium theory (United States)

    Ausilia Paparo, Maria; Tinti, Stefano


    Stability charts provide a graphical means of deriving the safety factor (F) without incurring the difficulties of the mathematical and numerical methods for slope stability analysis widely used in the engineering field: consulted in a preliminary phase of analysis, the charts allow one to determine the approximate equilibrium conditions. Taylor (1948) was the first to develop this method and made it common practice: his stability charts relate the height and the inclination of a schematic slope, for particular types of failure surface (toe circle, slope circle, and midpoint circle) and for different values of the friction angle. The charts have since become more detailed and complete (Janbu, 1968), thanks to the continuous introduction of new methods, such as the limit equilibrium method (LEM), the limit analysis method and the finite element method (FEM). The aim of this work is to compare sets of stability charts found in the literature (Michalowski, 1997; 2002; Li et al., 2009; Steward et al., 2011; Zhang et al., 2011) with new charts obtained by means of the method of minimum lithostatic deviation (MLD) introduced by Tinti and Manucci (2006, 2008) for 2D problems: the slope is a homogeneous body, and we analyze different cases by varying the geometry (e.g. the slope angle and height), the geotechnical parameters (such as cohesion and friction angle), the pore pressure and the external loads (such as seismic or hydrostatic loadings) treated as quasi-static forcing.
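For readers unfamiliar with safety factors, the simplest limit-equilibrium benchmark, a dry infinite slope, illustrates what F measures. This textbook formula is far simpler than the MLD method or the charts discussed above and is included only as an example of a safety-factor computation:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg):
    """Safety factor of a dry infinite slope (classical LEM benchmark).

    F = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))
    c: cohesion (kPa), phi: friction angle, gamma: unit weight (kN/m^3),
    depth: failure-plane depth z (m), beta: slope angle.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Cohesionless sand with phi = beta is at limiting equilibrium: F = 1.
print(round(infinite_slope_fs(0.0, 30.0, 18.0, 5.0, 30.0), 6))  # 1.0
```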

  12. Computational intelligence techniques for comparative genomics dedicated to Prof. Allam Appa Rao on the occasion of his 65th birthday

    CERN Document Server

    Gunjan, Vinit


    This Brief highlights informatics and related techniques for computer science professionals, engineers, medical doctors, bioinformatics researchers and other interdisciplinary researchers. Chapters cover the bioinformatics of diabetes and several computational algorithms and statistical analysis approaches for effectively studying the disorder and its possible causes, along with medical applications.

  13. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records. (United States)

    Duz, Marco; Marshall, John F; Parkin, Tim


    The use of electronic medical records (EMRs) offers opportunities for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, each containing terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously removed by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free
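The three dictionary steps amount to set algebra on row indices: cases = (inclusion − exclusion) ∪ reinclusion. A minimal sketch, with rows and dictionary terms invented for illustration, is:

```python
def classify_cases(rows, include, exclude, reinclude):
    """Dictionary-based case selection as set algebra.

    Mirrors the three-step process described above: rows matching the
    inclusion terms, minus rows matching the exclusion terms, plus rows
    matching the reinclusion terms. All terms here are hypothetical.
    """
    inc = {i for i, text in enumerate(rows) if any(t in text for t in include)}
    exc = {i for i, text in enumerate(rows) if any(t in text for t in exclude)}
    rei = {i for i, text in enumerate(rows) if any(t in text for t in reinclude)}
    return (inc - exc) | rei

rows = ["signs of colic", "no colic seen", "colic ruled out but relapsed colic"]
print(sorted(classify_cases(rows, {"colic"}, {"no colic", "ruled out"},
                            {"relapsed"})))  # [0, 2]
```

Row 1 is excluded by "no colic"; row 2 is excluded by "ruled out" but reincluded by "relapsed".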

  14. Towards breaking temperature equilibrium in multi-component Eulerian schemes

    Energy Technology Data Exchange (ETDEWEB)

    Grove, John W [Los Alamos National Laboratory; Masser, Thomas [Los Alamos National Laboratory


    We investigate the effects of thermal equilibrium on hydrodynamic flows and describe models for breaking the assumption of a single temperature for a mixture of components in a cell. A computational study comparing pressure-temperature equilibrium simulations of two-dimensional implosions with explicit front tracking is described, as well as the implementation and 1D calculations for non-equilibrium temperature methods.

  15. Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study

    Directory of Open Access Journals (Sweden)

    Carlos Figuera


    Full Text Available The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of the regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of atria and torso anatomy with three different electrical activity patterns (i.e. sinus rhythm, simple AF and complex AF). Body surface potentials (BSP) were simulated using the Boundary Element Method and corrupted with white Gaussian noise of different powers. Noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using fourteen different regularization techniques. DF, phase maps and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
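Of the regularization families mentioned, zero-order Tikhonov has the simplest closed form, x = (AᵀA + λI)⁻¹Aᵀb, where A is the transfer matrix from epicardial to body-surface potentials. The toy ill-conditioned system below is invented for illustration and is not from the study:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov solution x = (A^T A + lam*I)^{-1} A^T b.

    lam is the regularization parameter discussed in the abstract; larger
    values damp components of x that the measurements barely constrain.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.0], [0.0, 1e-3]])              # ill-conditioned transfer matrix
b = A @ np.array([1.0, 2.0]) + np.array([0.0, 1e-3])  # measurements with noise
print(tikhonov(A, b, 1e-4))  # the unstable second component is damped
```

Without regularization the small noise term would be amplified a thousandfold in the second component; with λ = 1e-4 it is suppressed at the cost of some bias.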

  16. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography. (United States)

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz


    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
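The core of an algebraic iterative reconstruction is the Kaczmarz row-projection update, which ART applies projection by projection. The sketch below omits the differential and smoothing operators specific to the DPC-CT variant described above and uses an invented 2x2 system:

```python
import numpy as np

def kaczmarz(A, b, n_iter=100):
    """Kaczmarz-style algebraic reconstruction (the core update of ART).

    Each inner step projects the current estimate onto the hyperplane
    defined by one projection equation a_i . x = b_i.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a  # project onto row i
    return x

A = np.array([[1.0, 1.0], [1.0, -1.0]])  # toy "system matrix"
b = np.array([3.0, 1.0])                 # consistent data, so x = (2, 1)
print(np.round(kaczmarz(A, b), 6))       # [2. 1.]
```

Because the two rows are orthogonal, this toy problem converges in a single sweep; real tomographic systems need many sweeps and, as the paper notes, extra operators to handle differential data.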

  17. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree


    Conventional gasoline comprises of a large number of hydrocarbons that makes it difficult to utilize in a model for prediction of its properties. Modeling is needed for a better understanding of the fuel flow and combustion behavior that are essential to enhance fuel quality and improve engine...... performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...

  18. High-resolution EEG techniques for brain-computer interface applications. (United States)

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio


    High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor related tasks. HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) distributed source model, composed of a layer of current dipoles, geometrically constrained to the cortical mantle, (iii) depth-weighted minimum L(2)-norm constraint and Tikhonov regularization for linear inverse problem solution and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data was used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features are evaluated through statistical tests (r(2) analysis), to quantify their reliability for BCI control. These results are compared, within subjects, to analogous results obtained without HREEG techniques. The processing procedure was designed in such a way that computations could be split into a

  19. Understanding refraction contrast using a comparison of absorption and refraction computed tomographic techniques (United States)

    Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.


    Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities that have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique "edge enhanced" contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. We then discuss what needs to be considered when interpreting the refraction image. Finally, we discuss the limitations of planar refraction imaging and the potential of DEI Computed Tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.

  20. Revealing −1 Programmed Ribosomal Frameshifting Mechanisms by Single-Molecule Techniques and Computational Methods

    Directory of Open Access Journals (Sweden)

    Kai-Chun Chang


    Full Text Available Programmed ribosomal frameshifting (PRF serves as an intrinsic translational regulation mechanism employed by some viruses to control the ratio between structural and enzymatic proteins. Most viral mRNAs which use PRF adapt an H-type pseudoknot to stimulate −1 PRF. The relationship between the thermodynamic stability and the frameshifting efficiency of pseudoknots has not been fully understood. Recently, single-molecule force spectroscopy has revealed that the frequency of −1 PRF correlates with the unwinding forces required for disrupting pseudoknots, and that some of the unwinding work dissipates irreversibly due to the torsional restraint of pseudoknots. Complementary to single-molecule techniques, computational modeling provides insights into global motions of the ribosome, whose structural transitions during frameshifting have not yet been elucidated in atomic detail. Taken together, recent advances in biophysical tools may help to develop antiviral therapies that target the ubiquitous −1 PRF mechanism among viruses.

  1. A computationally assisted spectroscopic technique to measure secondary electron emission coefficients in radio frequency plasmas

    CERN Document Server

    Daksha, M; Schuengel, E; Korolov, I; Derzsi, A; Koepke, M; Donko, Z; Schulze, J


    A Computationally Assisted Spectroscopic Technique to measure secondary electron emission coefficients ($\gamma$-CAST) in capacitively-coupled radio-frequency plasmas is proposed. This non-intrusive, sensitive diagnostic is based on a combination of Phase Resolved Optical Emission Spectroscopy and particle-based kinetic simulations. In such plasmas (under most conditions in electropositive gases) the spatio-temporally resolved electron-impact excitation/ionization rate features two distinct maxima adjacent to each electrode at different times within each RF period. While one maximum is the consequence of the energy gain of electrons due to sheath expansion, the second maximum is produced by secondary electrons accelerated towards the plasma bulk by the sheath electric field at the time of maximum voltage drop across the adjacent sheath. Due to these different excitation/ionization mechanisms, the ratio of the intensities of these maxima is very sensitive to the secondary electron emission coefficient $\gamma$...

  2. Time-of-flight camera technique for augmented reality in computer-assisted interventions (United States)

    Mersmann, Sven; Müller, Michael; Seitel, Alexander; Arnegger, Florian; Tetzlaff, Ralf; Dinkel, Julien; Baumhauer, Matthias; Schmied, Bruno; Meinzer, Hans-Peter; Maier-Hein, Lena


    Augmented reality (AR) for enhancement of intra-operative images is gaining increasing interest in the field of navigated medical interventions. In this context, various imaging modalities such as ultrasound (US), C-Arm computed tomography (CT) and endoscopic images have been applied to acquire intra-operative information about the patient's anatomy. The aim of this paper was to evaluate the potential of the novel Time-of-Flight (ToF) camera technique as a means for markerless intra-operative registration. For this purpose, ToF range data and corresponding CT images were acquired from a set of explanted non-transplantable human and porcine organs equipped with a set of markers that served as targets. Based on a rigid matching of the surfaces generated from the ToF images with the organ surfaces generated from the CT data, the targets extracted from the planning images were superimposed on the 2D ToF intensity images, and the target visualization error (TVE) was computed as a quality measure. Color video data of the same organs were further used to assess the TVE of a previously proposed marker-based registration method. The ToF-based registration showed promising accuracy, yielding a mean TVE of 2.5+/-1.1 mm compared to 0.7+/-0.4 mm with the marker-based approach. Furthermore, the target registration error (TRE) was assessed to determine the anisotropy in the localization error of ToF image data. The TRE was 8.9+/-4.7 mm on average, indicating a high localization error in the viewing direction of the camera. Nevertheless, the young ToF technique may become a valuable means for intra-operative surface acquisition. Future work should focus on the calibration of systematic distance errors.

  3. A comparison of modelling techniques for computing wall stress in abdominal aortic aneurysms

    Directory of Open Access Journals (Sweden)

    McGloughlin Timothy M


    Full Text Available Abstract Background Aneurysms, in particular abdominal aortic aneurysms (AAA, form a significant portion of cardiovascular related deaths. There is much debate as to the most suitable tool for rupture prediction and interventional surgery of AAAs, and currently maximum diameter is used clinically as the determining factor for surgical intervention. Stress analysis techniques, such as finite element analysis (FEA to compute the wall stress in patient-specific AAAs, have been regarded by some authors to be more clinically important than the use of a "one-size-fits-all" maximum diameter criterion, since some small AAAs have been shown to have higher wall stress than larger AAAs and have been known to rupture. Methods A patient-specific AAA was selected from our AAA database and 3D reconstruction was performed. The AAA was then modelled in this study using three different approaches, namely, AAA(SIMP), AAA(MOD) and AAA(COMP), with each model examined using linear and non-linear material properties. All models were analysed using the finite element method for wall stress distributions. Results Wall stress results show marked differences in peak wall stress between the three methods. Peak wall stress was shown to reduce when more realistic parameters were utilised. It was also noted that wall stress was shown to reduce by 59% when modelled using the most accurate non-linear complex approach, compared to the same model without intraluminal thrombus. Conclusion The results here show that using more realistic parameters affects the resulting wall stress. The use of simplified computational modelling methods can lead to inaccurate stress distributions. Care should be taken when examining stress results found using simplified techniques, in particular, if the wall stress results are to have clinical importance.


    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai


Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters and the construction parameter are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters and the construction parameter, with the help of two soft computing techniques, namely genetic programming (GP, using the software “Eureqa”) and an artificial neural network (ANN). Relationships were developed from data available from a recently constructed high double-curvature concrete arch dam. The value of R for the relationship between the predicted and actual cooling time is 0.8822 for the GP model and 0.9146 for the ANN model. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared to the measured data.

  5. Wheeze sound analysis using computer-based techniques: a systematic review. (United States)

    Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian


Wheezes are high-pitched continuous respiratory acoustic sounds produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that (1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, (2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, and (3) analysis using combinations of features and subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.
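As a minimal example of the spectral feature extraction mentioned above, the sketch below estimates the dominant frequency of a signal from its windowed FFT. The 400 Hz tone is a synthetic stand-in for a wheeze, not data from the review.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) with the largest spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 8000                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
wheeze_like = np.sin(2 * np.pi * 400 * t)   # synthetic 400 Hz tone
print(dominant_frequency(wheeze_like, fs))  # → 400.0
```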

  6. Statistical inference on associated fertility life table parameters using jackknife technique: computational aspects. (United States)

    Maia, A de H; Luiz, A J; Campanhola, C


Knowledge of population growth potential is crucial for studying population dynamics and for establishing management tactics for pest control. Estimation of population growth can be achieved with fertility life tables because they synthesize data on reproduction and mortality of a population. The five main parameters associated with a fertility life table are: (1) the net reproductive rate (Ro), (2) the intrinsic rate of increase (rm), (3) the mean generation time (T), (4) the doubling time (Dt), and (5) the finite rate of increase (lambda). Jackknife and bootstrap techniques are used to calculate the variance of the rm estimate, and can be extended to the other life table parameters. These methods are computer-intensive, their application requires the development of efficient algorithms, and their implementation should be based on a programming language that combines speed and reliability. The objectives of this article are to discuss statistical and computational aspects related to the estimation of life table parameters and to present a SAS program that uses the jackknife to estimate parameters for fertility life tables. The SAS program presented here allows the calculation of confidence intervals for all estimated parameters, and provides one-sided and two-sided t-tests to perform pairwise or multiple comparisons between groups, with their respective P values.
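The delete-one jackknife used above for the variance of life table parameters can be sketched as follows. The cohort data and the choice of the sample mean as estimator (mean lifetime offspring per female, a stand-in for Ro) are illustrative, and the original work uses SAS rather than Python.

```python
import numpy as np

def jackknife(data, estimator):
    """Delete-one jackknife estimate, standard error, and 95% CI."""
    n = len(data)
    theta_hat = estimator(data)
    # leave-one-out estimates
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    pseudo = n * theta_hat - (n - 1) * loo   # pseudo-values
    est = pseudo.mean()
    se = np.sqrt(pseudo.var(ddof=1) / n)
    return est, se, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical lifetime offspring counts for four females
offspring = np.array([2.0, 4.0, 6.0, 8.0])
est, se, ci = jackknife(offspring, np.mean)
```

For the sample mean, the jackknife reproduces the usual estimate and standard error exactly; its value lies in handling nonlinear estimators such as rm, where no closed-form variance exists.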


    Directory of Open Access Journals (Sweden)

    B. Raja Singh


Full Text Available The pulverised coal preparation system (coal mills) is the heart of a coal-fired power plant. The complex nature of the milling process, together with the complex interactions between coal quality and mill conditions, leads to immense difficulty in obtaining an effective mathematical model of the milling process. In this paper, the vertical spindle coal mill (bowl mill), which is widely used in coal-fired power plants, is considered for model development, and its pulverised fuel flow rate is computed using the model. For the steady-state coal mill model development, plant measurements such as air-flow rate, differential pressure across the mill, etc., are considered as inputs/outputs. The mathematical model is derived from an analysis of energy, heat and mass balances. An evolutionary computation technique is adopted to identify the unknown model parameters using on-line plant data. Validation results indicate that this model is accurate enough to represent the whole process of steady-state coal mill dynamics. This coal mill model is being implemented on-line in a 210 MW thermal power plant and the results obtained are compared with plant data. The model was found to be accurate and robust enough for system monitoring in power plants. Therefore, the model can be used for on-line monitoring, fault detection, and control to improve the efficiency of combustion.

  8. Assessment of bone quality by the technique of multispiral computer tomography in patients with chronic osteomyelitis

    Directory of Open Access Journals (Sweden)

    G. V. Dyachkova


Full Text Available Purpose - to study the roentgenomorphological features of the lower limb long bones in patients with chronic osteomyelitis using multi-spiral computed tomography (MSCT), and to propose a set of parameters to assess bone quality. Material and methods. Roentgenography and computed tomography of the hips were performed in 49 patients with chronic osteomyelitis of the long bones of the lower extremities. The studies were made using GE Light Speed VCT, Toshiba Aquilion-64 and Somatom Smile computed tomographs. Results. The changes in the bone structure of the proximal femur were characterized by extremely marked polymorphism, and they were almost never repeated in the anatomical component. The cortical plate had a heterogeneous structure with resorption zones in the area of its transition to the shaft. The character of the roentgenomorphological changes in the shaft was individual in all patients, but there were also common manifestations, which consisted in thickening of the cortical plate and differing intensity of the periosteal and endosteal layers. The cortical plate differed significantly in density, which exceeded 1700 HU in some places. When the osteomyelitic process localized in the knee, marked changes affected all its components; they manifested themselves in extended osteoporosis and local osteosclerosis. When the osteomyelitic process localized in the proximal tibia, extensive resorption zones were observed, the cortical plate was thinned in the proximal parts, and its density was not more than 350 HU. Conclusion. The data demonstrated that bone quality in patients with chronic osteomyelitis had significant deviations from normal values in terms of both density and architectonics. The deviations consisted in a decrease of bone density in the meta-epiphyseal part regardless of process localization, and in highly variable density values of the cortical plate as a result of its thickening or thinning, with the presence of resorption or sclerosis areas.

  9. Human-computer dialogue: Interaction tasks and techniques. Survey and categorization (United States)

    Foley, J. D.


Interaction techniques are described. Six basic interaction tasks, the requirements for each task, requirements related to interaction techniques, and a technique's hardware prerequisites affecting device selection are discussed.

  10. Quando a fase de equilíbrio pode ser suprimida nos exames de tomografia computadorizada de abdome? What is the real role of the equilibrium phase in abdominal computed tomography?

    Directory of Open Access Journals (Sweden)

    Priscila Silveira Salvadori


Full Text Available OBJECTIVE: To evaluate the role of the equilibrium phase in abdominal computed tomography. MATERIALS AND METHODS: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography examinations acquired over a three-month period, for different clinical indications. For each examination, two reports were issued - one based on the analysis of the phases without the equilibrium phase (first analysis), and a second reading of these phases together with the equilibrium phase (second analysis). At the end of both readings, differences between primary and secondary diagnoses were recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each patient. An extension of Fisher's exact test was used to evaluate changes in the primary diagnoses (p 0.999). RESULTS: With respect to secondary diagnoses, five examinations (2.3%) were modified. CONCLUSION: For clinical indications such as tumor staging, acute abdomen and investigation of abdominal collections, the equilibrium phase does not add an expressive diagnostic contribution and may be suppressed from examination protocols.

  11. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.


This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimal human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and in the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
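A plain (unmodified) census transform with a Hamming matching cost, the building block behind the stereo-matching step named above, can be sketched as follows. Borders are handled by wrap-around for brevity, and this is a generic illustration rather than the paper's modified variant.

```python
import numpy as np

def census(img, r=1):
    """Census transform: each pixel becomes a bit-string comparing its
    (2r+1)^2 - 1 neighbours against the centre pixel's intensity."""
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # wrap-around shift keeps the sketch short; real code pads borders
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes: number of differing bits."""
    return bin(int(c1) ^ int(c2)).count("1")
```

In a stereo matcher, `hamming_cost` would be evaluated between a left-image census code and candidate right-image codes along the epipolar line, picking the disparity with the lowest cost.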

  12. Brain-computer interface: changes in performance using virtual reality techniques. (United States)

    Ron-Angevin, Ricardo; Díaz-Estrella, Antonio


The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. Such a system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional feedback systems are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI feedback presentation. The objective of the study is to explore the advantages of feedback based on virtual reality techniques compared to conventional feedback systems. Sixteen untrained subjects, divided into two groups, participated in the experiment. One group was trained using a BCI system with conventional feedback (bar extension), and the other group was trained using a BCI system that places subjects in a more familiar environment, such as controlling a car to avoid obstacles. The results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between the two interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve feedback control, specifically for untrained subjects.

  13. Multiple Target Localization with Bistatic Radar Using Heuristic Computational Intelligence Techniques

    Directory of Open Access Journals (Sweden)

    Fawad Zaman


Full Text Available We assume a bistatic phased Multiple Input Multiple Output (MIMO) radar having a passive Centrosymmetric Cross Shape Sensor Array (CSCA) on its receiver. Let the transmitter of this bistatic radar send coherent signals using a subarray that gives a fairly wide beam with a large solid angle, so as to cover any potentially relevant target in the near field. We developed Heuristic Computational Intelligence (HCI) based techniques to jointly estimate the range, amplitude, and elevation and azimuth angles of multiple targets impinging on the CSCA. In this connection, the global search optimizers, namely Particle Swarm Optimization (PSO) and Differential Evolution (DE), are first developed separately and, to enhance their performance further, both are hybridized with a local search optimizer called the Active Set Algorithm (ASA). The performances of PSO, DE, PSO hybridized with ASA, and DE hybridized with ASA are compared with each other and then with some traditional techniques available in the literature, using root mean square error (RMSE) as the figure of merit.
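A minimal global-best PSO, of the kind used as one of the HCI optimizers above, can be sketched as follows. The sphere function stands in for the actual RMSE-based fitness of the radar problem, and all hyperparameter values are illustrative.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser (minimisation)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pval = np.apply_along_axis(f, 1, x)        # personal best values
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Sphere function as a stand-in for the RMSE fitness of the estimation problem
best, err = pso(lambda p: np.sum(p ** 2), dim=3)
```

Hybridization with a local optimizer, as in the paper, would take `best` as the starting point of a gradient-based refinement stage.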

  14. Computed tomography: a powerful imaging technique in the fields of dimensional metrology and quality control (United States)

    Probst, Gabriel; Boeckmans, Bart; Dewulf, Wim; Kruth, Jean-Pierre


X-ray computed tomography (CT) is slowly conquering its space in the manufacturing industry for dimensional metrology and quality control purposes. Its main advantage is its non-invasive and non-destructive character. Currently, CT is the only measurement technique that allows full 3D visualization of both inner and outer features of an object through a contactless probing system. Using hundreds of radiographs, acquired while rotating the object, a 3D representation is generated and dimensions can be verified. In this research, this non-contact technique was used for the inspection of assembled components: a dental cast model with 8 implants, connected by a screw-retained bar made of titanium. The retained bar includes a mating interface connection that should ensure a perfect fit, without residual stresses, when the connection is fixed with screws. CT was used to inspect the mating interfaces between these two components. Gaps at the connections can lead to bacterial growth and potential inconvenience for the patient, who would have to face a new surgery to replace his/her prosthesis. With the aid of CT, flaws in the design or manufacturing process that could lead to gaps at the connections could be assessed.

  15. Experimental determination of thermodynamic equilibrium in biocatalytic transamination

    DEFF Research Database (Denmark)

    Tufvesson, Pär; Jensen, Jacob Skibsted; Kroutil, Wolfgang


    The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones....... Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore...

  16. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy. (United States)

    Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald


To present the implementation and validation of a geometry-based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS, based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field-specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose, with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; within 1.6% for planar fluence, with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for simulations done with and without particle
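The rotational splitting step described above — exploiting the cylindrical symmetry of the beam to place split copies at random azimuths with correspondingly reduced statistical weight — can be sketched as follows. This is a generic illustration, not the TOPAS implementation; coordinates and weights are illustrative.

```python
import numpy as np

def split_rotational(x, y, weight, n_split=8, seed=0):
    """Split one particle into n_split copies at random azimuths on the
    same radius from the beam axis, each with weight/n_split so the
    total statistical weight is conserved."""
    rng = np.random.default_rng(seed)
    r = np.hypot(x, y)                           # radial distance is preserved
    phi = rng.uniform(0.0, 2.0 * np.pi, n_split) # random azimuthal angles
    return r * np.cos(phi), r * np.sin(phi), np.full(n_split, weight / n_split)

xs, ys, ws = split_rotational(1.0, 2.0, weight=1.0)
```

Because each copy carries 1/8 of the original weight, scored quantities remain unbiased while the phase space is sampled eight times more densely per primary history.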

  17. Experimental investigation of liquid chromatography columns by means of computed tomography

    DEFF Research Database (Denmark)

    Astrath, D.U.; Lottes, F.; Vu, Duc Thuong


    The efficiency of packed chromatographic columns was investigated experimentally by means of computed tomography (CT) techniques. The measurements were carried out by monitoring tracer fronts in situ inside the chromatographic columns. The experimental results were fitted using the equilibrium...

  18. ISIDORE, a probe for in situ trace metal speciation based on Donnan membrane technique with related electrochemical detection part 1: Equilibrium measurements

    Energy Technology Data Exchange (ETDEWEB)

    Parat, Corinne, E-mail: [Université de Pau et des Pays de l’Adour, CNRS UMR 5254, LCABIE, 64000 Pau (France); Pinheiro, J.P. [Université de Lorraine/ENSG, CNRS UMR 7360, LIEC, 54500 Nancy (France)


This work presents the development of a new probe (ISIDORE probe) based on the hyphenation of a Donnan Membrane Technique device (DMT) to a screen-printed electrode through a flow-cell, to determine the free zinc, cadmium and lead ion concentrations in natural samples, such as a freshwater river. The probe displays many advantages, namely: (i) the detection can be performed on-site, which avoids all problems inherent to sampling, transport and storage; (ii) the low volume of the acceptor solution implies shorter equilibration times; (iii) the electrochemical detection system allows monitoring the free ion concentration in the acceptor solution without sampling. - Highlights: • A new probe has been developed for on-site analysis of free metal ions. • A screen-printed electrode has been hyphenated to a DMT device. • Analysis time has been reduced to 6 h, against 36 h when using a classical DMT device. • This new probe has been successfully applied to a synthetic freshwater sample.

  19. The Anderson impurity model out-of-equilibrium: Assessing the accuracy of simulation techniques with an exact current-occupation relation (United States)

    Agarwalla, Bijay Kumar; Segal, Dvira


    We study the interacting, symmetrically coupled single impurity Anderson model. By employing the nonequilibrium Green's function formalism, we reach an exact relationship between the steady-state charge current flowing through the impurity (dot) and its occupation. We argue that the steady-state current-occupation relation can be used to assess the consistency of simulation techniques and identify spurious transport phenomena. We test this relation in two different model variants: First, we study the Anderson-Holstein model in the strong electron-vibration coupling limit using the polaronic quantum master equation method. We find that the current-occupation relation is violated numerically in standard calculations, with simulations bringing up incorrect transport effects. Using a numerical procedure, we resolve the problem efficiently. Second, we simulate the Anderson model with electron-electron interaction on the dot using a deterministic numerically exact time-evolution scheme. Here, we observe that the current-occupation relation is satisfied in the steady-state limit—even before results converge to the exact limit.

  20. Computation of Isobaric Vapor-Liquid Equilibrium Data for Binary and Ternary Mixtures of Methanol, Water, and Ethanoic Acid from T, p, x, and HmE Measurements

    Directory of Open Access Journals (Sweden)

    Daming Gao


Full Text Available Vapor-liquid equilibrium (VLE) data for the strongly associated ternary system methanol + water + ethanoic acid and its three constituent binary systems have been determined from total pressure-temperature-liquid-phase composition-molar excess enthalpy (p, T, x, HmE) measurements for the binary systems, using a novel pump ebulliometer at 101.325 kPa. The vapor-phase compositions of these binary systems were calculated from T, p, x and HmE based on the Q function of molar excess Gibbs energy, through an indirect method. Moreover, the experimental T, x data are used to estimate nonrandom two-liquid (NRTL), Wilson, Margules, and van Laar model parameters, and these parameters in turn are used to calculate vapor-phase compositions. The activity coefficients of the solution were correlated with the NRTL, Wilson, Margules, and van Laar models by least-squares fitting. The VLE data of the ternary system were well predicted from the binary interaction parameters of the NRTL, Wilson, Margules, and van Laar models without any additional adjustment, building the thermodynamic model of VLE for the ternary system and yielding the vapor-phase compositions and the calculated bubble points.
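As an example of one of the activity coefficient models named above, the two-parameter van Laar equations for a binary mixture can be evaluated as follows. The parameter values here are illustrative, not the fitted values from the paper.

```python
import numpy as np

def van_laar(x1, a12, a21):
    """Activity coefficients gamma1, gamma2 from the van Laar model:
    ln g1 = A12 * (A21*x2 / (A12*x1 + A21*x2))^2
    ln g2 = A21 * (A12*x1 / (A12*x1 + A21*x2))^2
    """
    x2 = 1.0 - x1
    denom = a12 * x1 + a21 * x2
    ln_g1 = a12 * (a21 * x2 / denom) ** 2
    ln_g2 = a21 * (a12 * x1 / denom) ** 2
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative parameters; vapor composition then follows from the
# modified Raoult's law  y_i * P = x_i * gamma_i * Psat_i.
g1, g2 = van_laar(0.4, a12=0.8, a21=0.5)
```

A quick consistency check: as x1 approaches 1, gamma1 approaches 1 (pure-component limit) while ln(gamma2) approaches A21 (infinite-dilution limit).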

  1. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo


We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retarded type. We apply the Waveform Relaxation algorithm, i.e., we provide a guess of the policy function and solve the resulting system of (deterministic) ordinary differential equations by standard techniques. For parametric restrictions, analytical solutions to the stochastic growth model and a novel......
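The Waveform Relaxation idea — guess a trajectory, freeze it, and repeatedly solve the resulting decoupled ODEs — can be illustrated on a toy linear oscillator. This is a generic sketch of the algorithm, not the paper's equilibrium model.

```python
import numpy as np

def waveform_relaxation(T=1.0, n=10001, sweeps=25):
    """Solve x' = -y, y' = x with x(0)=1, y(0)=0 (exact: cos t, sin t)
    by integrating each equation with the other variable's waveform
    frozen at the previous sweep (Gauss-Jacobi waveform relaxation)."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    x = np.ones_like(t)     # initial waveform guesses: constants at t=0 values
    y = np.zeros_like(t)
    for _ in range(sweeps):
        # rectangle-rule integration of the frozen right-hand sides
        x_new = 1.0 + np.concatenate(([0.0], np.cumsum(-y[:-1]) * dt))
        y_new = 0.0 + np.concatenate(([0.0], np.cumsum(x[:-1]) * dt))
        x, y = x_new, y_new
    return t, x, y

t, x, y = waveform_relaxation()
```

Each sweep solves two decoupled scalar problems; on a finite horizon the iteration converges like a Picard iteration, so a modest number of sweeps suffices.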

  2. Application of computer techniques in repair of oblique facial clefts with outer-table calvarial bone grafts. (United States)

    Wang, Jue; Liu, Jian-feng; Liu, Wei; Wang, Jie-cong; Wang, Shi-yu; Gui, Lai


    This study focused on the application of computer-aided and rapid prototyping techniques in the repair of oblique facial clefts with outer-table calvarial bone. Five patients with oblique facial clefts underwent repair with outer-table calvarial bone. A mirror technique and rapid prototyping techniques were applied to design and prefabricate the individualized template for the preoperative repair of orbital inferior wall and maxillary anterior wall defects. Using computer software, the ideal region from which to take outer-table calvarial bone was located according to the size and surface curvature of the individualized template. During the operation, outer-table calvarial bone was fixed according to the shape of the individualized template, and bone onlay grafting was carried out after appropriate trimming. Surgical accuracy was evaluated by comparing the preoperative and postoperative 3-dimensional reconstructed images. With computer-aided and rapid prototyping techniques, all 5 patients had an ideal clinical outcome with few complications. The 3-dimensional preoperative design images and postoperative images fit well. Six-month to 8-year postoperative follow-up demonstrated that 4 patients had good aesthetic facial appearances and 1 had developed recurrence of lower eyelid shortage. Computer-aided and rapid prototyping techniques can offer surgeons the ability to accurately design individualized templates for craniofacial deformity and perform a simulated operation for greatly improved surgical accuracy. These techniques are useful treatment modalities in the surgical management of oblique facial clefts.

  3. Examination of China's performance and thematic evolution in quantum cryptography research using quantitative and computational techniques. (United States)

    Olijnyk, Nicholas V


This study performed two phases of analysis to shed light on the performance and thematic evolution of China's quantum cryptography (QC) research. First, large-scale research publication metadata derived from QC research published from 2001-2017 was used to examine the research performance of China relative to that of global peers using established quantitative and qualitative measures. Second, this study identified the thematic evolution of China's QC research using co-word cluster network analysis, a computational science mapping technique. The results from the first phase indicate that over the past 17 years, China's performance has evolved dramatically, placing it in a leading position. Among the most significant findings is the exponential rate at which all of China's performance indicators (i.e., publication frequency, citation score, H-index) are growing. China's H-index (a normalized indicator) has surpassed all other countries' over the last several years. The second phase of analysis shows how China's main research focus has shifted among several QC themes, including quantum-key-distribution, photon-optical communication, network protocols, and quantum entanglement with an emphasis on applied research. Several themes were observed across time periods (e.g., photons, quantum-key-distribution, secret-messages, quantum-optics, quantum-signatures); some themes disappeared over time (e.g., computer-networks, attack-strategies, bell-state, polarization-state), while others emerged more recently (e.g., quantum-entanglement, decoy-state, unitary-operation). Findings from the first phase of analysis provide empirical evidence that China has emerged as the global driving force in QC. Considering China is the premier driving force in global QC research, findings from the second phase of analysis provide an understanding of China's QC research themes, which can provide clarity into how QC technologies might take shape. QC and science and technology policy researchers
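The H-index mentioned above is straightforward to compute from a list of per-paper citation counts; the citation counts below are made up for illustration.

```python
def h_index(citations):
    """H-index: the largest h such that h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank                           # the first rank-th paper still qualifies
    return h

print(h_index([10, 8, 5, 4, 3]))   # → 4
```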

  4. Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles

    Directory of Open Access Journals (Sweden)

    M. Rahman


Full Text Available This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by permittivity and conductivity profiles which vary only with depth; the scattering problem discussed is thus one-dimensional. The modeling tool is divided into two schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed with a Green's function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within the medium can be calculated by constructing a Green's operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point. It is a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving the inverse scattering problem by reconstructing the permittivity profile of the medium. Though several algorithms could be used to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here because it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined as the time required for the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data using a deconvolution technique. The rest of the task can easily be performed by applying a numerical approach to determine the different profile parameters. Both solvers have been found to have the
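The reflection-kernel retrieval step — deconvolving the incident pulse from the reflected wave — can be sketched as a regularised frequency-domain division. This is a simplified, Wiener-style stand-in for the deconvolution mentioned above; the signals used here are synthetic.

```python
import numpy as np

def deconvolve(reflected, incident, eps=1e-6):
    """Recover kernel k from reflected = incident (*) k (circular
    convolution) by regularised division in the frequency domain."""
    n = len(reflected)
    R = np.fft.fft(reflected, n)
    I = np.fft.fft(incident, n)
    # Wiener-style division: eps guards against near-zero spectral bins
    K = R * np.conj(I) / (np.abs(I) ** 2 + eps)
    return np.real(np.fft.ifft(K))
```

With a short incident pulse whose spectrum has no zeros, the recovered kernel matches the true one to within the regularisation bias.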


    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu


    Full Text Available The role of segmentation in image processing is to separate foreground from background. In this process, the features become clearly visible when appropriate filters are applied to the image. In this paper, emphasis is laid on the segmentation of biometric retinal images to filter out the vessels explicitly for evaluating bifurcation points and features for diabetic retinopathy. Segmentation is performed by calculating ridges or by morphology. Ridges are areas of an image where there is sharp contrast in features. Morphology targets features using structuring elements. Structuring elements have different shapes, such as disks and lines, which are used to extract features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques have also been deployed to find the contours of the retinal images. After segmentation was performed, artifacts in the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of health care management, image segmentation has an important role to play, as it helps determine whether a person is normal or has a disease, especially diabetes. During segmentation, diseased features are classified either as diseased or as artifacts. The problem arises when artifacts are classified as diseased features; this misclassification is discussed in the Analysis section. We have achieved fast computing with better performance, in terms of speed, for non-repeating features when compared to repeating features.
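The morphological operations mentioned above, a structuring element applied through erosion and dilation, can be sketched with plain numpy. This is a generic illustration of morphology-based artifact removal (the function names and the opening operation are assumptions, not the paper's pipeline):

```python
import numpy as np

def disk(radius):
    """Disk-shaped structuring element as a boolean array."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def erode(mask, se, pad_value=False):
    """Binary erosion: a pixel survives only if the structuring element
    fits entirely inside the foreground at that position."""
    r = se.shape[0] // 2
    padded = np.pad(mask, r, constant_values=pad_value)
    out = np.ones_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if se[dy + r, dx + r]:
                out &= padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

def dilate(mask, se):
    """Binary dilation via duality with erosion of the complement."""
    return ~erode(~mask, se[::-1, ::-1], pad_value=True)

def opening(mask, se):
    """Erosion followed by dilation: removes foreground specks smaller
    than the structuring element while keeping large regions."""
    return dilate(erode(mask, se), se)
```

An opening with a small disk is a common way to suppress speck-like artifacts in a binary vessel mask before measuring bifurcation points.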

  6. A technique for determination of lung outline and regional lung air volume distribution from computed tomography. (United States)

    Fleming, John; Conway, Joy; Majoral, Caroline; Bennett, Michael; Caillibotte, Georges; Montesantos, Spyridon; Katz, Ira


    Determination of the lung outline and regional lung air volume is of value in the analysis of the three-dimensional (3D) distribution of aerosol deposition from radionuclide imaging. This study describes a technique for using computed tomography (CT) scans for this purpose. Low-resolution CT scans of the thorax were obtained during tidal breathing in 11 healthy control male subjects on two occasions. The 3D outline of the lung was determined by image processing with minimal user interaction. A 3D map of air volume was derived and total lung air volume calculated. The regional distribution of air volume from center to periphery of the lung was analyzed using a radial transform, and the outer-to-inner ratio of air volume was determined. The average total air volume in the lung was 1,900±126 mL (1 SEM), which is in general agreement with the expected value for adult male subjects in the supine position. The fractional air volume concentration increased from the center toward the periphery of the lung. Outer-to-inner (O/I) ratios were higher for the left lung [11.5±1.8 (1 SD)] than for the right [10.1±0.8 (1 SD)]. A technique for outlining the lungs from CT images and obtaining an image of the distribution of air volume has thus been described. The normal range of various parameters describing the regional distribution of air volume is presented, together with a measure of intrasubject repeatability. This technique and data will be of value in analyzing 3D radionuclide images of aerosol deposition.
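The radial transform underlying the outer-to-inner ratio can be illustrated as follows. Binning voxels by normalized distance from the mask centroid and splitting at half the maximum radius are assumptions made for this sketch, not the paper's exact definition:

```python
import numpy as np

def outer_inner_ratio(air_map, mask, split=0.5):
    """Ratio of summed air volume in the outer vs. inner region of a
    masked organ, with regions defined by normalized radial distance
    from the centroid of the mask (a simple radial transform)."""
    coords = np.argwhere(mask)
    center = coords.mean(axis=0)
    d = np.linalg.norm(coords - center, axis=1)
    d_norm = d / d.max()
    vals = air_map[mask]  # same C-order traversal as np.argwhere
    inner = vals[d_norm <= split].sum()
    outer = vals[d_norm > split].sum()
    return outer / inner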

  7. Novel computational and analytic techniques for nonlinear systems applied to structural and celestial mechanics (United States)

    Elgohary, Tarek Adel Abdelsalam

    In this Dissertation, computational and analytic methods are presented to address nonlinear systems with applications in structural and celestial mechanics. Scalar Homotopy Methods (SHM) are first introduced for the solution of general systems of nonlinear algebraic equations. The methods are applied to the solution of postbuckling and limit load problems of solids and structures as exemplified by simple plane elastic frames, considering only geometrical nonlinearities. In many problems, instead of simply adopting a root solving method, it is useful to study the particular problem in more detail in order to establish an especially efficient and robust method. Such a problem arises in satellite geodesy coordinate transformation where a new highly efficient solution, providing global accuracy with a non-iterative sequence of calculations, is developed. Simulation results are presented to compare the solution accuracy and algorithm performance for applications spanning the LEO-to-GEO range of missions. Analytic methods are introduced to address problems in structural mechanics and astrodynamics. Analytic transfer functions are developed to address the frequency domain control problem of flexible rotating aerospace structures. The transfer functions are used to design a Lyapunov stable controller that drives the spacecraft to a target position while suppressing vibrations in the flexible appendages. In astrodynamics, a Taylor series based analytic continuation technique is developed to address the classical two-body problem. A key algorithmic innovation for the trajectory propagation is that the classical averaged approximation strategy is replaced with a rigorous series based solution for exactly computing the acceleration derivatives. Evidence is provided to demonstrate that high precision solutions are easily obtained with the analytic continuation approach. For general nonlinear initial value problems (IVPs), the method of Radial Basis Functions time domain
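The homotopy idea used above for systems of nonlinear algebraic equations can be illustrated with a minimal scalar Newton homotopy. The deformation H(x, t) = f(x) − (1 − t)·f(x₀), the step count, and the function name are illustrative assumptions, not the dissertation's SHM formulation:

```python
def homotopy_solve(f, fprime, x0, steps=50, corrections=3):
    """Scalar Newton homotopy: deform H(x, t) = f(x) - (1 - t) * f(x0)
    from the trivial problem at t = 0 to f(x) = 0 at t = 1, applying a
    few Newton corrections at each continuation step."""
    x = float(x0)
    f0 = f(x)
    for k in range(1, steps + 1):
        target = (1.0 - k / steps) * f0
        for _ in range(corrections):  # correct toward f(x) = target
            x -= (f(x) - target) / fprime(x)
    return x
```

Tracking the deformed problem keeps each Newton solve close to its starting point, which is what gives homotopy methods their robustness on problems where plain Newton iteration from a poor guess diverges.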

  8. Evaluation of Computational Techniques for Parameter Estimation and Uncertainty Analysis of Comprehensive Watershed Models (United States)

    Yen, H.; Arabi, M.; Records, R.


    The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and to simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms, from computationally frugal approaches to formal implementations of Bayesian statistics using Markov Chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by analysts according to the goals of the study, using various error statistics such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches, including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffle complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm.
The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
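Of the samplers listed, Metropolis-Hastings is the simplest to sketch. The random-walk proposal and the toy one-parameter posterior in the test are illustrative assumptions, not the study's SWAT setup:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step, rng):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step), accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    x, lp = x0, log_post(x0)
    samples, accepted = [], 0
    for _ in range(n_samples):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        samples.append(x)
    return np.array(samples), accepted / n_samples
```

In a watershed application, `log_post` would wrap a model run and a likelihood over the weighted errors; here a standard normal stands in as the target.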

  9. The comparison of bolus tracking and test bolus techniques for computed tomography thoracic angiography in healthy beagles

    Directory of Open Access Journals (Sweden)

    Nicolette Cassel


    Full Text Available Computed tomography thoracic angiography studies were performed on five adult beagles using the bolus tracking (BT) technique and the test bolus (TB) technique, performed at least two weeks apart. For the BT technique, 2 mL/kg of 300 mgI/mL iodinated contrast agent was injected intravenously. Scans were initiated when the contrast in the aorta reached 150 Hounsfield units (HU). For the TB technique, the dogs received a test dose of 15% of 2 mL/kg of 300 mgI/mL iodinated contrast agent, followed by a series of low-dose sequential scans. The full dose of the contrast agent was then administered and the scans were conducted at optimal times as identified from time attenuation curves. Mean attenuation in HU was measured in the aorta (Ao) and right caudal pulmonary artery (rCPA). Additional observations included the study duration, milliAmpere (mA), computed tomography dose index volume (CTDI[vol]) and dose length product (DLP). The attenuation in the Ao (BT = 660.52 HU ± 138.49 HU, TB = 469.82 HU ± 199.52 HU, p = 0.13) and in the rCPA (BT = 606.34 HU ± 143.37 HU, TB = 413.72 HU ± 174.99 HU, p = 0.28) did not differ significantly between the two techniques. The BT technique was conducted in a significantly shorter time period than the TB technique (p = 0.03). The mean mA for the BT technique was significantly lower than the TB technique (p = 0.03), as was the mean CTDI[vol] (p = 0.001). The mean DLP did not differ significantly between the two techniques (p = 0.17). No preference was given to either technique when evaluating the Ao or rCPA, but the BT technique was shown to be shorter in duration and resulted in less DLP than the TB technique.
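The test-bolus timing step, reading the scan delay off the time-attenuation curve, amounts to locating the HU peak. The small post-peak offset and the function name are illustrative assumptions:

```python
def optimal_scan_delay(times_s, hu_values, offset_s=2.0):
    """Return the scan start time: time-to-peak of the test-bolus
    time-attenuation curve plus a small fixed offset (seconds)."""
    peak_idx = max(range(len(hu_values)), key=hu_values.__getitem__)
    return times_s[peak_idx] + offset_s
```

With densely sampled low-dose scans, the discrete peak is usually an adequate stand-in for the true peak of the underlying curve.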


  11. Coronary plaque imaging with multislice computed tomography: technique and clinical applications

    Energy Technology Data Exchange (ETDEWEB)

    Cademartiri, F.; Palumbo, A.A. [Dept. of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Dept. of Radiology and Cardiology, Azienda Ospedaliero-Universitaria, Parma (Italy); La Grutta, L. [Dept. of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Dept. of Radiology, Policlinico P. Giaccone, Univ. of Palermo (Italy); Maffei, E. [Dept. of Radiology and Cardiology, Azienda Ospedaliero-Universitaria, Parma (Italy); Runza, G.; Bartolotta, T.V.; Midiri, M. [Dept. of Radiology, Policlinico P. Giaccone, Univ. of Palermo (Italy); Pugliese, F.; Mollet, N.R.A.; Krestin, G.P. [Dept. of Radiology, Erasmus Medical Center, Rotterdam (Netherlands)


    The composition of an atherosclerotic lesion, rather than solely the degree of stenosis, is considered to be an important determinant of acute coronary events. Whereas until recently only invasive techniques have been able to provide clues about plaque composition with consistent reproducibility, several recent studies have revealed the potential of multislice computed tomography (MSCT) for noninvasive plaque imaging. Coronary MSCT has the potential to detect coronary plaques and to characterize their composition based on the X-ray attenuation features of each structure. MSCT may also reveal the total plaque burden (calcified and non-calcified components) for individual patients with coronary atherosclerosis. However, several parameters (i.e. lumen attenuation, convolution filtering, body mass index of the patient, and contrast-to-noise ratio of the images) can modify the attenuation values that are used to define the composition of coronary plaques. The detection of vulnerable plaques will require more sophisticated scanners combined with newer software applications able to provide quantitative information. The aim of this article is to discuss the potential benefits and limitations of MSCT in coronary plaque imaging. (orig.)

  12. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques. (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad


    Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with several metaheuristic classification approaches. Images of four different classes of milled rice, including Low-processed sound grains (LPS), Low-processed broken grains (LPB), High-processed sound grains (HPS), and High-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, including artificial neural networks, support vector machines, decision trees, and Bayesian networks, were utilized to classify the milled rice samples. Results of the validation process indicated that an artificial neural network with 12-5-4 topology had the highest classification accuracy (98.72 %). It was followed by a support vector machine with the Universal Pearson VII kernel function (98.48 %), a decision tree with the REP algorithm (97.50 %), and a Bayesian network with the Hill Climber search algorithm (96.89 %). The results presented in this paper can be used to develop an efficient system for fully automated classification and sorting of milled rice grains.
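The classifiers above (ANN, SVM, decision tree, Bayesian network) all operate on image-derived feature vectors. As a minimal stand-in for that pipeline, a nearest-centroid classifier over such feature vectors can be sketched; this is deliberately not one of the four methods used, just the simplest classifier with the same input/output shape:

```python
import numpy as np

def fit_centroids(features, labels):
    """Mean feature vector per class."""
    classes = np.unique(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(features, classes, centroids):
    """Assign each sample to the class with the nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

A real grading system would replace the centroid rule with a trained model (e.g. a 12-5-4 network) but keep the same fit/predict structure.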

  13. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations (United States)

    Navard, Sharon E.


    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
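The four-to-one requirement described above is a test uncertainty ratio check, and combining independent uncertainty components by root-sum-of-squares is the standard (GUM-style) approach. The function names are illustrative:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainty components."""
    return math.sqrt(sum(u * u for u in components))

def meets_four_to_one(instrument_tolerance, standard_uncertainty):
    """Check the 4:1 ratio of instrument specification to the
    calibrating standard's uncertainty."""
    return instrument_tolerance / standard_uncertainty >= 4.0
```

When the check fails, as in the force-machine example, the options are to quantify and reduce the dominant components or to state the actual calibration uncertainty explicitly.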

  14. Design and manufacturing of patient-specific orthodontic appliances by computer-aided engineering techniques. (United States)

    Barone, Sandro; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano


    Orthodontic treatments are usually performed using fixed brackets or removable oral appliances, which are traditionally made from alginate impressions and wax registrations. Among removable devices, eruption guidance appliances are used for early orthodontic treatments in order to intercept and prevent malocclusion problems. Commercially available eruption guidance appliances, however, are symmetric devices produced in a few standard sizes. For this reason, they cannot meet all of a specific patient's needs, since actual dental anatomies present various geometries and asymmetric conditions. In this article, a computer-aided design-based methodology for the design and manufacturing of a patient-specific eruption guidance appliance is presented. The proposed approach is based on the digitalization of several steps of the overall process: from the digital reconstruction of the patient's anatomy to the manufacturing of the customized appliance. A finite element model has been developed to evaluate the temporomandibular joint disk stress level caused by using symmetric eruption guidance appliances under different teeth misalignment conditions. The developed model can then be used to guide the design of a patient-specific appliance with the aim of reducing patient discomfort. For this purpose, two different customization levels are proposed in order to address both arch-level and single-tooth misalignment issues. A low-cost manufacturing process, based on an additive manufacturing technique, is finally presented and discussed.

  15. Pinning technique for shoulder fractures in adolescents: computer modelling of percutaneous pinning of proximal humeral fractures (United States)

    Mehin, Ramin; Mehin, Afshin; Wickham, David; Letts, Merv


    Background In the technique of percutaneous pinning of proximal humerus fractures, the appropriate entry site and trajectory of the pins is unknown, especially in the adolescent population. We sought to determine the ideal entry site and trajectory of the pins. Methods We used magnetic resonance images of nonfractured shoulders in conjunction with radiographs of shoulder fractures that were treated with closed reduction and pinning to construct 3-dimensional computer-generated models. We used engineering software to determine the ideal location of the pins. We also conducted a literature review. Results The nonfractured adolescent shoulder has an articular surface diameter of 41.3 mm, articular surface thickness of 17.4 mm and neck shaft angle of 36°. Although adolescents and adults have relatively similar shoulder skeletal anatomy, they suffer different types of fractures. In our study, 14 of 16 adolescents suffered Salter–Harris type II fractures. The ideal location for the lateral 2 pins in an anatomically reduced shoulder fracture is 4.4 cm and 8.0 cm from the proximal part of the humeral head, directed at 21.2° in the coronal plane relative to the humeral shaft. Conclusion Operative management of proximal humerus fractures in adolescents requires knowledge distinct from that required for adult patients. This is the first study to examine the anatomy of the nonfractured proximal humerus in adolescents. It is also the first study to attempt to model the positioning of percutaneous proximal humerus pins. PMID:20011155

  16. Evaluation of efficacy of metal artefact reduction technique using contrast media in Computed Tomography (United States)

    Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah


    The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in Computed Tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated from polymethyl methacrylate (PMMA). Three different contrast agents (iodine, barium and gadolinium) were filled in small PMMA tubes and placed inside the water-based PMMA adult abdomen phantom. An orthopedic metal screw was placed in each small PMMA tube separately. The two types of orthopedic metal screw (stainless steel and titanium alloy) were scanned separately, with single-energy CT at 120 kV and dual-energy CT at fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current modulation care4Dose setting, and the scans were set at different pitch and slice thickness values. The use of the contrast media technique on orthopedic metal screws was optimised using pitch = 0.60 mm and slice thickness = 5.0 mm. The use of contrast media can reduce metal streaking artefacts on CT images, enhance the CT images surrounding the implants, and has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.

  17. Effects of preparation techniques on root canal shaping assessed by micro-computed tomography. (United States)

    Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Berisha, Merita


    Root canal shaping without any procedural error is of the utmost importance. The purpose of this study was therefore to use micro-computed tomography to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files. Sixty extracted upper second premolars were selected and divided into two groups of 30. Before preparation, all samples were scanned by micro-CT. Then, 30 teeth were prepared with stainless steel files and the remaining 30 with ProTaper rotary files. Canal transportation and centering ability before and after root canal shaping were assessed using micro-CT. The amount and direction of canal transportation and the centering ratio of each instrument were determined in the coronal, middle, and apical parts of the canal. The two groups were statistically compared using one-way ANOVA. ProTaper rotary files gave less transportation (p<0.001) and better centering ability (p<0.00001) than stainless steel files. The manual technique for preparation of root canals with stainless steel files produces more canal transportation, whereas rotary files remain more centered in the canal.
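Canal transportation and centering ratio in micro-CT studies are commonly computed with the Gambill-Glickman formulas; assuming that convention (the abstract does not spell it out), with x and y the shortest distances from the mesial and distal canal edges to the root edges before (1) and after (2) shaping:

```python
def transportation(x1, x2, y1, y2):
    """Canal transportation: |(x1 - x2) - (y1 - y2)|; zero means the
    canal was enlarged evenly toward both walls."""
    return abs((x1 - x2) - (y1 - y2))

def centering_ratio(x1, x2, y1, y2):
    """Centering ratio: min/max of the two wall-thickness changes;
    1.0 indicates a perfectly centered preparation."""
    a, b = x1 - x2, y1 - y2
    hi, lo = max(a, b), min(a, b)
    return 1.0 if hi == 0 else lo / hi
```

Both quantities are evaluated slice by slice (coronal, middle, apical) and then compared across instrument groups, as in the study.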

  18. Epileptic seizure predictors based on computational intelligence techniques: a comparative study with 278 patients. (United States)

    Alexandre Teixeira, César; Direito, Bruno; Bandarabadi, Mojtaba; Le Van Quyen, Michel; Valderrama, Mario; Schelter, Bjoern; Schulze-Bonhage, Andreas; Navarro, Vincent; Sales, Francisco; Dourado, António


    The ability of computational intelligence methods to predict epileptic seizures is evaluated in long-term EEG recordings of 278 patients suffering from pharmaco-resistant partial epilepsy, also known as refractory epilepsy. This extensive study in seizure prediction considers the 278 patients from the European Epilepsy Database, collected in three epilepsy centres: Hôpital de la Pitié-Salpêtrière, Paris, France; Universitätsklinikum Freiburg, Germany; and Centro Hospitalar e Universitário de Coimbra, Portugal. For a considerable number of patients it was possible to find a patient-specific predictor with acceptable performance, for example predictors that anticipate at least half of the seizures with a rate of false alarms of no more than 1 in 6 h (0.15 h⁻¹). We observed that the epileptic focus localization, data sampling frequency, testing duration, number of seizures in testing, type of machine learning, and preictal time significantly influence the prediction performance. The results support optimism about the feasibility of a patient-specific prospective alarming system based on machine learning techniques that combines several univariate (single-channel) electroencephalogram features. We envisage that this work will serve as a benchmark of value for future studies based on the European Epilepsy Database. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
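The performance criterion quoted above (at least half the seizures anticipated, false alarms at most 0.15 h⁻¹) reduces to two ratios; the function below is an illustrative sketch of that evaluation, not the study's code:

```python
def prediction_performance(n_seizures, n_predicted, n_false_alarms, hours):
    """Sensitivity and false-alarm rate of a seizure predictor, with the
    acceptance criterion from the text (>= 50 % sensitivity and
    <= 0.15 false alarms per hour)."""
    sensitivity = n_predicted / n_seizures
    far_per_hour = n_false_alarms / hours
    acceptable = sensitivity >= 0.5 and far_per_hour <= 0.15
    return sensitivity, far_per_hour, acceptable
```

The trade-off between the two ratios is what patient-specific tuning of features and preictal time adjusts.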

  19. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)


    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  20. Computed tomography scanning techniques for the evaluation of cystic fibrosis lung disease. (United States)

    Robinson, Terry E


    Multidetector computed tomography (MDCT) scanners allow diagnosis and monitoring of cystic fibrosis (CF) lung disease at substantially lower radiation doses than with prior scanners. Complete spiral chest CT scans are accomplished in less than 10 seconds and scanner advances now allow the acquisition of comprehensive volumetric datasets for three-dimensional reconstruction of the lungs and airways. There are two types of CT scanning protocols currently used to assess CF lung disease: (1) high-resolution CT (HRCT) imaging, in which thin 0.5-1.5-mm slices are obtained every 0.5, 1, or 2 cm from apex to base for inspiratory scans, and limited, spaced HRCT slices obtained for expiratory scans; and (2) complete spiral CT imaging covering the entire lung for inspiratory and expiratory scanning. These scanning protocols allow scoring of CF lung disease and provide CT datasets to quantify airway and air-trapping measurements. CF CT scoring systems typically assess bronchiectasis, bronchial wall thickening, mucus plugging, and atelectasis/consolidation from inspiratory scans, whereas air trapping is scored from expiratory imaging. Recently, CT algorithms have been developed for both HRCT and complete spiral CT imaging to quantify several airway indices, to determine the volume and density of the lung, and to assess regional and global air trapping. CT scans are currently acquired by either controlled-volume scanning techniques (controlled-ventilation infant CT scanning or spirometer-controlled CT scanning in children and adults) or by voluntary breath holds at full inflation and deflation.

  1. On the thermodynamic equilibrium between (R)-2-hydroxyacyl-CoA and 2-enoyl-CoA. (United States)

    Parthasarathy, Anutthaman; Buckel, Wolfgang; Smith, David M


    A combined experimental and computational approach has been applied to investigate the equilibria between several alpha-hydroxyacyl-CoA compounds and their 2-enoyl-CoA derivatives. In contrast to those of their beta, gamma and delta counterparts, the equilibria for the alpha-compounds are relatively poorly characterized, but qualitatively they appear to be unusually sensitive to substituents. Using a variety of techniques, we have succeeded in measuring the equilibrium constants for the reactions beginning from 2-hydroxyglutaryl-CoA and lactyl-CoA. A complementary computational evaluation of the equilibrium constants shows quantitative agreement with the measured values. By examining the computational results, we arrive at an explanation of the substituent sensitivity and provide a prediction for the, as yet unmeasured, equilibrium involving 2-hydroxyisocaproyl-CoA.
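Measured and computed equilibrium constants of the kind compared above are related to reaction free energies by K = exp(−ΔG°/RT); a minimal helper makes the conversion explicit (the function name is illustrative):

```python
import math

GAS_CONSTANT = 8.314462618  # J/(mol*K)

def equilibrium_constant(delta_g_j_per_mol, temperature_k=298.15):
    """K = exp(-dG / (R T)) for a standard reaction free energy in J/mol."""
    return math.exp(-delta_g_j_per_mol / (GAS_CONSTANT * temperature_k))
```

This is the relation through which a computed free-energy difference can be checked quantitatively against a measured equilibrium constant.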

  2. Non-Equilibrium Properties from Equilibrium Free Energy Calculations (United States)

    Pohorille, Andrew; Wilson, Michael A.


    Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to the dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it can be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. The conductance of model ion channels has been calculated directly, by counting the number of ion crossing events observed during long molecular dynamics simulations, and compared with the conductance obtained by solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
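The direct route described above, counting crossing events, translates into a conductance estimate as follows. The helper is an illustrative sketch (one elementary charge per crossing, steady state assumed):

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs

def channel_conductance(n_crossings, sim_time_s, voltage_v,
                        charge_c=ELEMENTARY_CHARGE):
    """Estimate single-channel conductance (siemens) from the number of
    ion crossings observed in a simulation of length sim_time_s under a
    constant applied voltage."""
    current_a = n_crossings * charge_c / sim_time_s
    return current_a / voltage_v
```

Repeating the estimate over a range of voltages yields the voltage-current curve that the abstract compares against the Nernst-Planck solution.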

  3. Prediction of Quality Features in Iberian Ham by Applying Data Mining on Data From MRI and Computer Vision Techniques


    Daniel Caballero; Andrés Caro; Trinidad Perez-Palacios; Pablo G. Rodriguez; Ramón Palacios


    This paper aims to predict quality features of Iberian hams by using non-destructive methods of analysis and data mining. Iberian hams were analyzed by Magnetic Resonance Imaging (MRI) and Computer Vision Techniques (CVT) throughout their ripening process, and physico-chemical parameters were also measured. The obtained data were used to create an initial database. Deductive techniques of data mining (multiple linear reg...

  4. Flight Behaviors of a Complex Projectile Using a Coupled Computational Fluid Dynamics (CFD)-based Simulation Technique: Free Motion (United States)


    The paper "Flight Behaviors of a Complex Projectile Using a Coupled CFD-based Simulation Technique: Free Motion" involves the coupling of computational fluid dynamics (CFD) and rigid body dynamics (RBD) codes for the simulation of projectile free-flight motion in a time-accurate manner.

  5. Securing Cloud Infrastructure for High Performance Scientific Computations Using Cryptographic Techniques


    Patra, G. K.; Nilotpal Chakraborty


    In today's scenario, a large number of engineering and scientific applications require high-performance computation in order to simulate various models. Scientific and engineering models such as climate modeling, weather forecasting, large-scale ocean modeling and cyclone prediction require parallel processing of data on high-performance computing infrastructure. With the rise of cloud computing, it would be advantageous if such high-performance computations could be provided as a service to th...

  6. Benzodiazepine receptor equilibrium constants for flumazenil and midazolam determined in humans with the single photon emission computer tomography tracer [123I]iomazenil

    DEFF Research Database (Denmark)

    Videbaek, C; Friberg, L; Holm, S


    This study is based on the steady state method for the calculation of Kd values recently described by Lassen (J. Cereb. Blood Flow Metab. 12 (1992), 709), in which a constant infusion of the examined nonradioactive ligand is used with a bolus injection of tracer. Eight volunteers were examined...... twice, once without receptor blockade and once with a constant degree of partial blockade of the benzodiazepine receptors by infusion of nonradioactive flumazenil (Lanexat) or midazolam (Dormicum). Single photon emission computer tomography and blood sampling were performed intermittently for 6 h after...

  7. Success rates for computed tomography-guided musculoskeletal biopsies performed using a low-dose technique

    Energy Technology Data Exchange (ETDEWEB)

    Motamedi, Kambiz; Levine, Benjamin D.; Seeger, Leanne L.; McNitt-Gray, Michael F. [UCLA Health System, Radiology, Los Angeles, CA (United States)


    To evaluate the success rate of a low-dose (50 % mAs reduction) computed tomography (CT) biopsy technique. This protocol was adopted based on other successful reduced-radiation-dose CT protocols in our department, which were implemented in conjunction with quality improvement projects. The technique included a scout view and an initial localizing scan at standard dose. Additional scans obtained for further guidance or needle adjustment were acquired by reducing the tube current-time product (mAs) by 50 %. The radiology billing data were searched for CT-guided musculoskeletal procedures performed over a period of 8 months following the initial implementation of the protocol. These were reviewed for the type of procedure and compliance with the implemented protocol. The compliant CT-guided biopsy cases were then retrospectively reviewed for patient demographics, tumor pathology, and lesion size. Pathology results were compared to the ultimate diagnoses and were categorized as diagnostic, accurate, or successful. Of 92 CT-guided procedures performed during this period, two were excluded as they were not biopsies (one joint injection and one drainage), 19 were excluded due to non-compliance (operators neglected to follow the protocol), and four were excluded due to lack of available follow-up in our electronic medical records. A total of 67 compliant biopsies were performed in 63 patients (two had two biopsies, and one had three biopsies). There were 32 males and 31 females with an average age of 50 years (range, 15-84 years). Of the 67 biopsies, five were non-diagnostic and inaccurate, and thus unsuccessful (7 %); five were diagnostic but inaccurate, and thus unsuccessful (7 %); 57 were diagnostic and accurate, and thus successful (85 %). These results are comparable with results published in the radiology literature. The success rate of CT-guided biopsies using a low-dose protocol is comparable to published rates for conventional-dose biopsies. 
The implemented low-dose protocol

  8. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques (United States)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three-dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. 
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations

  9. Survey Research: Determining Sample Size and Representative Response. and The Effects of Computer Use on Keyboarding Technique and Skill. (United States)

    Wunsch, Daniel R.; Gades, Robert E.


    Two articles are presented. The first reviews and suggests procedures for determining appropriate sample sizes and for determining the response representativeness in survey research. The second presents a study designed to determine the effects of computer use on keyboarding technique and skill. (CT)

  10. Stature estimation from sternum length using computed tomography-volume rendering technique images of western Chinese. (United States)

    Zhang, Kui; Luo, Ying-zhen; Fan, Fei; Zheng, Jie-qian; Yang, Min; Li, Tao; Pang, Tao; Zhang, Jian; Deng, Zhen-hua


    The objective of the present investigation was to generate linear regression models for stature estimation on the basis of sternum length derived from computed tomography-volume rendering technique (CT-VRT) images of Western Chinese. The study sample comprised 288 Western Chinese individuals, including 124 females and 164 males, with documented ages between 19 and 78 years, and was randomly divided into two subgroups. The linear regression analysis for the calibration sample data yielded the following formulae: male stature (cm) = 137.28 + 1.99*combined length of manubrium and mesosternum, and female stature (cm) = 111.59 + 3.51*combined length of manubrium and mesosternum. Pearson's correlation coefficients for the regression models were r = 0.459 and r = 0.541 for the male and female formulae, respectively. The standard errors of the estimate (SEE) were 4.76 cm for the male equation and 6.73 cm for the female equation. The 95% confidence intervals of the predicted values encompassed the correct stature of all specimens in the validation sample. The regression equations derived from sternum length in the present study can be used for stature estimation, and the length of the sternum is a reliable predictor of stature in Chinese when better predictors such as the long bones are not available; the CT-VRT method may be a practical method for stature estimation. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
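The two published regression equations can be applied directly. The sketch below also attaches an approximate 95% interval of ±1.96·SEE around the point estimate; that interval construction is an approximation added here, not a procedure from the paper, and sternum lengths are assumed to be in cm:

```python
def estimate_stature(sternum_cm, sex):
    """Stature estimate (cm) from the combined length of the manubrium
    and mesosternum, using the regression equations reported in the
    study. Returns (estimate, approximate 95% interval)."""
    if sex == "male":
        est, see = 137.28 + 1.99 * sternum_cm, 4.76
    elif sex == "female":
        est, see = 111.59 + 3.51 * sternum_cm, 6.73
    else:
        raise ValueError("sex must be 'male' or 'female'")
    # Approximate 95% interval from the standard error of the estimate
    return est, (est - 1.96 * see, est + 1.96 * see)

est, (lo, hi) = estimate_stature(15.0, "male")   # 15 cm sternum
```

For a 15 cm combined sternum length the male equation gives an estimate of about 167 cm, with the male SEE of 4.76 cm defining the interval width.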

  11. Validation of a low dose simulation technique for computed tomography images.

    Directory of Open Access Journals (Sweden)

    Daniela Muenzel

    Full Text Available PURPOSE: Evaluation of a new software tool for generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. MATERIALS AND METHODS: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. RESULTS: Image characteristics of simulated low-dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively (p>0.05). Confidence intervals of the discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. CONCLUSION: Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities with regard to staff education, protocol optimization and the introduction of new techniques.
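The simulation algorithm evaluated in the study is not spelled out in the abstract. A common simplified stand-in, shown here only as a toy sketch, exploits the fact that CT image noise scales roughly as 1/sqrt(mAs) and adds zero-mean Gaussian noise to reach the target noise level; all parameter values below are hypothetical:

```python
import math
import random

def simulate_low_dose(hu_values, sigma_orig, mas_orig, mas_target, seed=0):
    """Toy low-dose simulation: image noise in CT scales roughly as
    1/sqrt(mAs), so reaching the target noise level sigma_t requires
    adding zero-mean Gaussian noise of variance sigma_t^2 - sigma_o^2."""
    sigma_target = sigma_orig * math.sqrt(mas_orig / mas_target)
    sigma_add = math.sqrt(sigma_target**2 - sigma_orig**2)
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma_add) for v in hu_values]

# Simulate a 20 mAs scan from a 100 mAs acquisition; the input here is a
# noiseless constant 40 HU patch, so the output noise equals sigma_add
low = simulate_low_dose([40.0] * 10000, sigma_orig=10.0,
                        mas_orig=100, mas_target=20)
```

Real simulation tools work in projection (raw) data space and model the tube and detector physics; this image-space version only illustrates the dose-noise scaling that such tools must reproduce.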

  12. Non-equilibrium Economics

    Directory of Open Access Journals (Sweden)

    Katalin Martinás


    Full Text Available A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach. An axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as transformation and transport of commodities (materials) owned by the agents. The rate of transformation (production intensity) and the rate of transport (trade) are defined by the agents. Economic decision rules are derived from the observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition among selfish agents does not guarantee the stability of economic equilibrium; cooperativity is needed, too.

  13. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E


    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  14. Equilibrium Molecular Interactions in Pure Gases

    Directory of Open Access Journals (Sweden)

    Boris I. Sedunov


    Full Text Available The equilibrium molecular interactions in pure real gases are investigated based on the principles of chemical thermodynamics. The parallels between clusters in real gases and chemical compounds in equilibrium media are used to improve understanding of the real gas structure. A new approach to the equilibrium constants for the cluster fractions, and new methods to compute them and their significant parameters from experimental thermophysical data, are developed. These methods have been applied to several real gases, such as argon, water vapor and gaseous alkanes. It is shown that four-particle clusters make a noticeable contribution to the thermophysical properties of equilibrium water vapor. It is shown also that the effective bond energy for dimers in alkanes grows linearly with the number of carbon atoms in the molecule.

  15. Equilibrium Reconstruction on the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Samuel A. Lazerson, D. Gates, D. Monticello, H. Neilson, N. Pomphrey, A. Reiman, S. Sakakibara, and Y. Suzuki


    Equilibrium reconstruction is commonly applied to axisymmetric toroidal devices. Recent advances in computational power and equilibrium codes have allowed for reconstructions of three-dimensional fields in stellarators and heliotrons. We present the first reconstructions of finite-beta discharges in the Large Helical Device (LHD). The plasma boundary and magnetic axis are constrained by the pressure profile from Thomson scattering. This results in a calculation of plasma beta without a priori assumptions about the equipartition of energy between species. Saddle loop arrays place additional constraints on the equilibrium. These reconstructions utilize STELLOPT, which calls VMEC. The VMEC equilibrium code assumes good nested flux surfaces. Reconstructed magnetic fields are fed into the PIES code, which relaxes this constraint, allowing for the examination of the effect of islands and stochastic regions on the magnetic measurements.

  16. ISORROPIA II: a computationally efficient thermodynamic equilibrium model for K+–Ca2+–Mg2+–NH4+–Na+–SO42−–NO3−–Cl−–H2O aerosols

    Directory of Open Access Journals (Sweden)

    C. Fountoukis


    Full Text Available This study presents ISORROPIA II, a thermodynamic equilibrium model for the K+–Ca2+–Mg2+–NH4+–Na+–SO42−–NO3−–Cl−–H2O aerosol system. A comprehensive evaluation of its performance is conducted against water uptake measurements for laboratory aerosol and predictions of the SCAPE2 thermodynamic module over a wide range of atmospherically relevant conditions. The two models agree well, to within 13% for aerosol water content and total PM mass, 16% for aerosol nitrate and 6% for aerosol chloride and ammonium. Largest discrepancies were found under conditions of low RH, primarily from differences in the treatment of water uptake and solid state composition. In terms of computational speed, ISORROPIA II was more than an order of magnitude faster than SCAPE2, with robust and rapid convergence under all conditions. The addition of crustal species does not slow down the thermodynamic calculations (compared to the older ISORROPIA code because of optimizations in the activity coefficient calculation algorithm. Based on its computational rigor and performance, ISORROPIA II appears to be a highly attractive alternative for use in large scale air quality and atmospheric transport models.

  17. Exploiting stock data: a survey of state of the art computational techniques aimed at producing beliefs regarding investment portfolios

    Directory of Open Access Journals (Sweden)

    Mario Linares Vásquez


    Full Text Available Selecting an investment portfolio has inspired several models aimed at optimising the set of securities which an investor may select according to a number of specific decision criteria such as risk, expected return and planning horizon. The classical approach has been developed for supporting the two stages of portfolio selection and is supported by disciplines such as econometrics, technical analysis and corporate finance. However, with the emerging field of computational finance, new and interesting techniques have arisen in line with the need for the automatic processing of vast volumes of information. This paper surveys such new techniques, which belong to the body of knowledge concerning computing and systems engineering, focusing on techniques particularly aimed at producing beliefs regarding investment portfolios.

  18. Local equilibrium in bird flocks (United States)

    Mora, Thierry; Walczak, Aleksandra M.; Del Castello, Lorenzo; Ginelli, Francesco; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano; Cavagna, Andrea; Giardina, Irene


    The correlated motion of flocks is an example of global order emerging from local interactions. An essential difference with respect to analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. This non-equilibrium characteristic has been studied theoretically, but its impact on actual animal groups remains to be fully explored experimentally. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment occurs on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems.

  19. 18th International Workshop on Advanced Computing and Analysis Techniques in Physics Research

    CERN Document Server


    The 18th edition of ACAT will bring together experts to explore and confront the boundaries of computing, automated data analysis, and theoretical calculation technologies, in particle and nuclear physics, astronomy and astrophysics, cosmology, accelerator science and beyond. ACAT provides a unique forum where these disciplines overlap with computer science, allowing for the exchange of ideas and the discussion of cutting-edge computing, data analysis and theoretical calculation technologies in fundamental physics research.

  20. Development of technique for computing of activities’ effectiveness indicators of the university and its’ structural departments

    Directory of Open Access Journals (Sweden)

    Andrei V. Chernenkii


    Full Text Available The authors introduce a technique, developed on the basis of an analysis of well-known methods for evaluating the activities of higher-education institutions and compiling their ratings, which makes it possible to evaluate the effectiveness of a university's departments numerically against a set of indicators and to present the results of the evaluation in graphs. The technique has been piloted, and examples of the results obtained are shown. It can be adapted to the specifics of a particular university, as well as to changes in the composition and content of the initial data.

  1. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique. (United States)

    Higaki, Toru; Tatsugami, Fuminari; Fujioka, Chikako; Sakane, Hiroaki; Nakamura, Yuko; Baba, Yasutaka; Iida, Makoto; Awai, Kazuo


    This article describes a quantitative evaluation of the visualization of small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1-6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  2. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique


    Toru Higaki; Fuminari Tatsugami; Chikako Fujioka; Hiroaki Sakane; Yuko Nakamura; Yasutaka Baba; Makoto Iida; Kazuo Awai


    This article describes a quantitative evaluation of the visualization of small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  3. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique

    Directory of Open Access Journals (Sweden)

    Toru Higaki


    Full Text Available This article describes a quantitative evaluation of the visualization of small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm made with a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  4. Using Animation to Support the Teaching of Computer Game Development Techniques (United States)

    Taylor, Mark John; Pountney, David C.; Baskett, M.


    In this paper, we examine the potential use of animation for supporting the teaching of some of the mathematical concepts that underlie computer games development activities, such as vector and matrix algebra. An experiment was conducted with a group of UK undergraduate computing students to compare the perceived usefulness of animated and static…

  5. Computational methods for molecular structure determination: theory and technique. NRCC Proceedings No. 8

    Energy Technology Data Exchange (ETDEWEB)


    Goal of this workshop was to provide an introduction to the use of state-of-the-art computer codes for the semi-empirical and ab initio computation of the electronic structure and geometry of small and large molecules. The workshop consisted of 15 lectures on the theoretical foundations of the codes, followed by laboratory sessions which utilized these codes.

  6. The equilibrium theory of inhomogeneous polymers (international series of monographs on physics)

    CERN Document Server

    Fredrickson, Glenn


    The Equilibrium Theory of Inhomogeneous Polymers provides an introduction to the field-theoretic methods and computer simulation techniques that are used in the design of structured polymeric fluids. By such methods, the principles that dictate equilibrium self-assembly in systems ranging from block and graft copolymers, to polyelectrolytes, liquid crystalline polymers, and polymer nanocomposites can be established. Building on an introductory discussion of single-polymer statistical mechanics, the book provides a detailed treatment of analytical and numerical techniques for addressing the conformational properties of polymers subjected to spatially-varying potential fields. This problem is shown to be central to the field-theoretic description of interacting polymeric fluids, and models for a number of important polymer systems are elaborated. Chapter 5 serves to unify and expound the topic of self-consistent field theory, which is a collection of analytical and numerical techniques for obtaining solutions o...

  7. Predicting daily ragweed pollen concentrations using Computational Intelligence techniques over two heavily polluted areas in Europe. (United States)

    Csépe, Zoltán; Makra, László; Voukantsis, Dimitris; Matyasovszky, István; Tusnády, Gábor; Karatzas, Kostas; Thibaudon, Michel


    Forecasting ragweed pollen concentration is a useful tool for sensitive people in order to prepare in time for high-pollen episodes. The aim of the study is to use methods of Computational Intelligence (CI) (Multi-Layer Perceptron, M5P, REPTree, DecisionStump and MLPRegressor) for predicting daily values of Ambrosia pollen concentrations and alarm levels 1-7 days ahead for Szeged (Hungary) and Lyon (France), respectively. Ten years of daily mean ragweed pollen data (1997-2006) are considered for both cities. Ten input variables are used in the models, including the pollen level or alarm level on the given day, the serial number of the given day of the year within the pollen season, and eight meteorological variables. The study has several novelties: (1) daily alarm thresholds are predicted for the first time in the aerobiological literature; (2) data-driven modelling methods including neural networks have never before been used to forecast daily Ambrosia pollen concentration; (3) algorithm J48 has never been used in palynological forecasts; (4) a rarely used technique, namely factor analysis with special transformation, is applied to detect the importance of the influencing variables in defining the pollen levels 1-7 days ahead. When predicting pollen concentrations, for Szeged Multi-Layer Perceptron models deliver results similar to those of tree-based models 1 and 2 days ahead, while for Lyon only the Multi-Layer Perceptron provides acceptable results. When predicting alarm levels, the performance of the Multi-Layer Perceptron is the best for both cities. The selection of the optimal method is shown to depend on climate, as a function of geographical location and relief. The results show that the more complex CI methods perform well, and their performance is case-specific for forecasting horizons of ≥2 days. A determination coefficient of 0.98 (Ambrosia, Szeged, one day and two days ahead) using the Multi-Layer Perceptron ranks this model the best one in the literature
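Of the CI methods listed, DecisionStump is the simplest to make concrete: a one-level regression tree that picks the single threshold minimizing squared error, predicting the mean of the target on each side of the split. A self-contained sketch (the toy data below are illustrative, not the Szeged or Lyon datasets):

```python
def fit_stump(x, y):
    """Fit a one-level regression tree (decision stump): choose the
    threshold on x that minimizes total squared error, predicting the
    mean of y on each side of the split."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs, ys = [x[i] for i in order], [y[i] for i in order]
    best = None
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        ml, mr = sum(left) / k, sum(right) / (len(ys) - k)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (xs[k - 1] + xs[k]) / 2.0, ml, mr)
    _, thr, ml, mr = best
    return lambda v: ml if v <= thr else mr

# Toy data: pollen level jumps once a weather variable exceeds ~5
predict = fit_stump([1, 2, 3, 6, 7, 8], [10, 12, 11, 40, 42, 41])
```

Stumps are weak learners on their own; in practice they serve mostly as baselines or as components of ensembles, which is consistent with the study's finding that the more complex methods win at longer horizons.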

  8. Non-Monte Carlo formulations and computational techniques for the stochastic non-linear Schrödinger equation (United States)

    Demir, Alper


    Stochastic ordinary and partial differential equations (SOPDEs) in various forms arise and are successfully utilized in the modeling of a variety of physical and engineered systems such as telecommunication systems, electronic circuits, cosmological systems, financial systems, meteorological and climate systems. While the theory of stochastic partial and especially ordinary differential equations is more or less well understood, there has been much less work on practical formulations and computational approaches to solving these equations. In this paper, we concentrate on the stochastic non-linear Schrödinger equation (SNLSE) that arises in the analysis of wave propagation phenomena, mainly motivated by its predominant role as a modeling tool in the design of optically amplified long distance fiber telecommunication systems. We present novel formulations and computational methods for the stochastic characterization of the solution of the SNLSE. Our formulations and techniques are not aimed at computing individual realizations, i.e., sample paths, for the solution of the SNLSE à la Monte Carlo. Instead, starting with the SNLSE, we derive new systems of differential equations and develop associated computational techniques. The numerical solutions of these new equations directly produce the ensemble-averaged stochastic characterization desired for the solution of the SNLSE, in a non-Monte Carlo manner without having to compute many realizations needed for ensemble-averaging.
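For orientation, in fiber-optic applications the SNLSE is commonly written in a form such as the following (a standard textbook form given here as an assumption; the authors' exact normalization may differ):

```latex
i\,\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2}
  + \gamma\,\lvert A \rvert^2 A
  = i\,n(z,t)
```

where A(z,t) is the complex field envelope, \beta_2 the group-velocity dispersion, \gamma the Kerr nonlinearity coefficient, and n(z,t) a white Gaussian noise term modeling amplified spontaneous emission from the optical amplifiers.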

  9. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro


    Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during playing contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient’s movements making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices like usual webcams and Netbooks.

  10. Evolution and non-equilibrium physics

    DEFF Research Database (Denmark)

    Becker, Nikolaj; Sibani, Paolo


    We argue that the stochastic dynamics of interacting agents which replicate, mutate and die constitutes a non-equilibrium physical process akin to aging in complex materials. Specifically, our study uses extensive computer simulations of the Tangled Nature Model (TNM) of biological evolution...

  11. A 3D edge detection technique for surface extraction in computed tomography for dimensional metrology applications

    DEFF Research Database (Denmark)

    Yagüe-Fabra, J.A.; Ontiveros, S.; Jiménez, R.


    Many factors influence the measurement uncertainty when using computed tomography for dimensional metrology applications. One of the most critical steps is the surface extraction phase. An incorrect determination of the surface may significantly increase the measurement uncertainty. This paper...

  12. Unscented Sampling Techniques For Evolutionary Computation With Applications To Astrodynamic Optimization (United States)


    ...constrained optimization problems. The second goal is to improve computation times and efficiencies associated with evolutionary algorithms. The last goal ... both genetic algorithms and evolution strategies to achieve these goals. The results of this research offer a promising new set of modified... Subject terms: computation, parallel processing, unscented sampling.

  13. Real-Time Software Vulnerabilities in Cloud Computing : Challenges and Mitigation Techniques


    Okonoboh, Matthias Aifuobhokhan; Tekkali, Sudhakar


    Context: Cloud computing is rapidly emerging in the area of distributed computing. In the meantime, many organizations consider the technology to be associated with several business risks which are yet to be resolved. These challenges include lack of adequate security, privacy and legal issues, resource allocation, control over data, system integrity, risk assessment, software vulnerabilities and so on, all of which have a compromising effect in a cloud environment. Organizations based their w...

  14. Beyond Equilibrium Thermodynamics (United States)

    Öttinger, Hans Christian


    Beyond Equilibrium Thermodynamics fills a niche in the market by providing a comprehensive introduction to a new, emerging topic in the field. The importance of non-equilibrium thermodynamics is addressed in order to fully understand how a system works, whether it is in a biological system like the brain or a system that develops plastic. In order to fully grasp the subject, the book clearly explains the physical concepts and mathematics involved, as well as presenting problems and solutions; over 200 exercises and answers are included. Engineers, scientists, and applied mathematicians can all use the book to address their problems in modelling, calculating, and understanding dynamic responses of materials.

  15. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms


    Alsmadi, Othman M.K.; Zaer S. Abo-Hammour


    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maint...

  16. Adaptive, multi-domain techniques for two-phase flow computations (United States)

    Uzgoren, Eray

    Computations of immiscible two-phase flows deal with interfaces that may move and/or deform in response to the dynamics within the flow field. As interfaces move, one needs to compute the new shapes and the associated geometric information (such as curvatures, normals, and projected areas/volumes) as part of the solution. The present study employs the immersed boundary method (IBM), which uses marker points to track the interface location and continuous interface methods to model interfacial conditions. The large transport-property jumps across the interface, together with the mechanisms of convection, diffusion, pressure, body force and surface tension, create multiple time/length scales. The resulting computational stiffness and moving boundaries make numerical simulations computationally expensive in three dimensions, even when the computations are performed on adaptively refined 3D Cartesian grids that efficiently resolve the length scales. A domain decomposition method and a partitioning strategy for adaptively refined grids are developed to enable parallel computing capabilities. Specifically, the approach consists of a multilevel additive Schwarz method for domain decomposition, and Hilbert space-filling curve ordering for partitioning. Issues related to load balancing, communication and computation, and the convergence rate of the iterative solver with respect to grid size, the number of sub-domains, and interface shape deformation are studied. Moreover, the interfacial representation using marker points is extended to model complex solid geometries for single and two-phase flows. The developed model is validated using a benchmark test case, flow over a cylinder. Furthermore, the overall algorithm is employed to further investigate steady and unsteady behavior of the liquid plug problem.
Finally, the capability of handling two-phase flow simulations in complex solid geometries is demonstrated by studying the effect of a bifurcation point on the liquid plug, which
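
    The Hilbert space-filling-curve ordering used above for partitioning can be sketched compactly. The snippet below is an illustrative reconstruction, not the author's code: it maps 2D cell coordinates to an index along the Hilbert curve (the standard bit-manipulation algorithm) and cuts the sorted cell list into contiguous chunks, so that each sub-domain receives spatially clustered cells.

```python
def hilbert_index(order, x, y):
    """Map the cell (x, y) on a 2**order x 2**order grid to its index
    along the Hilbert curve (standard bit-manipulation algorithm)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                 # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

def partition_cells(cells, order, nparts):
    """Sort cells along the Hilbert curve and cut the list into nparts
    contiguous chunks, so each sub-domain gets spatially clustered cells."""
    ordered = sorted(cells, key=lambda c: hilbert_index(order, c[0], c[1]))
    size = -(-len(ordered) // nparts)   # ceiling division
    return [ordered[i * size:(i + 1) * size] for i in range(nparts)]

cells = [(x, y) for x in range(4) for y in range(4)]
print(partition_cells(cells, 2, 4)[0])  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

    Because the Hilbert curve visits neighboring cells consecutively, contiguous index ranges translate into compact sub-domains with short boundaries, which keeps inter-process communication low.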

  17. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and discusses in detail the gap analysis findings on the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models.
The research also includes a Monte Carlo simulation optimization

  18. Equilibrium shoreface profiles

    DEFF Research Database (Denmark)

    Aagaard, Troels; Hughes, Michael G


    Large-scale coastal behaviour models use the shoreface profile of equilibrium as a fundamental morphological unit that is translated in space to simulate coastal response to, for example, sea level oscillations and variability in sediment supply. Despite a longstanding focus on the shoreface prof...

  19. On Quantum Microcanonical Equilibrium


    Dorje C. Brody; Hook, Daniel W.; Hughston, Lane P.


    A quantum microcanonical postulate is proposed as a basis for the equilibrium properties of small quantum systems. Expressions for the corresponding density of states are derived, and are used to establish the existence of phase transitions for finite quantum systems. A grand microcanonical ensemble is introduced, which can be used to obtain new rigorous results in quantum statistical mechanics.

  20. On quantum microcanonical equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Brody, Dorje C [Department of Mathematics, Imperial College, London SW7 2BZ (United Kingdom); Hook, Daniel W [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Hughston, Lane P [Department of Mathematics, King's College London, The Strand, London WC2R 2LS (United Kingdom)


    A quantum microcanonical postulate is proposed as a basis for the equilibrium properties of small quantum systems. Expressions for the corresponding density of states are derived, and are used to establish the existence of phase transitions for finite quantum systems. A grand microcanonical ensemble is introduced, which can be used to obtain new rigorous results in quantum statistical mechanics.

  1. Differential Equation of Equilibrium

    African Journals Online (AJOL)


    ABSTRACT. Analysis of underground circular cylindrical shell is carried out in this work. The fourth-order differential equation of equilibrium, comparable to that of a beam on elastic foundation, was derived from static principles on the assumptions of P. L. Pasternak. Laplace transformation was used to solve the governing ...

  2. Volatility in Equilibrium

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Sizova, Natalia; Tauchen, George

    Stock market volatility clusters in time, carries a risk premium, is fractionally integrated, and exhibits asymmetric leverage effects relative to returns. This paper develops a first internally consistent equilibrium-based explanation for these longstanding empirical facts. The model is cast...

  3. Differential Equation of Equilibrium

    African Journals Online (AJOL)


    differential equation of equilibrium, comparable to that of beam on elastic foundation, was derived from static principles on the ... tedious and more time saving than the classical method in the solution of the aforementioned differential equation. ... silos, pipelines, bridge arches or wind turbine towers [3]. The objective of this ...

  4. Microeconomics : Equilibrium and Efficiency

    NARCIS (Netherlands)

    Ten Raa, T.


    Microeconomics: Equilibrium and Efficiency teaches how to apply microeconomic theory in an innovative, intuitive and concise way. Using real-world, empirical examples, this book not only covers the building blocks of the subject, but helps gain a broad understanding of microeconomic theory and

  5. Pinch technique prescription to compute the electroweak corrections to the muon anomalous magnetic moment


    Cabral-Rosetti, L.G.; Lopez Castro, G.; Pestieau, Jean


    We apply a simple prescription derived from the framework of the Pinch Technique formalism to check the calculation of the gauge-invariant one-loop bosonic electroweak corrections to the muon anomalous magnetic moment.

  6. Data mining techniques used to analyze students’ opinions about computerization in the educational system

    Directory of Open Access Journals (Sweden)

    Nicoleta PETCU


    Full Text Available Both the educational process and the research one, together with institutional management, are unthinkable without information technologies. Through them one can harness the work capacity and creativity of both students and professors. The aim of this paper is to present the results of a quantitative research regarding: scope of using computers, the importance of using them, faculty activities that involve computer usage, number of hours students work with them at university, Internet and website usage, e-learning platforms, investments in technology in the faculty and access to computers and other IT resources. The major conclusions of this research allow us to propose strategies for increasing the quality, efficiency and transparency of didactic, scientific, administrative and communication processes.

  7. Simulation and comparison of equilibrium and nonequilibrium stage ...

    African Journals Online (AJOL)

    In the present study, two distinctly different approaches are followed for modelling of reactive distillation column, the equilibrium stage model and the nonequilibrium stage model. These models are simulated with a computer code developed in the present study using MATLAB programming. In the equilibrium stage models, ...

  8. Description of the General Equilibrium Model of Ecosystem Services (GEMES) (United States)

    Travis Warziniack; David Finnoff; Jenny Apriesnig


    This paper serves as documentation for the General Equilibrium Model of Ecosystem Services (GEMES). GEMES is a regional computable general equilibrium model that is composed of values derived from natural capital and ecosystem services. It models households, producing sectors, and governments, linked to one another through commodity and factor markets. GEMES was...

  9. Plasma equilibrium calculation in J-TEXT tokamak (United States)

    Hailong, GAO; Tao, XU; Zhongyong, CHEN; Ge, ZHUANG


    Plasma equilibrium has been calculated using an analytical method. The plasma profiles of the current density, safety factor, pressure and magnetic surface function are obtained. The analytical solution of the Grad-Shafranov (GS) equation is obtained by the variable separation method and compared with the computed results of the equilibrium fitting code EFIT.

  10. Emission computer tomography on a Dodewaard mixed oxide fuel pin. Comparative PIE work with non-destructive and destructive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buurveld, H.A.; Dassel, G.


    A nondestructive technique as well as a destructive PIE technique have been used to verify the results obtained with a newly developed emission computer tomography (GECT) system. Multi Isotope Scanning (MIS), electron probe micro analysis (EPMA) and GECT were used on a mixed oxide (MOX) fuel rod from the Dodewaard reactor with an average burnup of 24 MWd/kg fuel. GECT shows migration of Cs to the periphery of fuel pellets and to radial cracks and pores in the fuel, whereas MIS shows Cs migration to pellet interfaces. EPMA clearly shows the distribution of fission products from Pu, but did not reveal the Cs migration. (orig./HP)

  11. The equilibrium of overpressurized polytropes (United States)

    Huré, J.-M.; Hersant, F.; Nasello, G.


    We investigate the impact of an external pressure on the structure of self-gravitating polytropes for axially symmetric ellipsoids and rings. The confinement of the fluid by photons is accounted for through a boundary condition on the enthalpy H. Equilibrium configurations are determined numerically from a generalized `self-consistent-field' method. The new algorithm incorporates an intraloop re-scaling operator R(H), which is essential for both convergence and getting self-normalized solutions. The main control parameter is the external-to-core enthalpy ratio. In the case of uniform rotation rate and uniform surrounding pressure, we compute the mass, the volume, the rotation rate and the maximum enthalpy. This is repeated for a few polytropic indices, n. For a given axial ratio, overpressurization globally increases all output quantities, and this is more pronounced for large n. Density profiles are flatter than in the absence of an external pressure. When the control parameter asymptotically tends to unity, the fluid converges towards the incompressible solution, whatever the index, but becomes geometrically singular. Equilibrium sequences, obtained by varying the axial ratio, are built. States of critical rotation are greatly exceeded or even disappear. The same trends are observed with differential rotation. Finally, the typical response to a photon point source is presented. Strong irradiation favours sharp edges. Applications concern star-forming regions and matter orbiting young stars and black holes.

  12. Regression Computer Programs for Setwise Regression and Three Related Analysis of Variance Techniques. (United States)

    Williams, John D.; Lindem, Alfred C.

    Four computer programs using the general purpose multiple linear regression program have been developed. Setwise regression analysis is a stepwise procedure for sets of variables; there will be as many steps as there are sets. Covarmlt allows a solution to the analysis of covariance design with multiple covariates. A third program has three…

  13. Investigating the effect of different terrain modeling techniques on the computation of local gravity anomalies (United States)

    Tsoulis, Dimitrios; Patlakis, Konstantinos


    Gravity reductions and gravity anomalies represent important tools for the analysis and interpretation of real gravity measurements at all spatial scales. Simple geometries of planar or spherical slabs for the topographic masses underlying the computation point down to a reference height surface produce the traditional definition of simple Bouguer anomalies. However, especially for gravity measurements obtained from local gravity surveys stretching up to only a few tens of kilometers, a detailed consideration of the deviations of the surface topographic relief from the ideal slab geometry is required in order to obtain the so-called refined Bouguer anomalies. The present contribution examines the further refinement of these computations depending on the exact geometric representation of the topographic surface and the corresponding masses defining the terrain correction quantity. Using as input data 328 surface gravity observations and a 20 km x 15 km Digital Terrain Model with a 50 m x 50 m spatial resolution of a steep terrain area in the Bavarian Alps, different sets of gravity anomalies were computed from different geometrical and mathematical approximations of the topographic masses and their corresponding gravitational effect. Right rectangular prisms, polyhedrons, bilinear surfaces, and mass-line and mass-prism FFT representations of the terrain effect have been implemented for the evaluation of refined Bouguer gravity anomalies over the 20 km x 15 km region, and the computed grids have been compared both against each other as well as with respect to the topographic height.
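
    The planar-slab geometry behind the simple Bouguer anomaly reduces to a one-line formula, g = 2πGρh. As a hedged illustration (the standard crustal density of 2670 kg/m³ is an assumption here, not a value taken from the study):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab(height_m, density=2670.0):
    """Attraction of an infinite flat slab of given thickness (m) and
    density (kg/m^3); returns the correction in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2.0 * math.pi * G * density * height_m / 1e-5

# ~0.1119 mGal per metre of topography at standard crustal density
print(round(bouguer_slab(100.0), 2))  # 11.2
```

    The terrain correction discussed in the record is precisely the deviation of the real relief from this idealized slab, evaluated with prisms, polyhedrons, or FFT representations.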

  14. Investigation and Study of Computational Techniques for Design and Fabrication of Integrated Electronic Circuits (United States)


    that this aspect of IC design can be accomplished on a computer prior to committing a new product to manufacturing. This procedure...amount of CPU time (he claimed one week). For this reason, Motorola is presently working on this problem, but with little success. It was stated that

  15. A review of Computational Intelligence techniques in coral reef-related applications

    NARCIS (Netherlands)

    Salcedo-Sanz, S.; Cuadra, L.; Vermeij, M.J.A.

    Studies on coral reefs increasingly combine aspects of science and technology to understand the complex dynamics and processes that shape these benthic ecosystems. Recently, the use of advanced computational algorithms has entered coral reef science as new powerful tools that help solve complex

  16. Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong


    Fluid-structure interaction simulations are routinely used in the wind energy industry to evaluate the aerodynamic and structural dynamic performance of wind turbines. Most aero-elastic codes in modern times implement a blade element momentum technique to model the rotor aerodynamics and a modal, multi-body, or finite-element approach to model the turbine structural dynamics. The present paper describes a novel fluid-structure coupling technique which combines a three-dimensional viscous-inviscid solver for horizontal-axis wind-turbine aerodynamics, called MIRAS, and the structural dynamics model...

  17. Entropy production in a fluid-solid system far from thermodynamic equilibrium. (United States)

    Chung, Bong Jae; Ortega, Blas; Vaidya, Ashwin


    The terminal orientation of a rigid body in a moving fluid is an example of a dissipative system, out of thermodynamic equilibrium and therefore a perfect testing ground for the validity of the maximum entropy production principle (MaxEP). Thus far, dynamical equations alone have been employed in studying the equilibrium states in fluid-solid interactions, but these are far too complex and become analytically intractable when inertial effects come into play. At that stage, our only recourse is to rely on numerical techniques which can be computationally expensive. In our past work, we have shown that the MaxEP is a reliable tool to help predict orientational equilibrium states of highly symmetric bodies such as cylinders, spheroids and toroidal bodies. The MaxEP correctly helps choose the stable equilibrium in these cases when the system is slightly out of thermodynamic equilibrium. In the current paper, we expand our analysis to examine i) bodies with fewer symmetries than previously reported, for instance, a half-ellipse and ii) when the system is far from thermodynamic equilibrium. Using two-dimensional numerical studies at Reynolds numbers ranging between 0 and 14, we examine the validity of the MaxEP. Our analysis of flow past a half-ellipse shows that overall the MaxEP is a good predictor of the equilibrium states but, in the special case of the half-ellipse with aspect ratio much greater than unity, the MaxEP is replaced by the Min-MaxEP, at higher Reynolds numbers when inertial effects come into play. Experiments in sedimentation tanks and with hinged bodies in a flow tank confirm these calculations.

  18. A new nonlocal thermodynamical equilibrium radiative transfer method for cool stars. Method and numerical implementation (United States)

    Lambert, J.; Josselin, E.; Ryde, N.; Faure, A.


    Context. The solution of the nonlocal thermodynamical equilibrium (non-LTE) radiative transfer equation usually relies on stationary iterative methods, which may falsely converge in some cases. Furthermore, these methods are often unable to handle large-scale systems, such as molecular spectra emerging from, for example, cool stellar atmospheres. Aims: Our objective is to develop a new method, which aims to circumvent these problems, using nonstationary numerical techniques and taking advantage of parallel computers. Methods: The technique we develop may be seen as a generalization of the coupled escape probability method. It solves the statistical equilibrium equations in all layers of a discretized model simultaneously. The numerical scheme adopted is based on the generalized minimum residual method. Results: The code has already been applied to the special case of the water spectrum in a red supergiant stellar atmosphere. This demonstrates the fast convergence of this method, and opens the way to a wide variety of astrophysical problems.

  19. Forensic examination of computer-manipulated documents using image processing techniques

    Directory of Open Access Journals (Sweden)

    Komal Saini


    Full Text Available The recent exponential growth in the use of image processing software applications has been accompanied by a parallel increase in their use in criminal activities. Image processing tools have been associated with a variety of crimes, including counterfeiting of currency notes, cheques, as well as manipulation of important government documents, wills, financial deeds or educational certificates. Thus, it is important for the Document Examiner to keep up to date with latest technological and scientific advances in the field. The present research focuses on the use of image processing tools for the examination of computer-manipulated documents. The altered documents were examined using a suite of currently available image processing tools. The results demonstrate that a number of tools are capable of detecting computer-based manipulations of written documents.

  20. A comparison of several computational auditory scene analysis (CASA) techniques for monaural speech segregation. (United States)

    Zeremdini, Jihen; Ben Messaoud, Mohamed Anouar; Bouzid, Aicha


    Humans have the ability to easily separate composed speech and to form perceptual representations of the constituent sources in an acoustic mixture thanks to their ears. Researchers have long attempted to build computer models of such high-level functions of the auditory system, yet the segregation of composed speech remains a very challenging problem. In our case, we are interested in approaches that address monaural speech segregation. For this purpose, we study in this paper computational auditory scene analysis (CASA) to segregate speech from monaural mixtures. CASA is the reproduction of the source organization achieved by listeners. It is based on two main stages: segmentation and grouping. In this work, we have presented and compared several studies that have used CASA for speech separation and recognition.

  1. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz


    One of the most challenging issues in today’s large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, their sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions and the aggregation and sharing of geographically-distributed resources in modern large-scale systems.   This book presents new ideas, theories, models...

  2. Determination of Radiative Heat Transfer Coefficient at High Temperatures Using a Combined Experimental-Computational Technique (United States)

    Kočí, Václav; Kočí, Jan; Korecký, Tomáš; Maděra, Jiří; Černý, Robert


    The radiative heat transfer coefficient at high temperatures is determined using a combination of experimental measurement and computational modeling. In the experimental part, a cement mortar specimen is heated in a laboratory furnace to 600°C and the temperature field inside is recorded using built-in K-type thermocouples connected to a data logger. The measured temperatures are then used as input parameters in the three-dimensional computational modeling, whose objective is to find the best correlation between the measured and calculated data via four free parameters, namely the thermal conductivity of the specimen, the effective thermal conductivity of the thermal insulation, and the heat transfer coefficients at normal and high temperatures. The optimization procedure, which is performed using genetic algorithms, provides the value of the high-temperature radiative heat transfer coefficient of 3.64 W/(m²K).
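
    The inverse-analysis loop of measuring temperatures and tuning free parameters until the model reproduces them can be sketched with a toy one-parameter version. The lumped-capacitance cooling model, the specimen properties and the grid search below are all illustrative assumptions; the study itself fits four parameters of a 3D model with genetic algorithms.

```python
import math

def model_temps(h, times, T0=600.0, T_env=20.0, A=0.06, m=2.0, c=900.0):
    """Lumped-capacitance cooling curve T(t) for heat transfer coefficient h
    (W/(m^2 K)); a deliberately simple stand-in for the authors' 3D model."""
    k = h * A / (m * c)
    return [T_env + (T0 - T_env) * math.exp(-k * t) for t in times]

def fit_h(times, measured, h_grid):
    """Pick the h on the grid minimizing the sum of squared residuals,
    mirroring the objective of the GA-based inverse analysis."""
    def sse(h):
        return sum((m - p) ** 2 for m, p in zip(measured, model_temps(h, times)))
    return min(h_grid, key=sse)

times = [60.0 * i for i in range(20)]
measured = model_temps(3.64, times)           # synthetic "measurement"
h_est = fit_h(times, measured, [x * 0.01 for x in range(100, 800)])
print(h_est)  # recovers h close to 3.64
```

    In the real study the forward model is a transient 3D heat-transfer simulation and a genetic algorithm explores the four-parameter space instead of a grid, but the residual-minimizing structure is the same.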

  3. In Vivo EPR Resolution Enhancement Using Techniques Known from Quantum Computing Spin Technology. (United States)

    Rahimi, Robabeh; Halpern, Howard J; Takui, Takeji


    A crucial issue with in vivo biological/medical EPR is its low signal-to-noise ratio, giving rise to the low spectroscopic resolution. We propose quantum hyperpolarization techniques based on 'Heat Bath Algorithmic Cooling', allowing possible approaches for improving the resolution in magnetic resonance spectroscopy and imaging.

  4. Optimization of the cumulative risk assessment of pesticides and biocides using computational techniques: Pilot project

    DEFF Research Database (Denmark)

    Jonsdottir, Svava Osk; Reffstrup, Trine Klein; Petersen, Annette

    This pilot project is intended as the first step in developing a computational strategy to assist in refining methods for higher tier cumulative and aggregate risk assessment of exposure to mixtures of pesticides and biocides. For this purpose, physiologically based toxicokinetic (PBTK) models were...... the models. Exposure scenarios were constructed based on findings of pesticide residues in the food of ordinary consumers, and an assessment of the dermal exposure of professional workers. PBTK simulations were carried out using these scenarios.

  5. A micro computer based procurement system: an application of reverse engineering techniques


    Skrtich, George T.; Delaney, Daniel E.


    Approved for public release; distribution is unlimited. The Department of the Navy has developed a system called the Automation of Procurement and Accounting Data Entry (APADE), which automates the procurement of nonstandard materials. Small Navy Field contracting locations, however, cannot afford to utilize this service, and the Navy currently has no standard micro computer software for such procurement. This thesis analyzes and reviews the Navy's APADE procurement system using a reverse ...

  6. Communication: Microphase equilibrium and assembly dynamics (United States)

    Zhuang, Yuan; Charbonneau, Patrick


    Despite many attempts, ordered equilibrium microphases have yet to be obtained in experimental colloidal suspensions. The recent computation of the equilibrium phase diagram of a microscopic, particle-based microphase former [Zhuang et al., Phys. Rev. Lett. 116, 098301 (2016)] has nonetheless found such mesoscale assemblies to be thermodynamically stable. Here, we consider their equilibrium and assembly dynamics. At intermediate densities above the order-disorder transition, we identify four different dynamical regimes and the structural changes that underlie the dynamical crossovers from one disordered regime to the next. Below the order-disorder transition, we also find that periodic lamellae are the most dynamically accessible of the periodic microphases. Our analysis thus offers a comprehensive view of the dynamics of disordered microphases and a route to the assembly of periodic microphases in a putative well-controlled, experimental system.

  7. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms. (United States)

    Alsmadi, Othman M K; Abo-Hammour, Zaer S


    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multioutput (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.
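
    The fitness-driven reduction described above can be illustrated on a toy problem: a second-order discrete system (poles 0.8 and 0.3, unit steady-state gain) reduced to first order by a bare-bones GA that penalizes step-response deviation. This is a sketch of the general idea only, not the paper's state-matrix transformation; the system coefficients and GA settings are invented for the example.

```python
import random

def step_response(coeffs, n=30):
    """Step response of y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k]; setting
    a2 = 0 gives the reduced first-order model."""
    a1, a2, b = coeffs
    y = [0.0, 0.0]
    for _ in range(n):
        y.append(a1 * y[-1] + a2 * y[-2] + b * 1.0)
    return y[2:]

# Full model: poles 0.8 (dominant) and 0.3, unit steady-state gain (toy example)
full = step_response((1.1, -0.24, 0.14))

def fitness(ind):
    """Negative squared deviation between full and reduced step responses;
    the GA maximizes this, mirroring the paper's fitness formulation."""
    a, b = ind
    return -sum((f - r) ** 2 for f, r in zip(full, step_response((a, 0.0, b))))

random.seed(1)
pop = [(random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)) for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                       # keep the fittest individuals
    pop = elite + [(a + random.gauss(0.0, 0.05), b + random.gauss(0.0, 0.05))
                   for a, b in random.choices(elite, k=40)]
best = max(pop, key=fitness)
print(best)  # dominant pole near 0.8, gain b/(1-a) near 1
```

    A reduced model that keeps the dominant pole and matches the steady-state gain scores highest, which is exactly the behavior the paper's method is designed to preserve.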

  8. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi


    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multioutput (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.

  9. A Novel Technique to Compute the Revisit Time of Satellites and Its Application in Remote Sensing Satellite Optimization Design

    Directory of Open Access Journals (Sweden)

    Xin Luo


    Full Text Available This paper proposes a novel technique to compute the revisit time of satellites within repeat ground tracks. Different from the repeat cycle, which depends only on the orbit, the revisit time is relevant to the payload of the satellite as well, such as the tilt angle and swath width. The technique is discussed using the Bezout equation and takes the gravitational second zonal harmonic into consideration. The concept of subcycles is defined in a general way, and the general concept of a “small” offset is replaced by a multiple of the minimum interval on the equator when analyzing the revisit time of remote sensing satellites. This technique requires simple calculations with high efficiency. Finally, this technique is used to design remote sensing satellites with a desired revisit time and minimum tilt angle. When the side-lap, the range of altitude, and the desired revisit time are determined, many orbit solutions that meet the mission requirements are obtained quickly. Among all solutions, designers can quickly find the optimal orbits. Through various case studies, the calculation technique is successfully demonstrated.
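
    A much-simplified version of the repeat-ground-track search can convey the idea. The sketch below neglects J2 and nodal regression (which the paper includes) and simply scans an altitude band for orbits whose revolutions per sidereal day form a rational number R/D with a small denominator D; the constants and the altitude band are illustrative assumptions.

```python
import math
from fractions import Fraction

MU = 3.986004418e14      # Earth's GM, m^3/s^2
R_E = 6378137.0          # equatorial radius, m
T_SID = 86164.0905       # sidereal day, s

def revs_per_day(alt_km):
    """Revolutions per sidereal day for a circular orbit (J2 and nodal
    regression neglected in this sketch)."""
    a = R_E + alt_km * 1e3
    return T_SID / (2.0 * math.pi * math.sqrt(a ** 3 / MU))

def repeat_orbits(alt_lo, alt_hi, max_days, step_km=0.5):
    """Scan an altitude band for repeat ground tracks (R revs in D days);
    the repeat cycle D bounds the revisit time for a nadir instrument."""
    found = {}
    alt = alt_lo
    while alt <= alt_hi:
        q = Fraction(revs_per_day(alt)).limit_denominator(max_days)
        found.setdefault((q.numerator, q.denominator), alt)
        alt += step_km
    return found

for (R, D), alt in sorted(repeat_orbits(550.0, 600.0, 16).items()):
    print(f"{R} revs / {D} days at ~{alt:.0f} km, track spacing {40075 / R:.0f} km")
```

    The paper's contribution goes beyond the repeat cycle: with the Bezout equation it resolves in which subcycle a given equatorial interval is first covered, so that tilt angle and swath width shorten the effective revisit time below D days.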

  10. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee


    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t

  11. Fast Discrete Fourier Transform Computations Using the Reduced Adder Graph Technique

    Directory of Open Access Journals (Sweden)

    Dempster Andrew G


    Full Text Available It has recently been shown that the n-dimensional reduced adder graph (RAG-n) technique is beneficial for many DSP applications such as for FIR and IIR filters, where multipliers can be grouped in multiplier blocks. This paper highlights the importance of the DFT and FFT as DSP objects and also explores how the RAG-n technique can be applied to these algorithms. This RAG-n DFT will be shown to be of low complexity and possess an attractively regular VLSI data flow when implemented with the Rader DFT algorithm or the Bluestein chirp-z algorithm. ASIC synthesis data are provided and demonstrate the low complexity and high speed of the design when compared to other alternatives.
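
    The core idea behind RAG-style multiplier blocks, replacing general multipliers with a few shared shifts and additions, can be shown in miniature with canonical signed-digit (CSD) recoding of a constant. This is an illustrative fragment of the idea only, not the RAG-n algorithm itself, which additionally shares partial results across all the constants of a block.

```python
def csd(k):
    """Canonical signed-digit recoding of a positive integer: digits in
    {-1, 0, +1}, least-significant first, with no two adjacent nonzeros."""
    digits = []
    while k:
        if k & 1:
            d = 2 - (k & 3)      # +1 if k % 4 == 1, -1 if k % 4 == 3
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

def mult_by_const(x, k):
    """Multiply x by the constant k using only shifts and adds/subtracts,
    the primitive that RAG-based multiplier blocks share across outputs."""
    return sum(d * (x << i) for i, d in enumerate(csd(k)) if d)

# 255*x = (x << 8) - x : one subtraction instead of seven additions
print(mult_by_const(3, 255))  # 765
```

    CSD minimizes the nonzero digits of a single constant; RAG-n goes further by building one adder graph for a whole set of constants (such as DFT twiddle factors), so common subexpressions are computed once.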

  12. Application of AI techniques to a voice-actuated computer system for reconstructing and displaying magnetic resonance imaging data (United States)

    Sherley, Patrick L.; Pujol, Alfonso, Jr.; Meadow, John S.


    To provide a means of rendering complex computer architectures, languages, and input/output modalities transparent to experienced and inexperienced users, research is being conducted to develop a voice driven/voice response computer graphics imaging system. The system will be used for reconstructing and displaying computed tomography and magnetic resonance imaging scan data. In conjunction with this study, an artificial intelligence (AI) control strategy was developed to interface the voice components and support software to the computer graphics functions implemented on the Sun Microsystems 4/280 color graphics workstation. Based on generated text and converted renditions of verbal utterances by the user, the AI control strategy determines the user's intent and develops and validates a plan. The program type and parameters within the plan are used as input to the graphics system for reconstructing and displaying medical image data corresponding to that perceived intent. If the plan is not valid, the control strategy queries the user for additional information. The control strategy operates in a conversation mode and vocally provides system status reports. A detailed examination of the various AI techniques is presented, with major emphasis being placed on their specific roles within the total control strategy structure.

  13. Stimulus and correlation matching measurement technique in computer based characterization testing


    Dorman, A M


    A constructive theory of characterization testing is considered. The theory is applicable to nano-device characterization: current-voltage and Auger current dependence. Generally, the small response of the device under test to an applied stimulus is masked by an unknown deterministic background and random noise. A characterization test in this signal-corruption scenario should be based on a correlation measurement technique, correlating the device response to an applied optimal stimulus with an optimal reference signal. Co-synt...

  14. A robust computational technique for a system of singularly perturbed reaction–diffusion equations

    Directory of Open Access Journals (Sweden)

    Kumar Vinod


    In this paper, a singularly perturbed system of reaction–diffusion boundary value problems (BVPs) is examined. To solve this type of problem, a Modified Initial Value Technique (MIVT) is proposed on an appropriate piecewise-uniform Shishkin mesh. The MIVT is shown to be second-order convergent (up to a logarithmic factor). Numerical results are presented which are in agreement with the theoretical results.
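A piecewise-uniform Shishkin mesh of the kind used here concentrates half of the points in thin layers near the boundaries. A minimal sketch (the transition-point constant `sigma` depends on the scheme's order and is an assumption here, not the paper's exact choice):

```python
import math

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a reaction-diffusion
    problem with boundary layers at both ends. N must be divisible by 4.
    The constant sigma is scheme-dependent (assumed 2 here)."""
    tau = min(0.25, sigma * math.sqrt(eps) * math.log(N))  # transition point
    quarter, half = N // 4, N // 2
    left = [i * tau / quarter for i in range(quarter)]                 # fine
    middle = [tau + i * (1 - 2 * tau) / half for i in range(half)]     # coarse
    right = [1 - tau + i * tau / quarter for i in range(quarter + 1)]  # fine
    return left + middle + right
```

For small eps the fine spacing near the endpoints is orders of magnitude smaller than the interior spacing, which is what resolves the layers without refining everywhere.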

  15. Detection of plant leaf diseases using image segmentation and soft computing techniques

    Directory of Open Access Journals (Sweden)

    Vijai Singh


    Agricultural productivity is something on which the economy highly depends. This is one of the reasons that disease detection in plants plays an important role in agriculture, as diseases in plants are quite natural. If proper care is not taken in this area, serious effects on plants follow, and the quality, quantity, or productivity of the respective products is affected. For instance, little leaf disease is a hazardous disease found in pine trees in the United States. Detecting plant disease with an automatic technique is beneficial, as it reduces the large amount of monitoring work in big crop farms and detects the symptoms of diseases at a very early stage, i.e., when they appear on plant leaves. This paper presents an algorithm for an image segmentation technique used for automatic detection and classification of plant leaf diseases. It also covers a survey of different disease classification techniques that can be used for plant leaf disease detection. Image segmentation, which is an important aspect of disease detection in plant leaves, is done by using a genetic algorithm.
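The genetic-algorithm idea can be illustrated on the simplest segmentation step: evolving a global gray-level threshold that maximizes Otsu-style between-class variance. This is a hedged sketch of the approach, not the paper's algorithm; population size, mutation range, and fitness are illustrative assumptions:

```python
import random

def between_class_variance(pixels, t):
    # Otsu-style fitness: split pixels into background (< t) and
    # foreground (>= t) and reward well-separated class means
    fg = [p for p in pixels if p >= t]
    bg = [p for p in pixels if p < t]
    if not fg or not bg:
        return 0.0
    w_f, w_b = len(fg) / len(pixels), len(bg) / len(pixels)
    mu_f, mu_b = sum(fg) / len(fg), sum(bg) / len(bg)
    return w_f * w_b * (mu_f - mu_b) ** 2

def ga_threshold(pixels, pop_size=20, generations=30, seed=0):
    # Tiny genetic algorithm: truncation selection plus mutation
    rng = random.Random(seed)
    pop = [rng.randint(1, 254) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: between_class_variance(pixels, t), reverse=True)
        survivors = pop[: pop_size // 2]
        children = [min(254, max(1, t + rng.randint(-10, 10))) for t in survivors]
        pop = survivors + children
    return pop[0]
```

On a real leaf image the same loop would run over pixel intensities of the (diseased) color channels; full chromosome encodings for multi-region segmentation follow the same select-mutate pattern.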

  16. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography (United States)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong


    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  17. An Encryption Technique for Provably Secure Transmission from a High Performance Computing Entity to a Tiny One

    Directory of Open Access Journals (Sweden)

    Miodrag J. Mihaljević


    An encryption/decryption approach is proposed dedicated to one-way communication between a transmitter, which is a computationally powerful party, and a receiver with limited computational capabilities. The proposed encryption technique combines traditional stream ciphering and simulation of a binary channel which degrades the channel input by inserting random bits. A statistical model of the proposed encryption is analyzed from the information-theoretic point of view. In the addressed model, an attacker faces the problem implied by observing the messages through a channel with random bit insertion. The paper points out a number of security-related implications of the considered channel. These implications have been addressed by estimation of the mutual information between the channel input and output and estimation of the number of candidate channel inputs for a given channel output. It is shown that deliberate, secret-key-controlled insertion of random bits into the basic ciphertext provides security enhancement of the resulting encryption scheme.
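The two ingredients, conventional stream ciphering followed by key-controlled insertion of random bits, can be modeled with a toy implementation (the hash-based keystream and explicit insertion-position list are illustrative assumptions, not the cipher analyzed in the paper):

```python
import hashlib
import random

def keystream(key, n):
    # Toy keystream from iterated hashing; a real design would use a
    # dedicated stream cipher
    out, state = [], key
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out.extend((b >> i) & 1 for b in state for i in range(8))
    return out[:n]

def encrypt(bits, key, insert_positions):
    # 1) classic stream ciphering: XOR message bits with the keystream
    ct = [b ^ k for b, k in zip(bits, keystream(key, len(bits)))]
    # 2) keyed insertion of random bits, simulating a channel that
    #    degrades its input; positions are secret, shared via the key
    rng = random.Random(key)
    for pos in sorted(insert_positions):
        ct.insert(pos, rng.randint(0, 1))
    return ct

def decrypt(ct, key, insert_positions):
    ct = list(ct)
    for pos in sorted(insert_positions, reverse=True):
        del ct[pos]  # the legitimate receiver knows where the noise bits sit
    return [b ^ k for b, k in zip(ct, keystream(key, len(ct)))]
```

An eavesdropper who does not know the insertion positions must consider every consistent deletion pattern, which is the source of the security enhancement the paper quantifies via mutual information.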

  18. Equilibrium thermodynamics - Callen's postulational approach

    NARCIS (Netherlands)

    Jongschaap, R.J.J.; Öttinger, Hans Christian


    In order to provide the background for nonequilibrium thermodynamics, we outline the fundamentals of equilibrium thermodynamics. Equilibrium thermodynamics must not only be obtained as a special case of any acceptable nonequilibrium generalization but, through its shining example, it also elucidates

  19. Land use classification utilizing remote multispectral scanner data and computer analysis techniques (United States)

    Leblanc, P. N.; Johannsen, C. J.; Yanner, J. E.


    An airborne multispectral scanner was used to collect the visible and reflective infrared data. A small subdivision near Lafayette, Indiana was selected as the test site for the urban land use study. Multispectral scanner data were collected over the subdivision on May 1, 1970 from an altitude of 915 meters. The data were collected in twelve wavelength bands from 0.40 to 1.00 micrometers by the scanner. The results indicated that computer analysis of multispectral data can be very accurate in classifying and estimating the natural and man-made materials that characterize land uses in an urban scene.

  20. Computer simulation techniques for acoustical design of rooms - How to treat reflections in sound field simulation

    DEFF Research Database (Denmark)

    Rindel, Jens Holger


    The paper presents a number of problems related to sound reflections, and possible solutions or approximations that can be used in computer models. The problems include: specular and diffuse reflections, early and late reflections, curved surfaces, convex and concave surfaces, diffraction due to finite size of surfaces, scattering due to rough structure of surfaces, angle-dependent absorption, phase shift at reflection, surfaces divided into reflecting and absorbing parts, partly transparent surfaces, and reflector arrays. Due to the wave nature of sound it is very important for the quality...

  1. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.


    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  2. The production of the AGARD multilingual aeronautical dictionary using computer techniques (United States)

    Wente, V. A.; Kirschbaum, J. C.; Kuney, J. H.


    The AGARD Multilingual Aeronautical Dictionary (MAD) contained 7,300 technical terms defined in English but also translated into nine other languages. The preparation work was performed by some 250 scientists and engineers who were members of AGARD and involved the translation skills of staff in many of the NATO nations. Nearly all the compilation and setting work for the book was done by computer and automatic photo-composition. The purpose of this publication is to record how the task was approached in terms of management planning.

  3. Optimal reliability allocation for large software projects through soft computing techniques

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin


    Software architecture is considered a critical design methodology for the development of complex software. As an important step in software quality assurance, the optimal reliability allocation for software projects can be obtained by minimizing the total cost of achieving the target reliability.... Firstly, a review of existing soft computing approaches to optimization is given. The main section extends the results by considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved...

  4. Comparative Evaluation of Computational Techniques For Estimating UAV Aerodynamic, Stability and Control Derivatives (United States)

    Ritchie, Robert W.

    The small UAV presents many advantages in size, cost, and portability. Because of this, it is often desirable to design many UAVs, one for each task, and often in a competitive environment. The designer needs a set of validated conceptual design tools that allow for rapid prediction of flight characteristics. The validation study will be conducted on the Ultra Stick 120 using a comparison of a vortex lattice method and Digital DATCOM against wind tunnel data. These computational tools are provided as a part of the CEASIOM software suite.

  5. Mesh generation and computational modeling techniques for bioimpedance measurements: an example using the VHP data (United States)

    Danilov, A. A.; Salamatova, V. Yu; Vassilevski, Yu V.


    Here, a workflow for high-resolution, efficient numerical modeling of bioimpedance measurements is suggested that includes 3D image segmentation, adaptive mesh generation, finite-element discretization, and the analysis of simulation results. Using adaptive unstructured tetrahedral meshes makes it possible to significantly decrease the number of mesh elements while keeping model accuracy. The numerical results illustrate current, potential, and sensitivity field distributions for a conventional Kubicek-like scheme of bioimpedance measurements using a segmented geometric model of a human torso based on Visible Human Project data. The whole-body VHP man computational mesh that was constructed contains 574 thousand vertices and 3.3 million tetrahedra.

  6. Computation of corona effects in transmission lines using state-space techniques

    Energy Technology Data Exchange (ETDEWEB)

    Herdem, Saadetdin [Nigde Univ., Dept. of Electrical and Electronics Engineering, Malatya (Turkey); Mamis, M. Salih [Inonu Univ., Dept. of Electrical and Electronics Engineering, Malatya (Turkey)


    In this paper, the state-space method is applied to compute transients in power transmission lines, taking corona effects into account. The transmission line is modeled by identical lumped-parameter sections to simulate the distributed nature of the line, and nonlinear corona branches are combined with these sections. The whole system is composed of RLC elements, sources, and switches. The response of the system is calculated using a state-space method that had been developed for the analysis of nonlinear power electronic circuits with periodically operated switches. (Author)
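A single lumped RLC section of such a line model can be stepped in state-space form, with the inductor current and capacitor voltage as the state variables. The component values below are arbitrary placeholders and forward Euler stands in for the paper's integration scheme (a sketch of the method, not the paper's code):

```python
def simulate_section(Vs, R, L, C, dt, steps):
    """Integrate one lumped section: a series R-L branch fed by a step
    source Vs, charging a shunt capacitor C.  State x = [i, v] obeys
    di/dt = (Vs - R*i - v)/L,  dv/dt = i/C (forward-Euler update)."""
    i, v = 0.0, 0.0
    history = []
    for _ in range(steps):
        di = (Vs - R * i - v) / L
        dv = i / C
        i += dt * di
        v += dt * dv
        history.append(v)
    return history
```

Chaining many such sections, and replacing the linear shunt branch with a nonlinear corona branch above the corona onset voltage, gives the structure the paper analyzes.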

  7. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration. (United States)

    Lee, M-Y; Chang, C-C; Ku, Y C


    Fixed dental restoration by conventional methods relies greatly on the skill and experience of the dental technician. The quality and accuracy of the final product depend mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT), was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT-scanned digital information. A force-feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating the proposed layer-based dental profile scanning, computer-aided design, 3D force-feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  8. Average inbreeding or equilibrium inbreeding?


    Hedrick, P. W.


    The equilibrium inbreeding is always higher than the average inbreeding. For human populations with high inbreeding levels, the equilibrium inbreeding is more than 25% higher than the average inbreeding. Assuming no initial inbreeding in the population, the equilibrium inbreeding value is closely approached in 10 generations or less. A secondary effect of this higher inbreeding level is that the equilibrium frequency of recessive detrimental alleles is somewhat lower than expected using avera...

  9. Module description of TOKAMAK equilibrium code MEUDAS

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment


    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as device design, theoretical research, and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI since the start of TOKAMAK research. The free-boundary equilibrium code "MEUDAS", which uses both the Double-Cyclic-Reduction (DCR) method and a Green's function, can specify the pressure and current distributions arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a code combining rapidity and high precision. The MHD convergence calculation technique in "MEUDAS" has also been built into various newly developed codes. This report explains in detail each module in "MEUDAS" used for performing the convergence calculation in solving the MHD equilibrium. (author)

  10. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system (United States)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff


    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  11. A rational approach towards development of amorphous solid dispersions: Experimental and computational techniques. (United States)

    Chakravarty, Paroma; Lubach, Joseph W; Hau, Jonathan; Nagapudi, Karthik


    The purpose of this study was to determine the drug-polymer miscibility of GENE-A, a Genentech molecule, and hydroxypropyl methylcellulose-acetate succinate (HPMC-AS), a polymer, using computational and experimental approaches. The Flory-Huggins interaction parameter, χ, was obtained by calculating the solubility parameters for GENE-A and HPMC-AS over the temperature range of 25-100°C to obtain the free energy of mixing at different drug loadings (0-100%) using the Materials Studio modeling and simulation platform (thermodynamic approach). Solid-state nuclear magnetic resonance (ssNMR) spectroscopy was used to measure the proton relaxation times for both drug and polymer at different drug loadings (up to 60%) at RT (kinetic approach). Thermodynamically, the drug and polymer were predicted to show favorable mixing, as indicated by a negative Gibbs free energy of mixing from 25 to 100°C. ssNMR showed near-identical relaxation times for both drug and polymer in the solid dispersion at RT and 40°C for a period of up to 6 months, showing phase mixing between the API and polymer on a <10 nm scale. Orthogonal computational and experimental approaches indicate phase mixing of the system components. Copyright © 2017 Elsevier B.V. All rights reserved.
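The thermodynamic screening step rests on the Flory-Huggins expression for the free energy of mixing. A minimal sketch per lattice site, in units of RT (the degrees of polymerization and χ values below are illustrative placeholders, not the GENE-A/HPMC-AS parameters):

```python
import math

def gibbs_mixing_per_site(phi_drug, chi, m_drug=1.0, m_poly=100.0):
    """Flory-Huggins free energy of mixing per lattice site, in RT units:
    dG/RT = (phi_d/m_d) ln phi_d + (phi_p/m_p) ln phi_p + chi*phi_d*phi_p.
    m_drug and m_poly are the relative chain lengths (assumed values)."""
    phi_poly = 1.0 - phi_drug
    return (phi_drug / m_drug * math.log(phi_drug)
            + phi_poly / m_poly * math.log(phi_poly)
            + chi * phi_drug * phi_poly)
```

Scanning drug loading (phi_drug from 0 to 1) at each temperature's χ reproduces the kind of free-energy-of-mixing curves the study used to predict miscibility: negative values indicate favorable mixing, positive values predict demixing.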

  12. Computer Animation Complete All-in-One; Learn Motion Capture, Characteristic, Point-Based, and Maya Winning Techniques

    CERN Document Server

    Parent, Rick


    A compilation of key chapters from the top MK computer animation books available today - in the areas of motion capture, facial features, solid spaces, fluids, gases, biology, point-based graphics, and Maya. The chapters provide CG Animators with an excellent sampling of essential techniques that every 3D artist needs to create stunning and versatile images. Animators will be able to master myriad modeling, rendering, and texturing procedures with advice from MK's best and brightest authors. Learn hundreds of tips, tricks, shortcuts and more - all within the covers of one complete, inspiring r

  13. Assessing Reception and Individual Responses to Media Messages: Relevance, Computer Techniques, Quality and Analysis of Real-Time Response Measurements

    Directory of Open Access Journals (Sweden)

    Jürgen MAIER


    Media messages have an important impact on political attitudes and behavior. Hence, the question arises: how is media content received at an individual level? This question is difficult to investigate with the social science research methods currently available. Measuring an individual's real-time reaction to auditory or visual media content using computers, i.e. real-time response (RTR) measurement, can be a fruitful approach to address the question. This article describes various RTR techniques, discusses the reliability and validity of RTR measurements and outlines strategies on how to analyze RTR data.

  14. Percutaneous Direct Repair of a Pars Defect Using Intraoperative Computed Tomography Scan: A Modification of the Buck Technique. (United States)

    Nourbakhsh, Ali; Preuss, Fletcher; Hadeed, Michael; Shimer, Adam


    Case report. To describe a young adult with a pars defect undergoing percutaneous direct fixation using intraoperative computed tomography (CT) scan. Direct pars repair has been utilized since the 1960s, but there are no reports in the literature describing a percutaneous technique. Using a percutaneous technique under the guidance of intraoperative CT scan, a cannulated partially threaded screw was inserted across the pars defect. Surgery was completed without complication and the patient returned to the preoperative activity level 3 months post-op. Postoperative CT scan showed a well-healed L4 pars defect. Percutaneous direct pars repair using intraoperative CT scan offers the advantage of minimal soft tissue dissection, thereby reducing blood loss, infection risk, and recovery time.


    Directory of Open Access Journals (Sweden)

    Ciprian-Costel, MUNTEANU


    In the 21st century, one of the most efficient ways to achieve an independent audit and a quality opinion is by using information from the organization's database, mainly documents in electronic format. With the help of Computer-Assisted Audit Techniques (CAAT), the financial auditor analyzes part or even all of the data about a company in reference to other information within or outside the entity. The main purpose of this paper is to show the benefits of evolving from traditional audit techniques and tools to modern and, why not, visionary CAAT, which are supported by business intelligence systems. Given the opportunity to perform their work in IT environments, auditors would start using the tools of business intelligence, a key factor contributing to successful business decisions. CAAT enable auditors to test large amounts of data quickly and accurately and therefore increase the confidence they have in their opinion.

  16. Extended Mixed Vector Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Mijanur Rahaman


    We study extended mixed vector equilibrium problems, namely, the extended weak mixed vector equilibrium problem and the extended strong mixed vector equilibrium problem, in Hausdorff topological vector spaces. Using the generalized KKM-Fan theorem (Ben-El-Mechaiekh et al., 2005), some existence results for both problems are proved in noncompact domains.

  17. Dosimetry in translation total body irradiation technique: a computer treatment planning approach and an experimental study concerning lung sparing. (United States)

    Zabatis, Ch; Koligliatis, Th; Xenofos, St; Pistevou, K; Psarakos, K; Haritanti, A; Beroukas, K


    To describe and evaluate a method that uses a 3-dimensional (3D) treatment planning system (TPS) to determine the relative dose to the lung, and to study the beam filtration required for lung sparing in translation total body irradiation (TBI). Special dosimetric problems related to the moving couch were also considered. The irradiation technique employed in our hospital is that of patient translation. The patient is positioned on a moving couch passing under a stationary Co-60 beam so that his/her entire body is irradiated. Measurements of basic data at source-skin distance (SSD) = 150 cm were used to implement the Co-60 TBI unit in the TPS (THERAPLAN plus), which was then used in dose computations. Two stationary, opposed anterior-posterior (40 x 40 cm) fields were employed to irradiate the Alderson phantom. The midline dose to either lung was computed and correction factors (CFs) were obtained that depend on the anatomy and densities of the tissues involved. These factors give the midline lung dose increase relative to the midline dose at the level of the mediastinum. Once the required lung dose was decided, the computed CF was used to estimate the filtration required from the measured broad-beam attenuation data. The shielded lung dose distribution could be obtained from the TPS using a transmission corresponding to narrow-beam geometry. To verify the TPS computations, measurements using a dosimeter and a diode system were carried out, employing solid water phantoms and the Alderson phantom. For the TPS employed, the computed midline CFs were lower than those measured in simple geometry phantoms for lung densities of 0.2-0.35 g/cm^3, by no more than 2%. For the Alderson phantom studied (lung density of 0.32 g/cm^3), the computed CF was 1.11, which was 2% higher than the measured value. The advantages of a 3D TPS (dose distribution inside the lung, lung dose volume histograms [DVH], accurate attenuator shape from the patient's anatomy, etc.) allowed the study of the lung dose in...

  18. Assessment of three root canal preparation techniques on root canal geometry using micro-computed tomography: In vitro study

    Directory of Open Access Journals (Sweden)

    Shaikha M Al-Ali


    Aim: To assess the effects of three root canal preparation techniques on canal volume and surface area using three-dimensionally reconstructed root canals in extracted human maxillary molars. Materials and Methods: Thirty extracted human maxillary molars having three separate roots and similar root shape were randomly selected from a pool of extracted teeth for this study and stored in normal saline solution until used. A computed tomography scanner (Philips Brilliance CT, 64-slice) was used to analyze root canals in the extracted maxillary molars. Specimens were scanned before and after canals were prepared using stainless steel K-Files, Ni-Ti rotary ProTaper and rotary SafeSiders instruments. Differences in dentin volume removed, surface area, the proportion of unchanged area and canal transportation were calculated using specially developed software. Results: Instrumentation of canals increased volume and surface area. Statistical analysis found a statistically significant difference among the 3 groups in total change in volume (P = 0.001) and total change in surface area (P = 0.13). Significant differences were found when testing both groups against group III (SafeSiders). Significant differences in change of volume were noted when grouping was made with respect to canal type (in MB and DB, P < 0.05). Conclusion: The current study used computed tomography, an innovative and nondestructive technique, to illustrate changes in canal geometry. Overall, there were few statistically significant differences between the three instrumentation techniques used. SafeSiders stainless steel 40/0.02 instruments exhibit a greater cutting efficiency on dentin than K-Files and ProTaper. CT is a new and valuable tool to study root canal geometry and changes after preparation in great detail. Further studies with 3D techniques are required to fully understand the biomechanical aspects of root canal preparation.

  19. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.


    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from radiation, oxygen deficiency and electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised that permit the use of modern computer and interconnection technology for safety-critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  20. Finite element solution techniques for large-scale problems in computational fluid dynamics (United States)

    Liou, J.; Tezduyar, T. E.


    Element-by-element approximate factorization, implicit-explicit and adaptive implicit-explicit approximation procedures are presented for the finite-element formulations of large-scale fluid dynamics problems. The element-by-element approximation scheme totally eliminates the need for formation, storage and inversion of large global matrices. Implicit-explicit schemes, which are approximations to implicit schemes, substantially reduce the computational burden associated with large global matrices. In the adaptive implicit-explicit scheme, the implicit elements are selected dynamically based on element level stability and accuracy considerations. This scheme provides implicit refinement where it is needed. The methods are applied to various problems governed by the convection-diffusion and incompressible Navier-Stokes equations. In all cases studied, the results obtained are indistinguishable from those obtained by the implicit formulations.
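The adaptive implicit-explicit selection can be sketched as a per-element stability check: elements whose local convective-plus-diffusive stability number exceeds a limit are treated implicitly, the rest explicitly. The exact criterion below is a schematic assumption, not the paper's formula:

```python
def flag_implicit_elements(velocities, diffusivities, h, dt, limit=1.0):
    """Adaptive implicit-explicit selection for a 1D convection-diffusion
    mesh of uniform element size h.  An element whose explicit stability
    number dt*(|u|/h + 2*nu/h^2) exceeds the limit is flagged implicit."""
    flags = []
    for u, nu in zip(velocities, diffusivities):
        number = dt * (abs(u) / h + 2.0 * nu / h ** 2)
        flags.append(number > limit)
    return flags
```

Only the flagged elements contribute to the implicit part of the operator, so the expensive solve is confined to the region that actually needs it, which is the "implicit refinement where it is needed" described above.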

  1. Computational techniques and data structures of the sparse underdetermined systems with using graph theory (United States)

    Pilipchuk, L. A.; Pilipchuk, A. S.


    For constructing the solutions of sparse linear systems, we propose effective methods, technologies and their implementation in Wolfram Mathematica. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure. In addition, such systems arise when estimating traffic in a generalized graph or multigraph on its unobservable part. For computing each vector of the basis of the solution space with a linear estimate in the worst case, we propose effective algorithms and data structures for the case when the support of the multigraph or graph of the sparse system contains cycles.

  2. Application of Assistive Computer Vision Methods to Oyama Karate Techniques Recognition

    Directory of Open Access Journals (Sweden)

    Tomasz Hachaj


    In this paper we propose a novel algorithm that enables online action segmentation and classification. The algorithm segments, from an incoming motion capture (MoCap) data stream, sport (or karate) movement sequences that are later processed by a classification algorithm. The segmentation is based on a Gesture Description Language classifier that is trained with an unsupervised learning algorithm. The classification is performed by a continuous-density forward-only hidden Markov model (HMM) classifier. Our methodology was evaluated on a unique dataset consisting of MoCap recordings of six Oyama karate martial artists, including a multiple-time champion of Kumite Knockdown Oyama karate. The dataset consists of 10 classes of actions and includes dynamic actions of stances, kicks and blocking techniques. The total number of samples was 1236. We examined several HMM classifiers with various numbers of hidden states and also a Gaussian mixture model (GMM) classifier to empirically find the best setup of the proposed method on our dataset. We used leave-one-out cross validation. The recognition rate of our methodology differs between karate techniques and ranges from 81% ± 15% up to 100%. Our method is not limited to this class of actions but can be easily adapted to any other MoCap-based actions. The description of our approach and its evaluation are the main contributions of this paper. The results presented in this paper are the outcome of pioneering research on online karate action classification.
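The leave-one-out evaluation protocol used above is generic and easy to state in code. The nearest-class-mean stand-ins below are hypothetical placeholders for the paper's HMM training and decoding steps:

```python
def leave_one_out(samples, labels, train, classify):
    """Leave-one-out cross validation: hold out each sample in turn,
    train on the rest, and report the fraction classified correctly."""
    correct = 0
    for i in range(len(samples)):
        model = train([s for j, s in enumerate(samples) if j != i],
                      [l for j, l in enumerate(labels) if j != i])
        correct += classify(model, samples[i]) == labels[i]
    return correct / len(samples)

# Toy stand-ins for HMM training/decoding: a nearest-class-mean
# classifier over one-dimensional "features".
def train_nearest_mean(xs, ys):
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def classify_nearest_mean(model, x):
    return min(model, key=lambda y: abs(model[y] - x))
```

With 1236 samples this protocol trains 1236 models, but it makes maximal use of a dataset too small to split into fixed train/test partitions.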

  3. Error Detection and Recovery Techniques for Variation-Aware CMOS Computing: A Comprehensive Review

    Directory of Open Access Journals (Sweden)

    Joseph Crop


    Full Text Available While Moore’s law scaling continues to double transistor density every technology generation, new design challenges are introduced. One of these challenges is variation, resulting in deviations in the behavior of transistors, most importantly in switching delays. These exaggerated delays widen the gap between the average-case and worst-case behavior of a circuit. Conventionally, circuits are designed to accommodate the worst-case delay and are therefore becoming very limited in their performance advantages. Thus, allowing for an average-case-oriented design is a promising solution for maintaining the pace of performance improvement over future generations. However, to maintain correctness, such an approach requires on-the-fly mechanisms to prevent, detect, and resolve violations. This paper explores such mechanisms, allowing the improvement of circuit performance under intensifying variations. We present speculative error detection techniques along with recovery mechanisms, and we discuss their ability to operate under extreme variations, including sub-threshold operation. While the main focus of this survey is on circuit approaches, for completeness we discuss higher-level architectural and algorithmic techniques as well.

  4. Evaluation of computer imaging technique for predicting the SPAD readings in potato leaves

    Directory of Open Access Journals (Sweden)

    M.S. Borhan


    Full Text Available Facilitating non-contact measurement, a computer-imaging system was devised and evaluated to predict the chlorophyll content in potato leaves. A charge-coupled device (CCD) camera paired with two optical filters and a light chamber was used to acquire green-band (550 ± 40 nm) and red-band (700 ± 40 nm) images from the same leaf. Potato leaves from 15 plants differing in coloration (green to yellow) and age were selected for this study. Histogram-based image features, namely the means and variances of the green- and red-band images, were extracted. Regression analyses demonstrated that the variation in SPAD meter readings could be explained by the means and variances of the gray-scale values. The fitted least-squares models based on mean gray-scale level were inversely related to the chlorophyll content of the potato leaf, with an R2 of 0.87 using the green-band image and an R2 of 0.79 using the red-band image. With the four extracted image features, the developed multiple linear regression model predicted the chlorophyll content with a high R2 of 0.88. The multiple regression model (using all features) provided an average prediction accuracy of 85.08% and a maximum accuracy of 99.8%. The prediction model using only the mean gray value of the red band showed an average accuracy of 81.6% with a maximum accuracy of 99.14%. Keywords: Computer imaging, Chlorophyll, SPAD meter, Regression, Prediction accuracy
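    The multiple-regression step can be sketched in a few lines; the features, coefficients and noise below are synthetic placeholders standing in for the paper's measured [mean, variance] features of the green- and red-band images.

```python
import numpy as np

# Synthetic stand-in data: 15 leaves, 4 histogram features per leaf
# ([mean_green, var_green, mean_red, var_red]); the weights and offset
# are hypothetical, chosen only to mimic an inverse gray-level relation.
rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(15, 4))
true_w = np.array([-0.1, 0.01, -0.05, 0.02])
y = X @ true_w + 40 + rng.normal(0, 0.5, 15)   # synthetic SPAD readings

# Fit y ≈ X w + b by ordinary least squares
Xb = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```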

  5. A novel potential/viscous flow coupling technique for computing helicopter flow fields (United States)

    Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul


    The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.

  6. Radiation Dose Reduction in Computed Tomography-Guided Lung Interventions using an Iterative Reconstruction Technique. (United States)

    Chang, D H; Hiss, S; Mueller, D; Hellmich, M; Borggrefe, J; Bunck, A C; Maintz, D; Hackenbroch, M


    To compare the radiation doses and image qualities of computed tomography (CT)-guided interventions using a standard-dose CT (SDCT) protocol with filtered back projection and a low-dose CT (LDCT) protocol with both filtered back projection and iterative reconstruction. Image quality and radiation doses (dose-length product and CT dose index) were retrospectively reviewed for 130 patients who underwent CT-guided lung interventions. SDCT at 120 kVp and automatic mA modulation and LDCT at 100 kVp and a fixed exposure were each performed for 65 patients. Image quality was objectively evaluated as the contrast-to-noise ratio and subjectively by two radiologists for noise impression, sharpness, artifacts and diagnostic acceptability on a four-point scale. The groups did not significantly differ in terms of diagnostic acceptability and complication rate. LDCT yielded a median 68.6% reduction in the radiation dose relative to SDCT. In the LDCT group, iterative reconstruction was superior to filtered back projection in terms of noise reduction and subjective image quality. The groups did not differ in terms of beam hardening artifacts. LDCT was feasible for all procedures and yielded a more than two-thirds reduction in radiation exposure while maintaining overall diagnostic acceptability, safety and precision. The iterative reconstruction algorithm is preferable according to the objective and subjective image quality analyses. Implementation of a low-dose computed tomography (LDCT) protocol for lung interventions is feasible and safe. LDCT protocols yield a significant reduction (more than 2/3) in radiation exposure. Iterative reconstruction algorithms considerably improve the image quality in LDCT protocols. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Non-equilibrium thermodynamics

    CERN Document Server

    De Groot, Sybren Ruurds


    The study of thermodynamics is especially timely today, as its concepts are being applied to problems in biology, biochemistry, electrochemistry, and engineering. This book treats irreversible processes and phenomena - non-equilibrium thermodynamics. S. R. de Groot and P. Mazur, Professors of Theoretical Physics, present a comprehensive and insightful survey of the foundations of the field, providing the only complete discussion of the fluctuating linear theory of irreversible thermodynamics. The application covers a wide range of topics: the theory of diffusion and heat conduction, fluid dyn

  8. General equilibrium without utility functions

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich


    How far can we go in weakening the assumptions of the general equilibrium model? Existence of equilibrium, structural stability and finiteness of equilibria of regular economies, genericity of regular economies and an index formula for the equilibria of regular economies have been known...... and the diffeomorphism of the equilibrium manifold with a Euclidean space; (2) the diffeomorphism of the set of no-trade equilibria with a Euclidean space; (3) the openness and genericity of the set of regular equilibria as a subset of the equilibrium manifold; (4) for small trade vectors, the uniqueness, regularity...... and stability of equilibrium for two versions of tatonnement; (5) the path-connectedness of the sets of stable equilibria....

  9. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report (United States)


    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  10. Bypassing absorbing objects in focused ultrasound using computer generated holographic technique. (United States)

    Hertzberg, Y; Navon, G


    Focused ultrasound (FUS) technology is based on heating a small volume of tissue while causing only minimal heating outside the focus region. Several FUS applications, such as brain and liver treatments, suffer from the presence of ultrasound absorbers in the acoustic path between the transducer and the focus. These absorbers pose a potential risk to FUS therapy, since they may cause unwanted heating outside the focus region. An acoustic-simulation-based solution for reducing absorber heating is proposed, demonstrated, and compared to the standard geometrical solution. The proposed solution uses 3D continuous acoustic holograms, generated by the Gerchberg-Saxton (GS) algorithm, which are described and demonstrated for the first time using an ultrasound planar phased-array transducer. Holograms were generated using the iterative GS algorithm and a fast Fourier transform (FFT) acoustic simulation. The performance of the holograms is demonstrated by temperature-elevation images of the absorber, acquired by a GE 1.5 T MRI scanner equipped with an InSightec FUS planar phased-array transducer built of 986 transmitting elements. The acoustic holographic technology is demonstrated numerically and experimentally using three letter patterns, "T," "A," and "U," which were manually built into 1 × 1 cm masks to represent the requested target fields. 3D holograms of a focused ultrasound field with a hole in intensity at the absorber region were generated and compared to the standard geometrical solution. The proposed holographic solution results in a 76% reduction of heating on the absorber, while keeping similar heating at the focus. In the present work we show for the first time the generation of efficient and uniform continuous ultrasound holograms in 3D. We use the holographic technology to generate FUS beams that bypass an absorber in the acoustic path to reduce unnecessary heating and potential clinical risk. The developed technique is superior

  11. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations. (United States)

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R


    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety posed by E. The method is applicable to scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be
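    The core of the threshold-based comparison can be sketched numerically: rather than testing the simulation S directly against the data D, compare the comparison error |S - D| with the margin to the safety threshold |Threshold - S|. The numbers below are synthetic stand-ins, not the FDA nozzle measurements, and a Welch-type statistic is used here where the paper applies a Student's t-test.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100.0                               # hypothetical hemolysis threshold
S = rng.normal(60.0, 2.0, size=20)      # simulated shear stress, replicates
D = rng.normal(62.0, 3.0, size=20)      # experimental measurements

error = np.abs(S - D)                   # |S - D|
margin = np.abs(T - S)                  # |Threshold - S|

# Welch's t statistic for mean(error) vs mean(margin)
se = np.sqrt(error.var(ddof=1) / error.size + margin.var(ddof=1) / margin.size)
t_stat = (error.mean() - margin.mean()) / se

# A strongly negative t indicates the model-experiment disagreement is
# negligible relative to the distance from the safety threshold.
model_adequate = t_stat < -2.0
```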

  12. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angle-dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angle-dependent problems, such as CT-scan devices, that are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
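    The basic ingredient behind any SN quadrature set can be illustrated with a textbook Gauss-Legendre set in the polar cosine mu, which integrates polynomials in mu exactly up to degree 2N-1 (the advanced sets developed in this work are far richer, with local angular refinement; this is only the standard starting point).

```python
import numpy as np

# Gauss-Legendre ordinates and weights on [-1, 1] for an N-point set
N = 8
mu, w = np.polynomial.legendre.leggauss(N)

# The weights reproduce angular integrals exactly for low-order moments,
# e.g. the integral of mu^2 over [-1, 1], which equals 2/3:
second_moment = np.sum(w * mu**2)
```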

  13. An Efficient Statistical Computation Technique for Health Care Big Data using R (United States)

    Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.


    Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosing a problem at an earlier stage increases the chances of survival and fast recovery, reducing both the recovery time and the cost of treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death. If detected at an early stage, it can be cured. Once a patient is found to have a breast tumor, it should be classified as cancerous or non-cancerous. The paper therefore uses the k-nearest neighbors (KNN) algorithm, one of the simplest machine learning algorithms and an instance-based learning algorithm, to classify the data. New records are added day to day, which leads to an increase in the data to be classified, and this tends to become a big data problem. The algorithm is implemented in R, which is the most popular platform for applying machine learning algorithms to statistical computing. Experimentation is conducted using various classification evaluation metrics on various values of k. The results show that the KNN algorithm performs better than existing models.
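    Although the paper implements KNN in R, the classification rule itself is language-agnostic; a minimal sketch with synthetic two-cluster data (standing in for benign/malignant feature vectors) is:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k training points closest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k nearest
    return np.bincount(y_train[nearest]).argmax() # most common label

# Two synthetic clusters standing in for benign (0) / malignant (1)
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                    [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])

label = knn_predict(X_train, y_train, np.array([1.1, 1.0]), k=3)  # → 0
```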

  14. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    J. Rodolfo Maestre-Rendon


    Full Text Available Manual measurements of foot anthropometry can lead to errors, since this task depends on the experience of the specialist who performs it, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The implemented smart sensor required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.

  15. Water resources climate change projections using supervised nonlinear and multivariate soft computing techniques (United States)

    Sarhadi, Ali; Burn, Donald H.; Johnson, Fiona; Mehrotra, Raj; Sharma, Ashish


    Accurate projection of global warming on the probabilistic behavior of hydro-climate variables is one of the main challenges in climate change impact assessment studies. Due to the complexity of climate-associated processes, different sources of uncertainty influence the projected behavior of hydro-climate variables in regression-based statistical downscaling procedures. The current study presents a comprehensive methodology to improve the predictive power of the procedure to provide improved projections. It does this by minimizing the uncertainty sources arising from the high-dimensionality of atmospheric predictors, the complex and nonlinear relationships between hydro-climate predictands and atmospheric predictors, as well as the biases that exist in climate model simulations. To address the impact of the high dimensional feature spaces, a supervised nonlinear dimensionality reduction algorithm is presented that is able to capture the nonlinear variability among projectors through extracting a sequence of principal components that have maximal dependency with the target hydro-climate variables. Two soft-computing nonlinear machine-learning methods, Support Vector Regression (SVR) and Relevance Vector Machine (RVM), are engaged to capture the nonlinear relationships between predictand and atmospheric predictors. To correct the spatial and temporal biases over multiple time scales in the GCM predictands, the Multivariate Recursive Nesting Bias Correction (MRNBC) approach is used. The results demonstrate that this combined approach significantly improves the downscaling procedure in terms of precipitation projection.

  16. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. (United States)

    Yassin, Nisreen I R; Omran, Shaimaa; El Houby, Enas M F; Allam, Hemat


    The incidence of breast cancer in women has increased significantly in recent years. Physicians' experience in diagnosing and detecting breast cancer can be assisted by using computerized feature extraction and classification algorithms. This paper presents the conduct and results of a systematic review (SR) that aims to investigate the state of the art regarding computer-aided diagnosis/detection (CAD) systems for breast cancer. The SR was conducted using a comprehensive selection of scientific databases as reference sources, allowing access to diverse publications in the field. The scientific databases used are Springer Link (SL), Science Direct (SD), IEEE Xplore Digital Library, and PubMed. Inclusion and exclusion criteria were defined and applied to each retrieved work to select those of interest. Of 320 studies retrieved, 154 were included. However, the scope of this research is limited to scientific and academic works and excludes commercial interests. This survey provides a general analysis of the current status of CAD systems according to the image modalities used and the machine-learning-based classifiers. Potential research studies are discussed to create more objective and efficient CAD systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Gypsum plasterboards enhanced with phase change materials: A fire safety assessment using experimental and computational techniques

    Directory of Open Access Journals (Sweden)

    Kolaitis Dionysios I.


    Full Text Available Phase Change Materials (PCM) can be used for thermal energy storage, aiming to enhance building energy efficiency. Recently, gypsum plasterboards with incorporated paraffin-based PCM blends have become commercially available. In the high-temperature environment developed during a fire, the paraffins, which exhibit relatively low boiling points, may evaporate and, escaping through the gypsum plasterboard's porous structure, emerge in the fire region, where they may ignite, thus adversely affecting the fire resistance characteristics of the building. Aiming to assess the fire safety behaviour of such building materials, an extensive experimental and computational analysis is performed. The fire behaviour and the main thermo-physical properties of PCM-enhanced gypsum plasterboards are investigated using a variety of standard tests and devices (Scanning Electron Microscopy, Thermo-Gravimetric Analysis, Cone Calorimeter). The obtained results are used to develop a dedicated numerical model, which is implemented in a CFD code. CFD simulations are validated using measurements obtained in a cone calorimeter. In addition, the CFD code is used to simulate an ISO 9705 room exposed to fire conditions, demonstrating that PCM addition may indeed adversely affect the fire safety of a gypsum plasterboard clad building.

  18. Iterative reconstruction techniques for computed tomography part 2: initial results in dose reduction and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Leiner, Tim; Jong, Pim A. de; Nievelstein, Rutger A.J.; Schilham, Arnold M.R. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Heer, Linda M. de [Cardiothoracic Surgery, Utrecht (Netherlands); Budde, Ricardo P.J. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Gelre Hospital, Department of Radiology, Apeldoorn (Netherlands)


    To present the results of a systematic literature search aimed at determining to what extent the radiation dose can be reduced with iterative reconstruction (IR) for cardiopulmonary and body imaging with computed tomography (CT) in the clinical setting and what the effects on image quality are with IR versus filtered back-projection (FBP), and to provide recommendations for future research on IR. We searched Medline and Embase from January 2006 to January 2012 and included original research papers concerning IR for CT. The systematic search yielded 380 articles. Forty-nine relevant studies were included. These studies concerned: the chest (n = 26), abdomen (n = 16), both chest and abdomen (n = 1), head (n = 4), spine (n = 1), and no specific area (n = 1). IR reduced noise and artefacts, and it improved subjective and objective image quality compared to FBP at the same dose. Conversely, low-dose IR and normal-dose FBP showed similar noise, artefacts, and subjective and objective image quality. Reported dose reductions ranged from 23 to 76 % compared to locally used default FBP settings. However, IR has not yet been investigated for ultra-low-dose acquisitions with clinical diagnosis and accuracy as endpoints. Benefits of IR include improved subjective and objective image quality as well as radiation dose reduction while preserving image quality. Future studies need to address the value of IR in ultra-low-dose CT with clinically relevant endpoints. (orig.)

  19. Predicting the accuracy of multiple sequence alignment algorithms by using computational intelligent techniques. (United States)

    Ortuño, Francisco M; Valenzuela, Olga; Pomares, Hector; Rojas, Fernando; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio


    Multiple sequence alignments (MSAs) have become one of the most studied approaches in bioinformatics, underpinning important tasks such as structure prediction, biological function analysis and next-generation sequencing. However, current MSA algorithms do not always provide consistent solutions, since alignments become increasingly difficult when dealing with low-similarity sequences. As is widely known, these algorithms depend directly on specific features of the sequences, which strongly influence alignment accuracy. Many MSA tools have recently been designed, but it is not possible to know in advance which one is the most suitable for a particular set of sequences. In this work, we analyze some of the most widely used algorithms in the literature and their dependence on several features. A novel intelligent algorithm based on least-squares support vector machines is then developed to predict how accurate each alignment could be, depending on its analyzed features. This algorithm is evaluated on a dataset of 2180 MSAs. The proposed system first estimates the accuracy of possible alignments; the most promising methodologies are then selected to align each set of sequences. Since only one selected algorithm is run, the computational time is not excessively increased.

  20. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques. (United States)

    Maestre-Rendon, J Rodolfo; Rivera-Roman, Tomas A; Sierra-Hernandez, Juan M; Cruz-Aceves, Ivan; Contreras-Medina, Luis M; Duarte-Galvan, Carlos; Fernandez-Jaramillo, Arturo A


    Manual measurements of foot anthropometry can lead to errors, since this task depends on the experience of the specialist who performs it, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The implemented smart sensor required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.

  1. Expiratory computed tomographic techniques: a cause of a poor rate of change in lung volume. (United States)

    Morikawa, Keiko; Okada, Fumito; Mori, Hiromu


    Ninety-nine patients (29 males and 70 females; mean age, 57.1 years; range, 22-81 years) were included in this study to evaluate the factors affecting smaller lung volume changes in expiratory high-resolution computed tomography performed to depict air trapping. All patients underwent inspiratory and expiratory chest thin-section CT examinations and pulmonary function tests. Air trapping on CT images was graded subjectively. All variables (age, sex, diagnosis, pulmonary function index, and air trapping score) were compared with the degree of change in lung volume between the inspiratory and expiratory CT examinations. The variables affecting a lower degree of volume change were vital capacity, forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1.0), and the FEV1.0/FVC ratio. Bronchiolitis obliterans was the dominant diagnosis in patients with insufficient degrees of breath holding and in patients with negative air trapping scores despite an abnormal air trapping index. An insufficient degree of lung changes between inspiration and expiration on CT examinations represented bronchiolitis obliterans, which resulted in low FEV1.0 and FEV1.0/FVC values. Changes in the time gap from the announcement of exhalation and breath holding to the start of scanning most effectively indicated air trapping in patients with bronchiolar disorders.

  2. Computation of currents induced by ELF electric fields in anisotropic human tissues using the Finite Integration Technique (FIT)

    Directory of Open Access Journals (Sweden)

    V. C. Motrescu


    Full Text Available In recent years, the task of estimating the currents induced within the human body by environmental electromagnetic fields has received increased attention from scientists around the world. While important progress has been made in this direction, the unpredictable behaviour of living biological tissue has made it difficult to quantify its reaction to electromagnetic fields and has kept the problem open. A successful alternative to the very difficult task of performing measurements is to compute the fields within a human body model using numerical methods implemented in software. One of the difficulties is that some tissue types exhibit an anisotropic character with respect to their dielectric properties. Our work consists of computing the currents induced by extremely low frequency (ELF) electric fields in anisotropic muscle tissues, using a human body model extended with muscle-fibre orientations as well as an extended version of the Finite Integration Technique (FIT) able to handle fully anisotropic dielectric properties.
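    The anisotropy the record refers to can be made concrete: with a tensor conductivity built from the local fibre direction, the induced current density J = sigma·E is no longer parallel to the applied ELF field E. The conductivity values below are illustrative order-of-magnitude figures, not the model's data.

```python
import numpy as np

# Hypothetical longitudinal/transverse conductivities of muscle tissue
sigma_long = 0.35    # S/m, along the fibre direction
sigma_trans = 0.08   # S/m, across the fibre direction

fibre = np.array([1.0, 0.0, 0.0])        # local fibre direction (unit vector)
P = np.outer(fibre, fibre)               # projector onto the fibre axis
sigma = sigma_long * P + sigma_trans * (np.eye(3) - P)   # conductivity tensor

E = np.array([1.0, 1.0, 0.0])            # ELF electric field, V/m
J = sigma @ E                            # induced current density, A/m^2
# J = [0.35, 0.08, 0.0] is not parallel to E = [1, 1, 0]
```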

  3. The automated design of materials far from equilibrium (United States)

    Miskin, Marc Z.

    Automated design is emerging as a powerful concept in materials science. By combining computer algorithms, simulations, and experimental data, new techniques are being developed that start with high-level functional requirements and identify the ideal materials that achieve them. This represents a radically different picture of how materials become functional, in which technological demand drives material discovery rather than the other way around. At the frontiers of this field, materials systems previously considered too complicated can start to be controlled and understood. Particularly promising are materials far from equilibrium. Robustness, high strength, self-healing, and memory are properties displayed by several materials systems that are intrinsically out of equilibrium. These and other properties could be revolutionary, provided they can first be controlled. This thesis conceptualizes and implements a framework for designing materials that are far from equilibrium. We show how, even in the absence of a complete physical theory, design from the top down is possible and lends itself to producing physical insight. As a prototype system, we work with granular materials: collections of athermal, macroscopic, identical objects, since these materials function both as an essential component of industrial processes and as a model system for many non-equilibrium states of matter. We show that by placing granular materials in the context of design, benefits emerge simultaneously for fundamental and applied interests. As first steps, we use our framework to design granular aggregates with extreme properties like high stiffness and softness. We demonstrate control over nonlinear effects by producing exotic aggregates that stiffen under compression. Expanding on our framework, we conceptualize new ways of thinking about material design when automatic discovery is possible. We show how to build rules that link particle shapes to arbitrary granular packing

  4. Original Protocol Using Computed Tomographic Angiography for Diagnosis of Brain Death: A Better Alternative to Standard Two-Phase Technique? (United States)

    Sawicki, Marcin; Sołek-Pastuszka, Joanna; Jurczyk, Krzysztof; Skrzywanek, Piotr; Guziński, Maciej; Czajkowski, Zenon; Mańko, Witold; Burzyńska, Małgorzata; Safranow, Krzysztof; Poncyljusz, Wojciech; Walecka, Anna; Rowiński, Olgierd; Walecki, Jerzy; Bohatyrewicz, Romuald


    BACKGROUND The application of computed tomographic angiography (CTA) for the diagnosis of brain death (BD) is limited by the low sensitivity of the commonly used two-phase method, which assesses arterial and venous opacification at the 60th second after contrast injection. The hypothesis was that a reduction in the scanning delay might increase the sensitivity of the test. Therefore, an original CTA technique was introduced and compared with catheter angiography as a reference. MATERIAL AND METHODS In a prospective multicenter trial, 84 clinically brain-dead patients were examined using CTA and catheter angiography. The sensitivities of the original CTA technique, involving an arterial assessment at the 25th second and a venous assessment at the 40th second, and the standard CTA technique, involving an arterial and venous assessment at the 60th second, were compared to catheter angiography. RESULTS Catheter angiography results were consistent with the clinical diagnosis of BD in all cases. In comparison to catheter angiography, the sensitivity was 0.93 (95% CI, 0.85-0.97; p<0.001) for the original CTA technique and 0.57 (95% CI, 0.46-0.68; p<0.001) for the standard protocol. The differences were statistically significant (p=0.03 for original CTA and p<0.001 for standard CTA). Decompressive craniectomy predisposes to a false-negative CTA result, with a relative risk of 3.29 (95% CI, 1.76-5.81; p<0.001). CONCLUSIONS Our original CTA technique, assessing the cerebral arteries during the arterial phase and the deep cerebral veins with a delay of 15 seconds, is a highly sensitive test for the diagnosis of BD. This method may be a better alternative to the commonly used technique.

  5. Thermal equilibrium of goats. (United States)

    Maia, Alex S C; Nascimento, Sheila T; Nascimento, Carolina C N; Gebremedhin, Kifle G


    The effects of air temperature and relative humidity on the thermal equilibrium of goats in a tropical region were evaluated. Nine non-pregnant Anglo Nubian nanny goats were used in the study. An indirect calorimeter was designed and built to measure oxygen consumption, carbon dioxide production, methane production, and water vapour pressure of the air exhaled by the goats. Physiological parameters (rectal temperature, skin temperature, hair-coat temperature, expired air temperature, and respiratory rate and volume) as well as environmental parameters (air temperature, relative humidity, and mean radiant temperature) were measured. The results show that respiratory rate, ventilation rate, and latent heat loss did not change significantly for air temperatures between 22 and 26°C. In this temperature range, metabolic heat was lost mainly by convection and long-wave radiation. For temperatures greater than 30°C, the goats maintained thermal equilibrium mainly by evaporative heat loss. At the higher air temperatures, the respiratory and ventilation rates as well as body temperatures were significantly elevated. It can be concluded that for Anglo Nubian goats, the upper limit of air temperature for comfort is around 26°C when the goats are protected from direct solar radiation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Computer-assisted preoperative simulation for positioning and fixation of plate in 2-stage procedure combining maxillary advancement by distraction technique and mandibular setback surgery

    Directory of Open Access Journals (Sweden)

    Hideyuki Suenaga


    Conclusion: The implementation of computer-assisted preoperative simulation for the positioning and fixation of the plate in a 2-stage orthognathic procedure combining the maxillary distraction technique and mandibular setback surgery yielded good results.

  7. Optimization of the Production of Inactivated Clostridium novyi Type B Vaccine Using Computational Intelligence Techniques. (United States)

    Aquino, P L M; Fonseca, F S; Mozzer, O D; Giordano, R C; Sousa, R


    Clostridium novyi causes necrotic hepatitis in sheep and cattle, as well as gas gangrene. The microorganism is strictly anaerobic, fastidious, and difficult to cultivate on an industrial scale. C. novyi type B produces alpha and beta toxins, with the alpha toxin being linked to the presence of specific bacteriophages. The main strategy to combat diseases caused by C. novyi is vaccination, employing vaccines produced with toxoids or with toxoids and bacterins. In order to identify culture medium components and concentrations that maximized cell density and alpha toxin production, a neuro-fuzzy algorithm was applied to predict the yields of the fermentation process for production of C. novyi type B, within a global search procedure using the simulated annealing technique. Maximizing cell density and toxin production is a multi-objective optimization problem and could be treated by a Pareto approach. Nevertheless, the approach chosen here was a step-by-step one. The optimum values obtained with this approach were validated in laboratory scale, and the results were used to reload the data matrix for re-parameterization of the neuro-fuzzy model, which was implemented for a final optimization step with regard to alpha toxin productivity. With this methodology, a threefold increase in alpha toxin could be achieved.
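    The global search step described above can be sketched as a simulated annealing loop over medium compositions. The `predicted_yield` function below is a toy stand-in for the trained neuro-fuzzy predictor, and the two medium components and their optimum are invented for illustration.

```python
import math
import random

random.seed(0)

# Toy stand-in for the neuro-fuzzy yield model: peak yield at (5, 3) in
# arbitrary concentration units (components and optimum are illustrative).
def predicted_yield(x):
    g, p = x
    return math.exp(-((g - 5.0) ** 2 + (p - 3.0) ** 2))

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, step=0.5):
    """Maximize f by random perturbation with a cooling temperature."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = tuple(c + random.uniform(-step, step) for c in x)
        fc = f(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc > fx or random.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
        if fx > fbest:
            best, fbest = x, fx
        t *= cooling
    return best, fbest

opt, val = simulated_annealing(predicted_yield, (0.0, 0.0))
print(opt, val)
```

    The early high-temperature phase allows downhill moves, which is what gives the method its global character before cooling freezes it into a local basin.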

  8. Using combined computational techniques to predict the glass transition temperatures of aromatic polybenzoxazines.

    Directory of Open Access Journals (Sweden)

    Phumzile Mhlanga

    The Molecular Operating Environment (MOE) software is used to construct a series of benzoxazine monomers for which a variety of parameters relating to the structures (e.g., water-accessible surface area, negative van der Waals surface area, hydrophobic volume, and the sum of atomic polarizabilities) are obtained, and quantitative structure-property relationship (QSPR) models are formulated. Three QSPR models (formulated using up to 5 descriptors) are first used to make predictions for the initiator data set (n = 9) and compared to published thermal data; in all of the QSPR models there is a high level of agreement between the actual and predicted data (within 0.63-1.86 K over the entire dataset). The water-accessible surface area is found to be the most important descriptor in the prediction of Tg. Molecular modelling simulations of the benzoxazine polymer (minus initiator), carried out at the same time using the Materials Studio software suite, provide an independent prediction of Tg. Predicted Tg values from molecular modelling fall in the middle of the range of the experimentally determined Tg values, indicating that the structure of the network is influenced by the nature of the initiator used. Hence both techniques can provide predictions of glass transition temperatures and complementary data for polymer design.
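    A QSPR model of the kind described is, at its core, a regression of Tg on computed descriptors. The sketch below fits such a model by ordinary least squares; the descriptor values are fabricated, and the targets are generated from an exact linear rule (Tg = 100 + ASA + polarizability) so the fit recovers it, which a real dataset of course would not.

```python
import numpy as np

# Fabricated descriptor matrix: rows = monomers, columns =
# (water-accessible surface area, sum of atomic polarizabilities).
X = np.array([
    [310.0, 42.1],
    [295.0, 39.8],
    [330.0, 45.0],
    [350.0, 47.2],
    [280.0, 38.5],
])
# Targets (K) built from Tg = 100 + 1.0*ASA + 1.0*polarizability.
Tg = np.array([452.1, 434.8, 475.0, 497.2, 418.5])

# Fit Tg = b0 + b1*ASA + b2*polarizability by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, Tg, rcond=None)

pred = A @ coef
residuals = Tg - pred
print(coef, np.abs(residuals).max())
```

    With more monomers than coefficients, the residuals quantify how well the chosen descriptors explain Tg, which is how descriptor importance (here, the ASA term) is judged in practice.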

  9. Effect of various veneering techniques on mechanical strength of computer-controlled zirconia framework designs. (United States)

    Kanat, Burcu; Cömlekoğlu, Erhan M; Dündar-Çömlekoğlu, Mine; Hakan Sen, Bilge; Ozcan, Mutlu; Ali Güngör, Mehmet


    The objectives of this study were to evaluate the fracture resistance (FR), flexural strength (FS), and shear bond strength (SBS) of a zirconia framework material veneered with different methods and to assess the stress distributions using finite element analysis (FEA). Zirconia frameworks fabricated in the forms of crowns for FR, bars for FS, and disks for SBS (N = 90, n = 10) were veneered with either (a) file splitting (CAD-on) (CD), (b) layering (L), or (c) overpressing (P) methods. For crown specimens, stainless steel dies (N = 30; 1 mm chamfer) were scanned using the labside contrast spray. A bilayered design was produced for CD, whereas a reduced design (1 mm) was used for L and P to support the veneer by computer-aided design and manufacturing. For bar (1.5 × 5 × 25 mm³) and disk (2.5 mm diameter, 2.5 mm height) specimens, zirconia blocks were sectioned under water cooling with a low-speed diamond saw and sintered. To prepare the suprastructures in the appropriate shapes for the three mechanical tests, nano-fluorapatite ceramic was layered and fired for L, fluorapatite ceramic was pressed for P, and the milled lithium-disilicate ceramics were fused with zirconia by a thixotropic glass ceramic for CD and then sintered for crystallization of the veneering ceramic. Crowns were then cemented to the metal dies. All specimens were stored at 37°C and 100% humidity for 48 hours. Mechanical tests were performed, and data were statistically analyzed (ANOVA, Tukey's, α = 0.05). Stereomicroscopy and scanning electron microscopy (SEM) were used to evaluate the failure modes and surface structure. FEA modeling of the crowns was obtained. Mean FR values (N ± SD) of CD (4408 ± 608) and L (4323 ± 462) were higher than P (2507 ± 594) (p < 0.05). Veneering ceramic on zirconia with a reduced framework design may reduce ceramic chipping. © 2014 by the American College of Prosthodontists.


    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  11. Thermodynamic equilibrium at heterogeneous pressure (United States)

    Vrijmoed, Johannes C.; Podladchikov, Yuri Y.


    Recent advances in metamorphic petrology point out the importance of grain-scale pressure variations in high-temperature metamorphic rocks. Pressures derived from chemical zonation using unconventional geobarometry based on equal chemical potentials fit mechanically feasible pressure variations. Here a thermodynamic equilibrium method is presented that predicts chemical zoning as a result of pressure variations by Gibbs energy minimization. Equilibrium thermodynamic prediction of the chemical zoning in the case of pressure heterogeneity is done by constrained Gibbs minimization using linear programming techniques. Compositions of phases considered in the calculation are discretized into 'pseudo-compounds' spanning the entire compositional space. Gibbs energies of these discrete compounds are generated for a given range and resolution of pressures, for example derived from barometry or from mechanical model predictions. Gibbs energy minimization is subsequently performed considering all compounds of different composition and pressure. In addition to constraining the system composition, a certain proportion of the system is constrained at a specified pressure. Input pressure variations need to be discretized, and each discrete pressure defines an additional constraint for the minimization. The proportion of the system at each different pressure is equally distributed over the number of input pressures. For example, if two input pressures P1 and P2 are specified, two constraints are added: 50 percent of the system is constrained at P1 while the remaining 50 percent is constrained at P2. The method has been tested for a set of 10 input pressures obtained by Tajčmanová et al. (2014) using their unconventional geobarometry method in a plagioclase rim around kyanite. Each input pressure is added as a constraint to the minimization (1/10 of the system for each discrete pressure). Constraining the system composition to the average composition of the plagioclase rim
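    The pseudo-compound idea can be sketched on a toy binary phase. The abstract's method solves a linear program; here a brute-force enumeration over the discretized compositions stands in for the LP solve. The quadratic Gibbs model, whose minimum shifts with pressure, and all numbers are invented for illustration.

```python
import itertools

# Toy Gibbs energy of a pseudo-compound of composition x (mole fraction of B)
# at pressure p (GPa): a quadratic whose preferred composition shifts with p.
def gibbs(x, p_gpa):
    x_pref = 0.2 + 0.05 * p_gpa      # pressure-dependent preferred composition
    return (x - x_pref) ** 2

grid = [i / 100 for i in range(101)]      # discretized compositions
pressures = [1.0, 5.0]                    # GPa: one constraint per pressure slice
fractions = [0.5, 0.5]                    # equal proportion at each pressure
x_bulk = 0.35                             # bulk-composition constraint

best = None
for xs in itertools.product(grid, repeat=len(pressures)):
    # Mass balance: the slice compositions must reproduce the bulk composition.
    if abs(sum(f * x for f, x in zip(fractions, xs)) - x_bulk) > 1e-9:
        continue
    g = sum(f * gibbs(x, p) for f, x, p in zip(fractions, xs, pressures))
    if best is None or g < best[0]:
        best = (g, xs)

print(best)   # (total G, composition chosen for each pressure slice)
```

    The minimizer picks a different composition for each pressure slice (0.25 at 1 GPa, 0.45 at 5 GPa in this toy), which is exactly the predicted chemical zoning at heterogeneous pressure.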

  12. Preliminary Clinical Application of Removable Partial Denture Frameworks Fabricated Using Computer-Aided Design and Rapid Prototyping Techniques. (United States)

    Ye, Hongqiang; Ning, Jing; Li, Man; Niu, Li; Yang, Jian; Sun, Yuchun; Zhou, Yongsheng

    The aim of this study was to explore the application of computer-aided design and rapid prototyping (CAD/RP) for removable partial denture (RPD) frameworks and evaluate the fitness of the technique for clinical application. Three-dimensional (3D) images of dentition defects were obtained using a lab scanner. The RPD frameworks were designed using commercial dental software and manufactured using selective laser melting (SLM). A total of 15 cases of RPD prostheses were selected, wherein each patient received two types of RPD frameworks, prepared by CAD/RP and investment casting. Primary evaluation of the CAD/RP framework was performed by visual inspection. The gap between the occlusal rest and the relevant rest seat was then replaced using silicone, and the specimens were observed and measured. Paired t test was used to compare the average thickness and distributed thickness between the CAD/RP and investment casting frameworks. Analysis of variance test was used to compare the difference in thickness among different zones. The RPD framework was designed and directly manufactured using the SLM technique. CAD/RP frameworks may meet the clinical requirements with satisfactory retention and stability and no undesired rotation. Although the average gap between the occlusal rest and the corresponding rest seat of the CAD/RP frameworks was slightly larger than that of the investment casting frameworks (P < .05), it was acceptable for clinical application. RPD frameworks can be designed and fabricated directly using digital techniques with acceptable results in clinical application.
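    The paired t test used to compare the two framework types is simple enough to write out. The per-patient gap measurements below are invented; only the arithmetic of the statistic is the point.

```python
import math

# Hypothetical per-patient thickness gaps (mm) under the occlusal rest for
# CAD/RP vs investment-cast frameworks (paired by patient; numbers invented).
cadrp = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32, 0.34]
cast  = [0.27, 0.25, 0.30, 0.28, 0.29, 0.26, 0.28, 0.30]

d = [a - b for a, b in zip(cadrp, cast)]   # paired differences
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance
t = mean_d / math.sqrt(var_d / n)          # t statistic with n-1 = 7 df

print(round(mean_d, 4), round(t, 2))
```

    A paired design is appropriate here because each patient received both framework types, so between-patient anatomy cancels out of the differences.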

  13. Reliability of the assessment of lower limb torsion using computed tomography: analysis of five different techniques

    Energy Technology Data Exchange (ETDEWEB)

    Liodakis, Emmanouil; Doxastaki, Iosifina; Chu, Kongfai; Krettek, Christian; Gaulke, Ralph; Citak, Musa [Hannover Medical School, Trauma Department, Hannover (Germany); Kenawey, Mohamed [Sohag University Hospital, Orthopaedic Surgery Department, Sohag (Egypt)


    Various methods have been described to define the femoral neck and distal tibial axes based on a single CT image. The most popular are the Hernandez and Weiner methods for defining the femoral neck axis and the Jend, Ulm, and bimalleolar methods for defining the distal tibial axis. The purpose of this study was to calculate the intra- and interobserver reliability of the above methods and to determine intermethod differences. Three physicians separately measured the rotational profile of 44 patients using CT examinations on two different occasions. The average age of the patients was 36.3 ± 14.4 years, and there were 25 male and 19 female patients. After completing the first two sessions of measurements, one observer chose certain cuts at the levels of the femoral neck, femoral condylar area, tibial plateau, and distal tibia. The three physicians then repeated all measurements using these CT cuts. The greatest intraclass correlation coefficients were achieved with the Hernandez (0.99 intra- and 0.93 interobserver correlations) and bimalleolar methods (0.99 intra- and 0.92 interobserver correlations) for measuring the femoral neck and distal tibia axes, respectively. A statistically significant decrease in the interobserver median absolute differences could be achieved through the use of predefined CT scans only for measurements of the femoral condylar axis and the distal tibial axis using the Ulm method. The bimalleolar axis method underestimated the tibial torsion angle by an average of 4.8° and 13° compared to the Ulm and Jend techniques, respectively. The methods with the greatest inter- and intraobserver reliabilities were the Hernandez and bimalleolar methods for measuring femoral anteversion and tibial torsion, respectively. The high intermethod differences make it difficult to compare measurements made with different methods. (orig.)

  14. Enhanced Genetic Algorithm based computation technique for multi-objective Optimal Power Flow solution

    Energy Technology Data Exchange (ETDEWEB)

    Kumari, M. Sailaja; Maheswarapu, Sydulu [Department of Electrical Engineering, National Institute of Technology, Warangal (India)


    Optimal Power Flow (OPF) is used for developing corrective strategies and performing least-cost dispatches. To guide the decision making of power system operators, a more robust and faster OPF algorithm is needed. OPF can be solved for the minimum generation cost that satisfies the power balance equations and system constraints. However, cost-based OPF solutions usually result in unattractive system losses and voltage profiles. In the present paper the OPF problem is formulated as a multi-objective optimization problem, where optimal control settings are obtained for simultaneous minimization of fuel cost and loss; loss and voltage stability index; fuel cost and voltage stability index; and finally fuel cost, loss, and voltage stability index. The present paper combines a new Decoupled Quadratic Load Flow (DQLF) solution with an Enhanced Genetic Algorithm (EGA) to solve the OPF problem. A Strength Pareto Evolutionary Algorithm (SPEA)-based approach with a strongly dominated set of solutions is used to form the Pareto-optimal set. A hierarchical clustering technique is employed to limit the set of trade-off solutions. Finally, a fuzzy-based approach is used to obtain the optimal solution from the trade-off curve. The proposed multi-objective evolutionary algorithm with the EGA-DQLF model determines a diverse Pareto-optimal front in just 50 generations. The IEEE 30-bus system is used to demonstrate the behavior of the proposed approach. The obtained final optimal solution is compared with that obtained using Particle Swarm Optimization (PSO) and a fuzzy satisfaction maximization approach. The results using the EGA-DQLF with SPEA approach show its superiority over the PSO-fuzzy approach. (author)
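    The dominance test at the heart of forming a Pareto-optimal set can be shown in isolation. This is a minimal non-dominated filter for minimization objectives, not the full SPEA with its external archive and fitness assignment; the candidate operating points are invented.

```python
# Minimal Pareto-front filter for minimization objectives
# (fuel cost $/h, loss MW, voltage stability index); values invented.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [
    (802.1, 9.8, 0.131),
    (812.4, 8.9, 0.128),
    (799.5, 10.6, 0.140),
    (820.0, 9.5, 0.135),   # dominated by (812.4, 8.9, 0.128)
    (830.2, 8.9, 0.129),   # dominated by (812.4, 8.9, 0.128)
]

front = pareto_front(candidates)
print(front)
```

    Downstream steps such as hierarchical clustering (to thin the front) and fuzzy membership ranking (to pick one compromise solution) all operate on the set this filter produces.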

  15. Computer-guided technique evaluation of the bony palate for planning individual implant placement. (United States)

    Cagimni, Pinar; Govsa, Figen; Ozer, Mehmet Asim; Kazak, Zuhal


    Different clinical problems may require a surgical approach to the dental arch, such as dentofacial orthopedics, implant-supported dental prostheses, maxillary orthodontic protraction, removable appliances, and posttraumatic dental reconstruction. The aim of this study is to analyze dental arch size and type for supporting individual dental prostheses. In this study, reference measurements of the length of the bony palate, maxillary intercanine width, maxillary intermolar width, and the ratio of the maxillary to the palatine surface were taken in 120 bony palates using a computer software program. The average length of the bony palate, maxilla, and palatine was measured as 104.4 ± 30.3, 40.05 ± 4.05, and 15.00 ± 3.03 mm, respectively. The average widths of the intermaxillary distance on the right and left sides were measured as 13.75 ± 1.50 and 12.51 ± 1.50 mm, respectively. The average width of the intermolar distance was calculated as 19.82 ± 1.61 mm (right side) and 18.89 ± 1.69 mm (left side). The maxillary dentitions were classified as square (17%), round-square (63.5%), round (14.4%), and round V-shaped arches (5.1%). The round-square ones showed no prominent principal component. Among the maxillary arches, the round arches were characterized by small values and the round V-shaped ones by the largest values. Asymmetry between the right and the left bony palate was observed. Areas with an equal bony palate on both sides were present in 64.4% of the cases, and in 33.1% of the cases the bony palate was dominant on the right. The primary principle in reconstructive treatment should be describing the geometrical forms and mathematical details of the bony palate. Three-dimensional reference values relative to the dental arch may increase the success of individual surgical treatment and reduce possible complications. With the help of certain software, this research has made it possible to investigate the variability of the


    Directory of Open Access Journals (Sweden)

    I. F. Arshava


    An upsurge of interest in implicit personality assessment, currently observed both in personality psychodiagnostics and in experimental studies of social attitudes and prejudices, signals the shifting of researchers' attention from defining between-person personality taxonomy to specifying comprehensive within-person processes, the dynamics of which can be captured at the level of an individual case. This research examines the possibility of the implicit assessment of the individual's stability vs. susceptibility to failure stress by comparing the degrees of efficacy in the voluntary self-regulation of a computer-simulated information-processing activity under different conditions (patent of Ukraine № 91842, issued in 2010). By exposing two groups of participants (university undergraduates) to processing information whose scope exceeds human short-term memory capacity at one of the stages of the modeled activity, an unexpected and unavoidable failure is elicited. The participants who retain stability of their self-regulation behavior after having been exposed to failure, i.e. who keep processing information as effectively as they did prior to failure, are claimed to retain homeostasis and thus possess emotional stability. Those who lose homeostasis after failure and display lower standards of self-regulation behavior are considered to be susceptible to stress. The validity of the suggested type of implicit diagnostics was empirically tested by clustering (K-means algorithm) two samples of participants on the properties of their self-regulation behavior and testing between-cluster differences on a set of explicitly assessed variables: action control efficacy (Kuhl, 2001), preferred strategies of coping with stressful situations (Endler, Parker, 1990), purpose-in-life orientation (a Russian version of the test by Crumbaugh and Maholick, modified by D. Leontiev, 1992), and psychological well-being (Ryff, 1989)
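    The K-means step used to separate participants can be written compactly. This is a generic two-cluster K-means on two invented self-regulation scores per participant, not the study's actual feature set or data.

```python
import random

random.seed(1)

# Hypothetical participants: (pre-failure efficacy, post-failure efficacy).
data = [(0.90, 0.85), (0.88, 0.90), (0.92, 0.80),   # stable after failure
        (0.40, 0.35), (0.35, 0.30), (0.45, 0.38)]   # performance drops

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)               # init from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest center (squared Euclidean distance).
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(data, 2)
print(sorted(len(c) for c in clusters))
```

    With well-separated groups like these, the algorithm recovers the stable/susceptible split regardless of which two points seed the centers; between-cluster differences on the explicit measures would then be tested on this partition.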

  17. Osteoid osteoma of the spine: a novel technique using combined computer-assisted and gamma probe-guided high-speed intralesional drill excision

    NARCIS (Netherlands)

    van Royen, B.J.; Baayen, J.C.; Pijpers, R.; Noske, D.P.; Schakenraad, D.; Wuisman, P.I.J.M.


    Study Design. A report of five cases of thoracolumbar osteoid osteoma treated with combined computer-assisted and γ probe-guided high-speed drill excision. Objectives. To document the surgical technique consisting of a combination of both computer-assisted and γ probe-guided high-speed drill

  18. Comparison of equilibrium and non-equilibrium distribution coefficients for the human drug carbamazepine (United States)

    The distribution coefficient (KD) for the human drug carbamazepine was measured using a non-equilibrium technique. Repacked soil columns were prepared using an Airport silt loam (Typic Natrustalf) with an average organic matter content of 2.45%. Carbamazepine solutions were then leached through th...
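    For reference, the bookkeeping behind a distribution coefficient is straightforward in the classical batch setting (the study above used a non-equilibrium column method instead; only the organic matter content below comes from the abstract, every other number is invented).

```python
# Batch-equilibrium KD sketch. Sorbed concentration is inferred from the
# drop in solution concentration; all concentrations/masses are illustrative.
C0 = 2.00      # initial solution concentration (mg/L)
Cw = 1.25      # equilibrium solution concentration (mg/L)
V = 0.030      # solution volume (L)
m = 0.010      # soil mass (kg)

Cs = (C0 - Cw) * V / m        # sorbed concentration (mg/kg)
KD = Cs / Cw                  # distribution coefficient (L/kg)

OM = 0.0245                   # organic matter fraction from the abstract (2.45%)
foc = OM / 1.724              # common OM-to-organic-carbon conversion factor
Koc = KD / foc                # organic-carbon-normalized coefficient (L/kg)

print(round(KD, 2), round(Koc, 1))
```

    Normalizing by organic carbon (Koc) is what lets sorption of a neutral organic like carbamazepine be compared across soils with different organic matter contents.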


    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...


    CERN Multimedia

    I. Fisk


    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...